
Super-resolution imaging

Super-resolution imaging encompasses a class of techniques, including optical and computational methods, that enhance the resolution of imaging systems beyond conventional limits such as the diffraction barrier in light microscopy. Super-resolution microscopy (SRM), a prominent subset, consists of fluorescence-based optical imaging techniques that overcome the diffraction limit of conventional light microscopy, enabling visualization of biological structures at length scales below approximately 200 nanometres for visible wavelengths. This limit, established by Ernst Abbe in 1873, arises from the wave nature of light, which causes point sources to blur into diffraction patterns rather than sharp points. SRM methods achieve nanoscale resolution (often 20–50 nanometres or better) by exploiting nonlinear optical effects, stochastic fluorophore activation, or structured illumination to bypass these physical constraints, while computational approaches reconstruct higher resolution from multiple low-resolution images or enhance single frames using algorithms and machine learning. The development of optical SRM was recognized by the 2014 Nobel Prize in Chemistry, awarded jointly to Eric Betzig, Stefan W. Hell, and William E. Moerner for their foundational contributions to super-resolved fluorescence microscopy. Hell's stimulated emission depletion (STED) microscopy, proposed in 1994, uses a secondary laser beam to deplete emission outside a tiny focal spot, effectively shrinking the excitation volume to nanometre scales. Independently, Betzig and Moerner advanced single-molecule localization techniques, such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), which involve activating and localizing sparse subsets of fluorophores over multiple cycles to reconstruct high-resolution images from precise positional data. Another key approach, structured illumination microscopy (SIM), enhances resolution by illuminating samples with patterned light and computationally reconstructing finer details, effectively doubling the resolution of widefield microscopy.
These techniques have transformed biological research by revealing dynamic cellular processes invisible to traditional microscopy, such as protein organization in synapses, mitochondrial dynamics, and viral assembly. SRM is now routinely applied in live-cell imaging, three-dimensional reconstructions, and multi-color labeling, with resolutions approaching 1 nanometre in advanced variants like MINFLUX. Ongoing innovations, including expansion microscopy, continue to expand SRM's accessibility and utility across cell biology, neuroscience, and medicine, while computational super-resolution extends these capabilities to diverse fields such as astronomy and consumer imaging.

Fundamentals

The Diffraction Limit in Imaging

The diffraction limit represents the fundamental physical constraint on the resolution of optical imaging systems, defining the smallest resolvable feature size due to the wave nature of light. In conventional light microscopy, this limit arises from the diffraction of light waves as they pass through the objective lens, preventing the formation of sharply focused images for structures smaller than a certain scale. Ernst Abbe first formulated this concept in 1873, establishing that the resolution is governed by the wavelength of light and the optical system's ability to collect diffracted rays from the specimen. Abbe's diffraction limit is quantitatively expressed by the equation for lateral resolution: d = \frac{\lambda}{2 \, \mathrm{NA}} where d is the minimum resolvable distance, \lambda is the wavelength of the illuminating light, and \mathrm{NA} is the numerical aperture of the objective lens, defined as n \sin \alpha with n as the refractive index of the medium and \alpha as the half-angle of the maximum cone of light accepted by the lens. This formula emerges from Fourier optics, where image formation is analyzed as the filtering of spatial frequencies in the Fourier domain. The specimen scatters light into a range of diffraction orders, each corresponding to a spatial frequency; the objective lens acts as a low-pass filter, capturing only frequencies up to a cutoff determined by the NA, beyond which higher frequencies (finer details) are lost, leading to the resolution limit. Abbe derived this by considering the periodic structure of specimens and the interference of diffracted waves, showing that resolving a grating requires at least the zeroth and first diffraction orders to be collected. In light microscopy, this limit typically yields a lateral resolution of approximately 200 nm for visible wavelengths around 500 nm and high-NA objectives (NA ≈ 1.4), while axial resolution is poorer at about 500 nm due to the equation d_z = \frac{2\lambda}{\mathrm{NA}^2}.
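As a concrete reading of these two formulas, the following sketch evaluates both limits for typical visible-light parameters (the function names are illustrative, not a standard API):

```python
# Hedged sketch: Abbe's lateral and axial resolution limits for a
# conventional light microscope; wavelength in nm, NA dimensionless.

def lateral_resolution(wavelength_nm: float, na: float) -> float:
    """Abbe lateral limit d = lambda / (2 * NA)."""
    return wavelength_nm / (2.0 * na)

def axial_resolution(wavelength_nm: float, na: float) -> float:
    """Axial limit d_z = 2 * lambda / NA^2."""
    return 2.0 * wavelength_nm / na**2

# Green light (500 nm) with a high-NA oil objective (NA = 1.4):
d_xy = lateral_resolution(500, 1.4)  # ~179 nm laterally
d_z = axial_resolution(500, 1.4)     # ~510 nm axially
```

The numbers reproduce the ~200 nm lateral and ~500 nm axial figures quoted above for visible wavelengths and high-NA objectives.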
For electron microscopy, the shorter de Broglie wavelength of electrons (e.g., λ ≈ 0.0037 nm at 100 keV acceleration voltage) pushes the theoretical diffraction limit to around 0.2–0.3 nm, though practical resolutions are often limited by lens aberrations rather than diffraction alone. These scales highlight the diffraction barrier's role across imaging modalities. The impact of the limit manifests as blurring or merging of sub-wavelength features in images, as light from adjacent points overlaps in the point spread function (PSF), typically an Airy pattern with a central maximum width set by the limit. Structures smaller than d cannot be distinguished because their diffraction patterns overlap, reducing contrast and preventing localization of fine details like molecular assemblies or nanostructures. This blurring effect scales with wavelength and inversely with numerical aperture, underscoring why shorter wavelengths or higher apertures improve resolution up to the physical maximum.

Motivation and Historical Development

The primary motivation for super-resolution imaging stems from the need to visualize subcellular structures, such as proteins, organelles, and molecular complexes, at scales below the approximately 200 nm diffraction limit of conventional light microscopy. This limitation has long hindered the study of dynamic biological processes in intact cells, where electron microscopy provides high resolution but lacks specificity for live imaging or molecular labeling. Super-resolution techniques bridge this gap, enabling the observation of nanoscale events like synaptic vesicle dynamics or protein clustering in living systems, thus advancing fields such as neuroscience and cell biology. Technological drivers have been crucial, with advances in fluorescence labeling—such as photoactivatable proteins and organic dyes that allow precise control of emission—and highly sensitive detectors like electron-multiplying charge-coupled devices (EMCCDs) and scientific complementary metal-oxide-semiconductor (sCMOS) cameras facilitating single-molecule detection and higher signal-to-noise ratios. These innovations, building on earlier developments like green fluorescent protein (GFP) tagging in the 1990s, have enabled the transition from broad-field illumination to targeted, high-precision imaging. Historically, the field evolved from precursors in the 1970s and 1980s, including confocal microscopy, which improved axial resolution through pinhole-based optical sectioning but remained bound by the diffraction limit laterally. Deconvolution methods, introduced in the early 1980s, computationally removed out-of-focus blur to enhance contrast, laying groundwork for post-processing super-resolution. Near-field scanning optical microscopy (NSOM), which Eric Betzig helped pioneer beginning in 1986, achieved sub-wavelength resolution by scanning apertures close to the sample, though it remained limited to surfaces. The 2000s marked a pivotal shift to practical far-field methods, with Stefan Hell demonstrating stimulated emission depletion (STED) in 2000, William E.
Moerner advancing single-molecule spectroscopy, and Betzig developing photoactivated localization microscopy (PALM) in 2006, followed by stochastic optical reconstruction microscopy (STORM). These breakthroughs, recognized by the 2014 Nobel Prize in Chemistry awarded to Betzig, Hell, and Moerner, transformed theoretical concepts into widely adopted tools for biological research.

Principles of Super-Resolution

Optical and Physical Principles

Super-resolution imaging techniques exploit specific optical and physical principles to overcome the diffraction limit of conventional far-field microscopy, primarily through controlled interactions between light and matter that enable the encoding and retrieval of sub-wavelength spatial information. These methods manipulate the propagation, excitation, or emission of light to access higher spatial frequencies or localize emission more precisely than allowed by the linear response of fluorophores under uniform illumination. The diffraction limit, arising from the wave nature of light, confines far-field resolution to approximately half the wavelength of light divided by the numerical aperture (λ/(2NA)), but optical super-resolution circumvents this by introducing nonlinearity or structured fields that effectively shrink the point spread function (PSF) or shift information into the detectable passband. Nonlinear optical effects form a cornerstone of many super-resolution approaches, particularly through the saturation or depletion of fluorophore excited states, which allows selective emission from smaller regions than the excitation PSF would permit. In stimulated emission depletion (STED) microscopy, for instance, a high-intensity depletion beam shaped as a doughnut suppresses fluorescence in the periphery of the excitation spot by driving fluorophores back to the ground state via stimulated emission before they can fluoresce spontaneously; this nonlinear dependence on intensity ensures that only the central region contributes to the image. The resolution improvement arises from the exponential decay of the excited-state population with depletion intensity, enabling effective PSF narrowing. 
The lateral resolution d in STED is given by d \approx \frac{\lambda}{2 \mathrm{NA} \sqrt{1 + I / I_\mathrm{sat}}}, where I is the depletion intensity at the doughnut crest and I_\mathrm{sat} is the saturation intensity required for 50% depletion; for I \gg I_\mathrm{sat}, this approximates to a factor of \sqrt{I_\mathrm{sat} / I} times the diffraction-limited resolution. Spatial frequency shifting, another key principle, extends the observable Fourier spectrum by using patterned illumination to mix high-frequency sample information with lower-frequency illumination patterns, thereby translating sub-diffraction details into the passband of the optical transfer function (OTF). In structured illumination microscopy (SIM), sinusoidal or grating-based patterns are projected onto the sample, creating moiré fringes that encode higher spatial frequencies; demodulation of multiple shifted images reconstructs an expanded frequency content, effectively doubling the resolution in linear implementations. This process relies on the convolution of the sample's spectrum with the illumination pattern's frequencies, allowing recovery of components up to twice the cutoff frequency of the system. Nonlinear variants of SIM further amplify this by exploiting higher-order harmonics from saturated excitation, accessing even greater frequency shifts. Point spread function (PSF) engineering modifies the shape or profile of the excitation or emission PSF to enhance localization or suppress sidelobes, thereby improving the effective resolution without altering the fundamental diffraction barrier directly. Techniques such as phase-modulated wavefronts or adaptive optics tailor the PSF to confine emission to sub-wavelength volumes, as seen in depletion methods where the depletion beam's zero-intensity core shrinks the fluorescent region. This engineering expands the spatial frequency support of the imaging system, enabling sharper contrast and reduced crosstalk between adjacent features. 
In emission-based approaches, engineered PSFs with elongated or rotated profiles aid in precise 3D localization by encoding depth information. Distinctions between near-field and far-field regimes highlight how evanescent waves in near-field methods enable sub-wavelength probing by avoiding the losses that filter out high spatial frequencies. In far-field imaging, only propagating light reaches the detector, because evanescent waves decay rapidly beyond the near zone (typically < λ/2); techniques like STED or SIM nonetheless operate entirely in the far field by manipulating propagating light. Near-field approaches, such as near-field scanning optical microscopy (NSOM), exploit evanescent fields generated at sub-wavelength apertures or interfaces (e.g., total internal reflection fluorescence, TIRF) to illuminate and collect signals from nanoscale volumes directly adjacent to the probe, achieving resolutions down to 10-20 nm by coupling to non-radiative fields that carry high-k vectors. This contrasts with far-field methods, where resolution gains stem from reversible fluorophore control rather than physical proximity.

Computational and Information-Theoretic Principles

The Shannon-Nyquist sampling theorem establishes that a continuous signal can be perfectly reconstructed from its samples if it is sampled at a rate at least twice its highest frequency component, setting a fundamental limit on resolution in imaging systems limited by the diffraction of light. In super-resolution imaging, this limit is extended by leveraging sparsity assumptions, where the underlying signal or object is assumed to be sparse in a certain transform domain, such as wavelets or total variation, allowing recovery of fine details from sub-Nyquist samples via compressive sensing techniques. This approach breaks the classical barrier by exploiting prior knowledge of sparsity, enabling stable super-resolution under noise, as demonstrated in mathematical frameworks showing exact recovery guarantees for sparse measures separated by more than the Rayleigh length. Bayesian frameworks provide a probabilistic foundation for super-resolution by formulating the problem as posterior estimation of a high-resolution image given low-resolution observations, incorporating priors to regularize the ill-posed inverse problem. In these models, the likelihood captures the imaging process, including blur and downsampling, while priors such as Gaussian processes or total variation enforce smoothness and edge preservation, respectively; for instance, total variation priors minimize the L1 norm of image gradients to promote piecewise constant structures. Seminal work has shown that marginalizing over nuisance parameters like motion yields robust estimates, with variational approximations or Markov chain Monte Carlo enabling efficient inference. Information-theoretic limits in super-resolution are quantified by bounds like the Cramér-Rao lower bound (CRLB) for localization precision, which sets the minimum variance achievable for estimating emitter positions in localization microscopy.
For a point source imaged through a diffraction-limited system, the CRLB is given by \sigma \geq \frac{\lambda}{2\pi \, \mathrm{NA} \, \sqrt{N}}, where \lambda is the wavelength, NA is the numerical aperture, and N is the number of detected photons, highlighting that precision scales with photon count and optical parameters but is fundamentally constrained without additional structure. Achieving this bound requires maximum-likelihood estimators that efficiently use all available information, underscoring the role of statistical efficiency in pushing beyond optical limits. Redundancy across multiple frames, such as sub-pixel shifts from motion or temporal sequences, plays a crucial role in super-resolution by providing complementary information that fills in high-frequency details lost in individual low-resolution images. In multi-frame setups, this redundancy allows fusion of angular, temporal, or spatial data to reconstruct sub-pixel structures, effectively increasing the effective sampling rate and enabling resolutions finer than single-frame capabilities. Early demonstrations showed that even small inter-frame displacements can yield significant gains when aligned and fused under motion models. Unlike deconvolution, which reverses known blur within the support of the point spread function to restore frequencies up to the diffraction limit without introducing new information, super-resolution predicts and synthesizes unseen high-frequency details using priors or multi-view redundancy, potentially exceeding optical constraints. This distinction allows super-resolution to generate plausible structures beyond mere sharpening, though it risks artifacts if priors mismatch the scene.
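The photon-count scaling of this bound can be made concrete with a short sketch (illustrative Python; the function name is an assumption, not a standard API, and the bound ignores background and pixelation):

```python
import math

# Hedged sketch: CRLB sigma >= lambda / (2 * pi * NA * sqrt(N)) for
# single-emitter localization precision in the photon-limited case.
def localization_bound(wavelength_nm: float, na: float, photons: int) -> float:
    return wavelength_nm / (2.0 * math.pi * na * math.sqrt(photons))

# 1000 detected photons at 600 nm with NA 1.4: roughly 2 nm precision.
sigma_1k = localization_bound(600, 1.4, 1000)
# Quadrupling the photon budget halves the bound (1 / sqrt(N) scaling).
sigma_4k = localization_bound(600, 1.4, 4000)
```

The 1/√N dependence is why localization microscopy favors bright, photostable fluorophores: precision improves only with the square root of collected photons.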

Optical Super-Resolution Techniques

Structured Illumination Microscopy

Structured illumination microscopy (SIM) is a wide-field super-resolution technique that overcomes the diffraction limit by using patterned illumination to encode high spatial frequency information from the sample into the passband of the microscope's optical transfer function. Introduced by Mats G. L. Gustafsson, the method relies on illuminating the specimen with sinusoidal interference patterns oriented at multiple angles, which generate moiré effects that shift otherwise inaccessible high-frequency components to detectable lower frequencies. This allows reconstruction of images with enhanced resolution through computational demodulation of the acquired data. The mechanism involves projecting coherent light beams to create fine-striped patterns on the sample, typically using two or three mutually coherent, phase-controlled beams to form the fringes. By acquiring images at several phase shifts (three for 2D SIM, five for 3D SIM) and pattern orientations (three for isotropic coverage), the system captures 9 to 15 raw frames per reconstructed plane. These images are processed in the Fourier domain, where the shifted frequency components—arising from the interaction of the illumination pattern and sample structure—are separated, remapped to their original positions, and combined using filters like a generalized Wiener deconvolution to suppress noise and artifacts. This Fourier stitching effectively doubles the observable spatial frequency range compared to uniform illumination. In standard linear SIM, the resolution gain is approximately twofold, yielding lateral resolutions of about 100 nm and axial resolutions around 280 nm in fluorescence microscopy, assuming a typical 488 nm excitation wavelength and high numerical aperture objectives.
Nonlinear variants exploit higher-order harmonics generated by nonlinear fluorophore responses, such as excited-state saturation or multi-photon absorption, to extend the frequency shift further; for instance, the foundational nonlinear SIM achieved point resolutions below 50 nm, with some implementations reaching up to eightfold improvement over diffraction-limited imaging. Implementation typically employs a diffraction grating or, in modern setups, a spatial light modulator (SLM) to generate and precisely control the illumination patterns without mechanical movement, enabling rapid switching between phases and orientations. The computational reconstruction, performed offline or in real time with optimized algorithms, is essential for extracting the super-resolved information. SIM offers advantages in speed—acquiring full-frame data in seconds—and compatibility with live-cell imaging due to its gentle wide-field exposure, but it is constrained by the twofold limit in linear mode, sensitivity to sample drift during acquisition, and increased photobleaching in nonlinear modes requiring intense illumination. Gustafsson's 2008 work established practical three-dimensional linear SIM, building on his earlier two-dimensional implementations, to enable axial resolution enhancement without the missing-cone artifacts of conventional widefield imaging.
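The moiré frequency-shifting at the heart of SIM can be demonstrated numerically in one dimension: multiplying a sample frequency by a patterned illumination creates a beat component at the difference frequency, which falls inside a diffraction-limited passband even when the sample frequency does not. A minimal NumPy sketch (illustrative parameter values, not a full SIM reconstruction):

```python
import numpy as np

# 1-D sketch of the moire principle: a sample frequency f_s mixed with
# an illumination pattern at f_p produces a beat at |f_s - f_p|.
n = 1024
x = np.arange(n) / n
f_sample, f_pattern = 200.0, 180.0              # cycles per field of view
sample = np.cos(2 * np.pi * f_sample * x)
pattern = 1.0 + np.cos(2 * np.pi * f_pattern * x)  # striped illumination
moire = sample * pattern                         # what the camera records

spectrum = np.abs(np.fft.rfft(moire))
# Strongest low-frequency component sits at |f_s - f_p| = 20 cycles,
# well below the original 200-cycle sample frequency.
beat = int(np.argmax(spectrum[:100]))
```

Demodulating several such images at shifted pattern phases lets the reconstruction disentangle the beat from the unshifted content and remap it to its true frequency, which is the Fourier-stitching step described above.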

Depletion and Reversible Methods

Depletion and reversible methods achieve super-resolution by exploiting reversible photochemical or photophysical transitions in fluorophores to confine emission to sub-diffraction volumes. In stimulated emission depletion (STED) microscopy, a doughnut-shaped depletion beam overlaps with the excitation spot to silence fluorophores in the periphery, allowing only those at the center to fluoresce. This deterministic approach enables scanned-point imaging with resolutions far below the diffraction limit. The STED setup employs a dual-beam configuration: an excitation laser illuminates the sample, while a higher-intensity depletion laser, typically red-shifted to match the fluorophore's stimulated emission wavelength, is shaped into a doughnut profile using a phase mask such as a spiral or vortex plate. This mask introduces a helical phase ramp to the depletion beam, creating destructive interference at the center and a ring of high intensity around it. The beams are combined via a dichroic mirror and scanned across the sample using galvanometric mirrors or resonant scanners. Depletion can occur via stimulated emission, where excited fluorophores are forced back to the ground state without fluorescence, or ground-state depletion, which switches fluorophores to a non-fluorescent dark state. The effective resolution d in STED is given by d = \frac{d_0}{\sqrt{1 + \frac{I}{I_\mathrm{sat}}}}, where d_0 is the diffraction-limited spot size, I is the depletion beam intensity at the periphery, and I_\mathrm{sat} is the saturation intensity required to deplete half the fluorophores. This formula arises from the nonlinear dependence of depletion efficiency on intensity, enabling resolutions of 20–50 nm laterally by increasing I relative to I_\mathrm{sat}, though higher powers risk photobleaching. STED was proposed by Stefan W. Hell and Jan Wichmann in 1994 as a far-field method to overcome the diffraction barrier through targeted depletion. 
The first experimental demonstration, achieving ~100 nm resolution on beads, came in 2000 from Hell's group using pulsed lasers for excitation and depletion. RESOLFT (reversible saturable optical fluorescence transitions) extends STED principles to lower light intensities by using reversible switching mechanisms, such as photoswitchable proteins or organic dyes, to deplete fluorophores without high-power lasers. This allows gentler illumination suitable for live-cell imaging, with resolutions approaching 50–100 nm while minimizing phototoxicity. Early demonstrations used reversibly switchable fluorescent proteins to create sub-diffraction emission shells. Developments include 3D variants, such as those employing dual depletion patterns—a lateral doughnut and an axial bottle beam—to achieve isotropic nanoscale resolution (~60 nm) in thick samples. A compact 3D STED implementation in 2007 demonstrated live imaging of synaptic vesicles with ~70 nm isotropic resolution using continuous-wave lasers.
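A quick numerical reading of the STED resolution formula above (illustrative Python; the parameter values are hypothetical, chosen only to show the square-root scaling):

```python
import math

# Hedged sketch of d = d_0 / sqrt(1 + I / I_sat), where
# depletion_ratio = I / I_sat; all values are illustrative.
def sted_resolution(d0_nm: float, depletion_ratio: float) -> float:
    """Effective STED spot size for a given depletion ratio."""
    return d0_nm / math.sqrt(1.0 + depletion_ratio)

d0 = 220.0                             # diffraction-limited spot, nm
d_none = sted_resolution(d0, 0)        # no depletion: unchanged, 220 nm
d_strong = sted_resolution(d0, 100)    # I = 100 * I_sat: ~22 nm
```

The square-root dependence means each further twofold resolution gain costs roughly a fourfold increase in depletion intensity, which is why photobleaching becomes the practical limit at high powers.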

Localization Microscopy

Localization microscopy, a cornerstone of super-resolution fluorescence imaging, operates by exploiting the stochastic blinking or activation of individual fluorescent probes to enable precise determination of their positions. In this approach, photoactivatable or photoswitchable fluorophores are used such that only a sparse subset—typically fewer than one molecule per diffraction-limited spot—emits light in each imaging frame, preventing overlap of their point spread functions (PSFs). The position of each emitter is then calculated by fitting a model PSF to the observed image, yielding localization precisions as fine as ~1 nm, depending on factors like photon count and background noise. By repeating this process over thousands of frames and reconstructing a composite image from the accumulated localizations, structural resolutions of 10–20 nm are routinely achieved, far surpassing the diffraction limit of conventional light microscopy. Two seminal techniques define the field: photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM). PALM, developed by Betzig and colleagues in 2006, relies on genetically encoded photoactivatable fluorescent proteins that are irreversibly activated by light and subsequently photobleached after emission, allowing sequential imaging of distinct molecular subsets. In contrast, STORM, introduced independently by Rust, Bates, and Zhuang in the same year, uses synthetic organic dyes capable of reversible photoswitching, where fluorophores cycle between fluorescent ("on") and dark ("off") states multiple times, enabling denser labeling and repeated imaging of the same structures. Both methods demand careful control of activation density to maintain sparsity, but STORM's reversibility often allows for higher labeling efficiency with conventional antibodies. Extensions to three-dimensional imaging have enhanced the utility of localization microscopy. 
By modifying the microscope's optics to alter the PSF shape—such as introducing astigmatism, which elongates the PSF differently along the optical axis, or employing a double-helix PSF that rotates into two lobes as a function of defocus—axial localization becomes feasible with precisions around 50 nm over ranges of several micrometers. These 3D variants have enabled volumetric reconstructions of cellular structures like organelles and protein clusters. Despite their precision, localization methods face practical limitations. Imaging is inherently non-real-time, as it requires acquiring and processing extensive time-lapse sequences, often spanning minutes to hours, followed by computational reconstruction. Furthermore, STORM typically necessitates specialized imaging buffers containing reducing agents (e.g., mercaptoethylamine) and oxygen scavengers to induce and stabilize the reversible blinking of organic dyes, while PALM demands photoactivatable probes that may not be compatible with all biological targets. These requirements can complicate live-cell applications and sample preparation.
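The per-molecule localization step can be sketched as follows (illustrative NumPy; a simple intensity centroid stands in for the maximum-likelihood Gaussian fit used in practice, and all parameter values are hypothetical):

```python
import numpy as np

# Hedged sketch: localize a single emitter's diffraction-limited spot
# to sub-pixel precision. A centroid is used here for simplicity; real
# pipelines fit a Gaussian PSF model by least squares or MLE.
size, sigma = 15, 2.0               # image patch (pixels); PSF width
true_x, true_y = 7.3, 6.8           # sub-pixel ground-truth position
yy, xx = np.mgrid[0:size, 0:size]
psf = np.exp(-((xx - true_x)**2 + (yy - true_y)**2) / (2 * sigma**2))

# Intensity-weighted centroid (center of mass) of the spot.
total = psf.sum()
est_x = (xx * psf).sum() / total
est_y = (yy * psf).sum() / total
# On this noiseless spot the centroid recovers the position to well
# under a tenth of a pixel, far below the PSF width.
```

With photon noise and background the achievable precision degrades toward the CRLB discussed earlier, which is why Gaussian or full-PSF fitting is preferred over plain centroids in practice.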

Expansion and Near-Field Methods

Expansion microscopy (ExM) achieves super-resolution by physically enlarging biological specimens through the embedding of swellable hydrogel networks, allowing imaging with conventional diffraction-limited microscopes to yield effective resolutions of approximately 60-70 nm. Introduced in 2015 by Edward Boyden and colleagues, the method involves fixing samples, anchoring fluorescent labels to a polymer scaffold composed of sodium acrylate, acrylamide, and N,N'-methylenebisacrylamide, digesting proteins to reduce structural rigidity, and then swelling the gel isotropically via water dialysis, typically achieving a 4.5-fold linear expansion. This physical magnification effectively scales down the diffraction limit, enabling nanoscale visualization of structures like microtubules in cultured cells and neural circuits in mouse brain tissue without requiring specialized optics. A variant, protein retention ExM (proExM), simplifies the process by directly anchoring endogenous proteins to the hydrogel rather than relying on pre-labeled antibodies or oligonucleotides, preserving native labeling from standard fluorescent proteins or immunostaining protocols. Developed in 2016 by the Boyden group, proExM supports multicolor imaging with an expansion factor of about 4-fold and effective resolutions around 70 nm, making it compatible with fixed mammalian cells and tissues for applications such as subcellular protein localization. Both ExM and proExM are particularly suited to fixed samples, where isotropic expansion minimizes distortion, and they can be iteratively combined with other super-resolution techniques to further enhance resolution beyond 50 nm in some implementations. 
Near-field methods, such as near-field scanning optical microscopy (NSOM), also called scanning near-field optical microscopy (SNOM), bypass the far-field diffraction limit by positioning a nanoscale probe in close proximity (typically <10 nm) to the sample, allowing interaction with evanescent waves that decay rapidly beyond the near field. Pioneered in the 1980s and refined into practical instruments in the early 1990s, these techniques use a tapered optical fiber or aperture probe with openings as small as 50 nm to illuminate or collect light, achieving spatial resolutions of 10-100 nm depending on aperture size and probe-sample distance, independent of the illumination wavelength. Variants include illumination mode, where the probe delivers light to excite the sample, and collection mode, where it gathers emitted fluorescence; apertureless configurations employ scattering tips integrated with atomic force microscopy for even higher resolution. The core principle—sub-wavelength proximity—enables direct probing of surface features, such as lipid domains or protein clusters in fixed biological membranes, with applications in materials science for nanoscale topography and spectroscopy.
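The resolution arithmetic of expansion microscopy is simple to state (illustrative Python; the calculation assumes perfectly isotropic expansion, and the function name is hypothetical):

```python
# Hedged sketch: physical expansion scales the effective resolution
# down by the linear expansion factor, because features are imaged
# post-expansion and mapped back to pre-expansion coordinates.
def effective_resolution(optical_limit_nm: float, expansion_factor: float) -> float:
    return optical_limit_nm / expansion_factor

# ~300 nm confocal resolution with the ~4.5x expansion of the original
# ExM protocol gives roughly 65-70 nm effective resolution.
res = effective_resolution(300, 4.5)  # ~67 nm
```

This is why iterative expansion, or combining expansion with an optical super-resolution readout, multiplies the gains: each factor divides the effective limit again.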

Computational Super-Resolution Techniques

Multi-Frame Processing Methods

Multi-frame processing methods in super-resolution imaging involve the fusion of multiple low-resolution images of the same scene, captured under slightly different conditions such as sub-pixel shifts or varying exposures, to reconstruct a single high-resolution image. These techniques exploit temporal redundancy across frames to overcome the limitations of individual low-resolution observations, including aliasing and noise, by aligning and combining the data. Originating in the image processing literature of the 1980s and 1990s, multi-frame super-resolution (SR) builds on early work in multiframe image restoration and registration, where sequences of undersampled frames are used to recover higher-frequency details lost in single-frame capture. A critical initial step in multi-frame SR is image registration, which aligns the low-resolution frames to a common coordinate system to account for relative motions or shifts between captures. Registration algorithms typically fall into two categories: feature-based methods, which detect and match distinctive keypoints such as edges or corners across frames using descriptors like scale-invariant feature transform (SIFT), and intensity-based methods, which optimize alignment by maximizing cross-correlation of pixel intensities between frames. Feature-based approaches are robust to moderate noise and illumination changes but can fail in textureless regions, while intensity correlation excels in uniform scenes by computing similarity metrics like normalized cross-correlation. Accurate sub-pixel registration is essential, as misalignment can introduce artifacts that degrade the final resolution. Once aligned, the frames are fused using reconstruction algorithms to estimate the high-resolution image. 
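As a sketch of the intensity-based registration described above, phase correlation recovers the shift between two frames from the phase of their normalized cross-power spectrum (illustrative NumPy; real pipelines refine this peak to sub-pixel accuracy):

```python
import numpy as np

# Hedged sketch of phase-correlation registration: the cross-power
# spectrum of two shifted frames carries a phase ramp whose inverse
# FFT peaks at the relative displacement.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(3, 5), axis=(0, 1))   # known (dy, dx) shift

F1, F2 = np.fft.fft2(frame), np.fft.fft2(shifted)
cross_power = F2 * np.conj(F1)
cross_power /= np.abs(cross_power) + 1e-12            # keep phase only
corr = np.fft.ifft2(cross_power).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)  # peak at (3, 5)
```

Because only the phase is retained, the correlation peak is sharp even under global illumination changes, one reason phase correlation is a popular intensity-based registration choice.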
A foundational non-iterative method is shift-and-add, which upsamples each low-resolution frame via interpolation, shifts it according to the registration parameters, and sums the contributions pixel-wise; this simple approach, originally developed for astronomical speckle imaging in the late 1980s and early 1990s, effectively averages out noise while preserving high-frequency details from sub-pixel shifts. For more sophisticated fusion, iterative back-projection techniques refine the estimate by projecting the observed low-resolution frames onto a high-resolution grid and iteratively minimizing the reconstruction error through back-projection of discrepancies; introduced in seminal work on multiframe restoration, this method models the imaging process including blur and decimation to achieve convergence on a super-resolved output. In video-based SR, these methods process sequences from handheld or mounted cameras, leveraging natural motion for diverse sampling. Multi-exposure strategies within multi-frame SR further enhance performance by capturing frames at different integration times to mitigate noise, particularly in low-light scenarios. Averaging aligned multi-exposure frames boosts the signal-to-noise ratio (SNR) by a factor of approximately \sqrt{M}, where M is the number of frames, enabling the recovery of finer details that would otherwise be obscured by photon or read noise. This noise reduction complements the spatial fusion, as higher SNR facilitates better estimation of high-frequency components during reconstruction. Theoretically, multi-frame SR can yield a linear resolution gain of up to \sqrt{M} under ideal conditions with diverse sub-pixel shifts and sufficient frame diversity, translating to practical improvements of 2-4 times in applications like video enhancement; for instance, using 8-16 frames often achieves 2x-4x upsampling without excessive artifacts. 
In astronomy, multi-frame SR via shift-and-add has been applied to telescope sequences to resolve fine stellar structures beyond atmospheric seeing limits, as demonstrated in Hubble Space Telescope image processing. Similarly, in microscopy, computational multi-frame fusion reconstructs super-resolved views from lensless or wide-field fluorescence sequences, improving resolution in biological samples by combining shifted raw holograms or scan frames. These methods, while computationally efficient for moderate frame counts, rely on accurate motion models and can be sensitive to outliers like occlusions.

Single-Frame Deblurring and Enhancement

Single-frame deblurring and enhancement techniques aim to reconstruct a higher-resolution image from a single low-resolution, blurred input by inverting the degradation process through optimization frameworks that incorporate prior knowledge about image structure. These methods model the observed low-resolution image y as the convolution of the high-resolution image x with a blur operator H (typically representing the point spread function), plus noise: y = Hx + n. The core approach solves the ill-posed inverse problem via regularized least-squares minimization: \hat{x} = \arg\min_x \| y - Hx \|^2 + \lambda R(x), where R(x) is a regularization term enforcing priors such as smoothness or sparsity, and \lambda balances data fidelity and regularization. Deconvolution variants form a foundational class of these techniques, extending classical iterative or frequency-domain filters with sparsity-inducing priors to mitigate noise amplification inherent in naive inversion. The Richardson-Lucy (RL) algorithm, an iterative maximum-likelihood estimator for the Poisson noise prevalent in photon-limited imaging, refines the estimate at each iteration by back-projecting the ratio of observed to convolved data, enabling resolution enhancement in single frames by sharpening blurred features while preserving photon statistics. Extensions incorporate sparsity, such as promoting sparse representations in wavelet or gradient domains during iterations, which stabilizes convergence and improves edge recovery in sparse-signal scenarios like microscopy. Similarly, the Wiener filter, a linear minimum mean-square error estimator in the frequency domain, deconvolves by dividing the blurred spectrum by the blur transfer function while suppressing noise via a signal-to-noise ratio term, yielding effective single-frame deblurring for known blur kernels. 
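The Wiener filter described above reduces to a few lines in the frequency domain; the sketch below assumes a known, shift-invariant blur and a scalar noise-to-signal ratio in place of a full spectral noise model:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener deconvolution for a known, shift-invariant
    blur. `nsr` is the assumed noise-to-signal power ratio, which damps
    frequencies where the blur transfer function is weak."""
    H = np.fft.fft2(psf, s=blurred.shape)        # blur transfer function
    G = np.fft.fft2(blurred)
    X = np.conj(H) / (np.abs(H) ** 2 + nsr) * G  # Wiener estimate
    return np.fft.ifft2(X).real
```

With `nsr = 0` and an invertible blur this reduces to naive inverse filtering; a positive `nsr` suppresses the noise amplification at frequencies where the transfer function is small, which is precisely the failure mode of naive inversion noted above.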
To address limitations like ringing artifacts in textured regions, sparsity-aware extensions apply the filter patch-wise with collaborative priors, such as block-matching 3D transforms that exploit non-local self-similarity to regularize sparse coefficients, enhancing resolution in natural images. Edge-directed interpolation methods surpass traditional bicubic interpolation—which smoothly upsamples via cubic polynomials but introduces blurring in high-frequency areas—by prioritizing edge preservation through total variation (TV) minimization. The TV regularizer R(x) = \| \nabla x \|_1 penalizes variations while allowing sharp discontinuities, solving the optimization to reconstruct edges with sub-pixel accuracy and reducing aliasing in single-frame super-resolution. Seminal work established TV as an edge-preserving prior for denoising and deblurring, later adapted for single-image upscaling where it achieves visually sharper results than bicubic at scales up to 2x, though at higher computational cost. For sparse scenes like particle imaging, sub-pixel localization enables super-resolution by fitting parametric models to diffraction-limited spots in a single frame. Centroid fitting computes the intensity-weighted center of mass within a spot's support, achieving localization precision beyond the pixel grid (often 1/10th pixel or better, depending on signal-to-noise ratio) for tracking diffusing particles without multi-frame alignment. This method, widely adopted in colloidal and biological studies, relies on Gaussian or radial symmetry assumptions to refine positions iteratively, supporting real-time analysis of dynamics at nanometer scales. These techniques find applications in real-time image enhancement, such as live-cell microscopy or video processing, where computational efficiency allows on-the-fly deblurring without additional acquisitions. 
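Centroid-based sub-pixel localization amounts to an intensity-weighted center of mass; a minimal sketch for an isolated, background-subtracted spot (background handling and iterative refinement are omitted):

```python
import numpy as np

def centroid_localize(spot):
    """Intensity-weighted center of mass of an isolated spot, returning
    sub-pixel (row, col) coordinates beyond the pixel grid."""
    ys, xs = np.indices(spot.shape)
    total = spot.sum()
    return (ys * spot).sum() / total, (xs * spot).sum() / total
```

In practice the spot is first segmented from a local window and its background subtracted, since any constant offset biases the centroid toward the window center.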
However, without strong priors, they are limited to modest upscaling factors of approximately 1.5-2x, as further magnification amplifies noise and violates information-theoretic bounds on recoverable detail from a single observation.

Deep Learning-Based Approaches

Deep learning-based approaches to super-resolution imaging leverage neural networks to learn mappings from low-resolution inputs to high-resolution outputs, surpassing traditional methods by exploiting data-driven priors rather than hand-crafted ones. These techniques emerged prominently in the mid-2010s, initially focusing on single-image super-resolution (SISR) for general imaging tasks before adapting to microscopy applications. By training on large datasets of paired low- and high-resolution images, convolutional neural networks (CNNs) and subsequent architectures reconstruct finer details, effectively breaking the diffraction limit in optical systems through computational enhancement. A seminal CNN architecture for SISR is the Super-Resolution Convolutional Neural Network (SRCNN), introduced in 2014, which employs three convolutional layers to directly map low-resolution images to high-resolution ones, achieving superior performance over sparse-coding methods with minimal training iterations. SRCNN uses mean squared error (MSE) loss on pixel-wise differences between predicted and ground-truth high-resolution images, enabling end-to-end learning that improves peak signal-to-noise ratio (PSNR) by 1-2 dB on standard benchmarks like Set5. Building on this, generative adversarial networks (GANs) enhanced perceptual quality; the Enhanced Super-Resolution GAN (ESRGAN) from 2018 incorporates relativistic adversarial training and perceptual loss derived from VGG network features, prioritizing realistic textures over pixel accuracy to reduce artifacts like blurring in upscaled images. ESRGAN outperformed prior GANs like SRGAN on perceptual metrics such as learned perceptual image patch similarity (LPIPS), winning challenges like PIRM2018-SR with visually sharper results at 4x upscaling. In microscopy, deep learning methods address noise and sparsity inherent to biological samples. 
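The PSNR figures quoted for SRCNN follow directly from the MSE training objective; a minimal sketch of the metric, assuming intensities normalized to a known peak value:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB between a ground-truth image and a
    super-resolved estimate. `peak` is the maximum possible pixel value
    (1.0 for normalized images, 255 for 8-bit)."""
    ref = np.asarray(reference, dtype=float)
    est = np.asarray(estimate, dtype=float)
    mse = np.mean((ref - est) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```

Because PSNR is a monotone function of MSE, minimizing the MSE loss directly maximizes PSNR, which is why MSE-trained networks score well on it while sometimes looking over-smoothed, motivating the perceptual and adversarial losses discussed above.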
The Content-Aware Image Restoration (CARE) framework, developed in 2018, uses U-Net-like CNNs trained on paired noisy/low-resolution fluorescence images to denoise and enhance resolution, effectively doubling axial and lateral resolution in live-cell imaging without additional hardware. CARE is trained with MSE-style losses on paired data, while related self-supervised schemes such as Noise2Void remove the need for clean targets; together these approaches enable 10-100x faster acquisition than traditional super-resolution techniques while preserving structural fidelity in applications like organelle tracking. Similarly, Deep-STORM, also from 2018, employs CNNs to predict localization maps from stochastically blinking fluorophores in single-molecule data, accelerating super-resolution reconstruction by predicting positions from raw frames rather than iterative fitting, achieving sub-10 nm precision at densities up to 10 times higher than classical methods like ThunderSTORM. Training these models typically involves paired low/high-resolution datasets generated via bicubic downsampling or experimental degradation, with MSE loss for fidelity and perceptual losses (e.g., from pre-trained VGG classifiers) to capture semantic details beyond pixel matching. In microscopy contexts, datasets often include simulated blinking patterns or real fluorescence stacks, allowing generalization to unseen samples. Recent advances from 2020-2025 integrate diffusion models, which iteratively denoise Gaussian noise to generate high-resolution images conditioned on low-resolution inputs; for instance, denoising diffusion probabilistic models (DDPMs) applied to microscopy achieve 4x spatial super-resolution with fewer artifacts than GANs, as shown in tutorials and implementations for fluorescence enhancement. 
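Paired training sets of the kind described above are commonly synthesized by degrading high-resolution images; the sketch below uses simple block-averaging as a stand-in for the bicubic downsampling normally used, to keep the example dependency-free:

```python
import numpy as np

def degrade(hr, factor=2):
    """Synthesize a low-resolution training input from a high-resolution
    target by block-averaging (a simple stand-in for bicubic
    downsampling when building paired SR training sets)."""
    h, w = hr.shape
    h, w = h - h % factor, w - w % factor       # crop to a multiple
    blocks = hr[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

def make_pairs(hr_images, factor=2):
    """Build (low-resolution input, high-resolution target) pairs."""
    return [(degrade(hr, factor), hr) for hr in hr_images]
```

Experimental degradation (real optics, real noise) generalizes better than any synthetic operator, which is why microscopy datasets often pair physically acquired low- and high-quality stacks instead.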
Hybrid optical-deep learning systems further amplify gains, combining coded illumination or sparse sampling with neural reconstruction to yield 5-10x improvements in effective resolution or speed; a 2021 approach using deep learning for dense localization in localization microscopy accelerated imaging by up to 10-fold while maintaining 20 nm accuracy in live cells. These methods excel at handling complex artifacts like Poisson noise in low-light microscopy, offering flexibility across modalities without recalibrating optics, but require substantial computational resources for training and inference, alongside risks of hallucinations—plausible but inaccurate details—from overfitting to limited datasets. Validation on diverse biological samples remains essential to mitigate domain shifts.

Applications

Biological and Medical Imaging

Super-resolution imaging has revolutionized biological and medical fields by enabling visualization of subcellular structures and dynamics at nanometer scales, surpassing the diffraction limit of conventional light microscopy. In biology, techniques such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) have been pivotal for synaptic imaging, resolving vesicle distributions with approximately 20 nm precision to map neurotransmitter release sites and synaptic clefts in neural tissues. These methods rely on single-molecule localization to reconstruct high-resolution images from sparse fluorophore activations, providing insights into synaptic plasticity and disorders like Alzheimer's disease. In live-cell imaging, stimulated emission depletion (STED) microscopy allows real-time tracking of membrane proteins, achieving resolutions of 50-100 nm to observe receptor diffusion and clustering on cell surfaces, provided illumination doses are managed to limit the phototoxicity associated with depletion beams. This has been essential for studying dynamic processes like endocytosis and immune signaling in living neurons and immune cells. Complementing these, expansion microscopy (ExM) expands biological samples isotropically by up to 20-fold using swellable hydrogels, enabling brain connectomics by resolving axonal projections and dendritic spines at 50-70 nm effective resolution with standard microscopes, thus facilitating large-scale mapping of neural circuits in mouse brains. Medically, super-resolution optical coherence tomography (SR-OCT) enhances retinal imaging by resolving photoreceptor layers and choroidal structures at 1-2 μm lateral resolution, aiding early detection of macular degeneration through non-invasive depth profiling. 
In oncology, artificial intelligence-driven super-resolution (AI-SR) reconstructs MRI and PET scans to delineate tumor margins with sub-millimeter accuracy, improving radiotherapy planning by distinguishing malignant from healthy tissue in brain and prostate cancers. These advancements yield profound impacts, including mechanistic insights into neurodegeneration via amyloid plaque mapping in Alzheimer's models and cancer progression through tumor microenvironment visualization, with super-resolution techniques revealing immune cell infiltration patterns at unprecedented detail. Ongoing clinical translation includes 2020s trials for super-resolution endoscopy, where computational super-resolution algorithms process video-rate images to achieve 2-5 μm resolution for gastrointestinal polyp detection, potentially increasing diagnostic sensitivity by 20-30% over standard endoscopes in multicenter studies. Overall, these applications underscore super-resolution's role in bridging molecular biology with clinical diagnostics, fostering targeted therapies for neurological and oncological conditions.

Materials Science and Nanotechnology

In materials science, super-resolution imaging techniques have enabled the characterization of nanomaterial structures at scales below the diffraction limit, revealing defects and properties critical for advanced applications. Stimulated emission depletion (STED) microscopy and near-field scanning optical microscopy (NSOM) have been particularly effective for imaging defects in graphene and quantum dots, achieving resolutions around 10 nm. For instance, NSOM has been used to map photoluminescence from GaAs quantum dot arrays, providing nanoscale optical contrast that highlights spatial variations in emission due to dot size and arrangement. Similarly, NSOM facilitates the study of exciton distributions in quantum dots, allowing visualization of carrier localization with sub-wavelength precision. These methods are essential for identifying topological defects in graphene, such as vacancies or edges, which influence electronic properties in two-dimensional materials. Surface analysis benefits from super-resolution Raman spectroscopy (SR-Raman), which probes molecular vibrations in polymers with enhanced spatial resolution. SR-Raman, often implemented via stimulated Raman scattering (SRS) combined with structured illumination or expansion techniques, resolves chemical distributions at the nanoscale, distinguishing vibrational modes associated with polymer chain conformations and additives. For example, expansion SRS microscopy has achieved super-resolution imaging of polymer blends, enabling the mapping of phase separation and molecular orientation without labels. This capability is vital for understanding interfacial interactions in polymer composites, where traditional Raman spectroscopy is limited by diffraction to micron scales. Three-dimensional super-resolution computed tomography (SR-CT) has advanced the quantification of porosity in composite materials, providing volumetric insights into void distributions that affect mechanical integrity. 
By applying deep learning-based super-resolution to synchrotron CT data, researchers have enhanced resolution in fiber-reinforced composites, accurately segmenting pores as small as a few micrometers and analyzing their connectivity during tensile loading. Recent SR-CT implementations, including model-based iterative reconstruction, have improved porosity assessment in polymer-matrix composites, correlating void morphology with failure mechanisms. Recent advances from 2023 to 2025 in deep learning-based super-resolution (DL-SR) for electron microscopy have enabled the reconstruction of atomic arrangements in materials, surpassing conventional limits. DL-SR techniques, such as convolutional neural networks applied to scanning transmission electron microscopy (STEM) images, have denoised and upscaled data to visualize individual atomic columns in alloys and nanomaterials. For instance, neural network-enhanced atomic electron tomography has resolved 3D atomic positions in complex crystals with sub-angstrom precision, aiding defect analysis in semiconductors. These developments, including frequency-domain U-Net models, have extended to real-time processing, facilitating in-situ studies of material dynamics. The impacts of these super-resolution methods are profound in battery material optimization and semiconductor quality control. In lithium-ion batteries, super-resolving scanning electron microscopy (SEM) images of electrodes has revealed nanoscale degradation, such as particle cracking and solid-electrolyte interphase growth, guiding improvements in cathode design for longer cycle life. Machine learning SR applied to CT scans of all-solid-state batteries has quantified lithium distribution and porosity evolution, optimizing electrolyte interfaces to enhance energy density. 
In semiconductors, DL-SR deconvolution of SEM and X-ray images has improved defect detection in wafer inspection, achieving sub-10 nm resolution for optical critical dimension metrology and solder joint analysis, thereby reducing manufacturing yield losses.

Remote Sensing and Astronomy

In remote sensing, super-resolution techniques enhance the spatial resolution of satellite imagery to support applications such as urban mapping and environmental monitoring. Multi-frame fusion methods, which align and combine multiple low-resolution images to reconstruct higher-resolution outputs, have been particularly effective for platforms like Landsat. For instance, early algorithms fused complementary frames from Landsat to improve resolution, enabling finer details in urban structures like buildings and roads. More recent approaches achieve up to 4× super-resolution on Landsat datasets, yielding peak signal-to-noise ratios (PSNR) exceeding 27 dB and sharper edge preservation for object recognition in urban areas. These techniques leverage residual neural networks to extract multi-scale features, outperforming traditional interpolation in both objective metrics and visual quality. In astronomy, super-resolution methods address diffraction limits and instrumental blurring to reveal details of celestial objects. Lucky imaging, a multi-frame selection technique that discards distorted frames affected by atmospheric seeing, has been applied to high-contrast observations of exoplanet systems, achieving resolutions close to the diffraction limit in the i'-band for over 100 southern hemisphere targets. This approach enhances the detection of faint companions and circumstellar features, with sub-arcsecond effective angular resolutions for surveys of transiting-exoplanet hosts. Deconvolution algorithms further refine images from telescopes like the James Webb Space Telescope (JWST), reducing the full width at half maximum (FWHM) of the point-spread function by factors of 1.6 to 2.4 across mid-infrared filters, thereby resolving extended structures in active galactic nuclei such as nuclear emission regions spanning hundreds of parsecs. 
Similar post-processing has been used on Hubble data to sharpen views of planetary atmospheres, though JWST's native capabilities amplify these gains. Deep learning-based upscaling has emerged as a computational focus for hyperspectral remote sensing, where spectral fidelity is crucial alongside spatial enhancement. These methods employ convolutional neural networks, often with 3D kernels to preserve spectral correlations, achieving superior performance over 2D alternatives on datasets like CAVE and Pavia Centre. Progressive upsampling frameworks, such as those integrating residual and attention mechanisms, enable single-image super-resolution for hyperspectral cubes, supporting tasks like crop monitoring with minimal computational overhead. A 2024 application of AI-driven super-resolution generated daily global CO2 datasets from coarse satellite observations, facilitating point-source emissions tracking for climate monitoring without new hardware. For example, diffusion-prior models have super-resolved 30 m Harmonized Landsat-Sentinel imagery to 10 m, improving normalized difference vegetation index (NDVI) accuracy and crop classification F1-scores to 0.86. A key challenge in both remote sensing and astronomy is mitigating atmospheric turbulence, which introduces blurring and aliasing. Adaptive optics integration, combined with super-resolution fusion of interpolated frames, counters these effects by modeling turbulence in the optical transfer function and applying Wiener filtering, yielding structural similarity index (SSIM) improvements under moderate turbulence conditions. This hybrid approach is robust for infrared and long-range imaging, tuning fusion weights to balance aliasing reduction and turbulence suppression.
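The NDVI mentioned above is a simple band ratio, which is why sharper super-resolved bands translate directly into sharper vegetation maps; a minimal sketch follows (the `eps` guard against division by zero is an implementation convenience, not part of the index definition):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index, (NIR - Red) / (NIR + Red),
    computed per pixel from near-infrared and red reflectance bands."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

Values near +1 indicate dense vegetation, near 0 bare soil, and negative values water or clouds; super-resolving the input bands before computing the ratio preserves this interpretation while sharpening field boundaries.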

Challenges and Limitations

Artifacts Including Aliasing

In super-resolution imaging, aliasing arises when spatial frequencies above the Nyquist sampling limit fold back into lower frequencies during reconstruction, producing spurious patterns and artificial distortions. This artifact is particularly prevalent in techniques like structured illumination microscopy (SIM), where inadequate sampling of the modulated illumination patterns leads to undersampling of extended frequency content. In localization-based methods such as STORM or PALM, aliasing-like distortions can also arise from localization drift, where gradual sample or stage movement over extended acquisitions misaligns frames, blurring fine details and introducing spurious structures. Beyond aliasing, other common artifacts include photobleaching-induced bias and ringing in deconvolution-based approaches. Photobleaching causes progressive loss of fluorophore emission during multi-frame acquisitions, creating intensity gradients that bias super-resolved reconstructions toward brighter initial frames and distort quantitative measurements. Ringing artifacts appear as oscillatory halos or ripples around edges in deconvolved images, stemming from the amplification of noise and discontinuities during inverse Fourier filtering, especially when the assumed point spread function does not match the sample's actual aberrations. Mitigation strategies for aliasing involve applying anti-aliasing filters, such as optical low-pass filters before sampling, to suppress frequencies above the Nyquist limit and prevent folding. Validation of artifact-free reconstructions often employs Fourier analysis techniques, like Fourier ring correlation (FRC), to quantify resolution and identify regions of aliasing by assessing frequency content consistency. 
For instance, in SIM, moiré patterns—interference fringes from overlapping periodic structures—can produce grid-like aliasing if scattering in thick samples (>3 µm) distorts illumination, as demonstrated in live-cell imaging of organelles. Studies from the 2010s, such as those developing NanoJ-SQUIRREL for quantitative artifact mapping, highlighted these issues by comparing super-resolved and diffraction-limited images, revealing artifacts including aliasing in SIM datasets without optimized sampling.
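The frequency folding at the heart of aliasing can be illustrated with a one-dimensional tone; this toy calculation shows where an undersampled spatial frequency reappears after reconstruction:

```python
def apparent_frequency(f_signal, f_sample):
    """Frequency at which a pure tone of frequency `f_signal` appears
    when sampled at rate `f_sample`: content above the Nyquist rate
    (f_sample / 2) folds back into the baseband, the hallmark of
    aliasing."""
    f = f_signal % f_sample          # sampling replicates the spectrum
    return f if f <= f_sample / 2 else f_sample - f
```

For example, a tone at 70 cycles per unit sampled at 100 samples per unit is indistinguishable from one at 30 cycles, which is exactly how unresolved high-frequency structure masquerades as spurious low-frequency (moiré-like) patterns in undersampled reconstructions.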

Practical Constraints and Trade-offs

Super-resolution imaging techniques, while revolutionary, impose significant practical constraints due to their reliance on intense illumination or extended data acquisition, particularly in live-cell applications. In stimulated emission depletion (STED) microscopy, high laser powers are required to deplete fluorescence outside the focal spot, leading to phototoxicity that can damage cellular structures and induce stress responses in live samples. This is exacerbated by photobleaching, where prolonged exposure irreversibly destroys fluorophores, limiting imaging duration and fidelity in biological specimens. For instance, STED's illumination intensities often exceed those of conventional confocal microscopy by orders of magnitude, necessitating careful optimization to preserve sample viability. Acquisition times represent another key limitation, varying widely across methods and impacting the feasibility of dynamic studies. Localization-based techniques, such as single-molecule localization microscopy (SMLM), require accumulating thousands of frames to localize individual emitters, often taking minutes to hours per image, which restricts their use in fast-moving processes. In contrast, structured illumination microscopy (SIM) achieves super-resolution in seconds by reconstructing from a limited set of patterned illuminations, offering faster throughput but at the cost of lower resolution gains. These temporal demands can introduce motion artifacts in live imaging, further complicating data interpretation. The high cost and technical complexity of super-resolution systems pose barriers to widespread adoption, particularly for resource-limited labs. STED setups demand custom optics, pulsed lasers, and precise alignment, with commercial systems typically costing around $500,000 or more due to these specialized components. This complexity extends to maintenance and operation, requiring expertise that contrasts with the relative simplicity of widefield microscopy. 
Inherent trade-offs further define the practical boundaries of these techniques, balancing resolution against field-of-view (FOV), speed, and signal-to-noise ratio (SNR). Achieving sub-100 nm resolution often narrows the FOV or slows acquisition, as seen in point-scanning methods like STED, where pixel dwell times must increase to maintain SNR under depleting beams. High SNR is crucial for accurate localization or reconstruction but demands brighter illumination or longer exposures, amplifying phototoxicity risks. Researchers must thus select parameters that prioritize the experiment's needs, such as favoring speed over ultimate resolution in time-lapse imaging. Emerging solutions aim to mitigate these constraints without sacrificing performance. Reversible saturable optical fluorescence transitions (RESOLFT) microscopy employs low-intensity light to switch fluorophores between states, reducing the required light intensities by several orders of magnitude compared to STED and minimizing phototoxicity for extended live imaging. Recent deep learning (DL) accelerations have also shown promise, enabling reconstruction from undersampled data to cut acquisition times by factors of 10 or more; for example, DL-enhanced STED reduces pixel dwell times by one to two orders of magnitude while preserving resolution, a trend gaining traction in 2024-2025 implementations. These advancements facilitate broader application in sensitive biological contexts.

Recent Advances and Future Directions

Integration of Artificial Intelligence

Artificial intelligence has significantly enhanced super-resolution imaging by integrating deep learning techniques for image reconstruction, particularly in microscopy applications where content-aware processing allows for faster and more accurate localization of fluorophores. For instance, ANNA-PALM employs artificial neural networks trained on paired sparse and dense localization data to reconstruct high-resolution images from rapidly acquired, low-density frames in photoactivated localization microscopy (PALM), achieving a 3- to 10-fold improvement in temporal resolution without sacrificing spatial precision or localization accuracy. This approach enables content-aware super-resolution by predicting missing localizations based on learned patterns, reducing acquisition times and phototoxicity in live-cell imaging. Recent adaptations, such as in DNA-PAINT, have extended similar neural network strategies to accelerate super-resolution by generating complete structural images from incomplete input frames. Generative models, particularly diffusion-based methods, have emerged as powerful tools for super-resolution in noisy environments, outperforming traditional denoising techniques by iteratively refining low-resolution inputs into high-fidelity outputs. These models excel in handling experimental noise inherent to microscopy, such as background fluorescence or irregular sampling, by learning probabilistic distributions of high-resolution features, thus enabling robust reconstruction even from suboptimal datasets. For example, in 2024, diffusion models were applied to denoise fluorescence microscopy images, preserving fine structural details like cellular ultrastructures. Another application in transmission electron microscopy used diffusion models to augment ultrastructural details, achieving enhanced resolution for nanoscale cellular components. 
AI-driven optimization further streamlines super-resolution workflows by automating parameter tuning in techniques like stimulated emission depletion (STED) microscopy. The pySTED framework, introduced in 2024, leverages machine learning simulations to optimize STED parameters—such as depletion laser power and pinhole size—for balancing resolution, signal-to-noise ratio, and photobleaching, allowing real-time adjustments without extensive manual calibration. End-to-end optical-deep learning pipelines, as explored in 2025 studies, integrate hardware control with neural networks to create calibration-free super-resolution systems; for example, deep learning approaches for single-frame fluorescence microscopy enable rapid super-resolution by jointly optimizing optical acquisition and computational reconstruction. These advancements facilitate seamless transitions from raw data to super-resolved outputs, enhancing efficiency in complex setups. The integration of AI has democratized super-resolution imaging through accessible open-source software tools, such as CARE and diffusion-based frameworks, which lower barriers for non-specialists and enable widespread adoption in research labs. Simulations powered by these AI methods have demonstrated theoretical resolutions down to 30 nm in localization microscopy while highlighting the potential for hybrid optical-AI systems. However, ethical considerations are paramount; rigorous validation against ground-truth data is essential to prevent over-interpretation of AI-generated features, which could lead to misattribution of biological structures, emphasizing the need for transparent model explainability and independent verification in publications.

Emerging Hybrid and Novel Techniques

Hybrid super-resolution techniques combine physical sample expansion with optical structured illumination to achieve isotropic resolutions approaching 20 nm. Expansion microscopy (ExM) physically enlarges biological specimens by embedding them in a swellable hydrogel, effectively scaling down the diffraction limit when imaged with conventional microscopes. When paired with structured illumination microscopy (SIM), this hybrid approach enhances lateral and axial resolutions to approximately 25-30 nm, enabling detailed visualization of nanoscale structures in fixed tissues while preserving isotropy. A notable advancement in hybrid localization and depletion methods is MINFLUX nanoscopy, introduced in 2016, which integrates single-molecule localization precision with stimulated emission depletion (STED)-like beam shaping to attain ~1 nm resolution in three dimensions. This technique minimizes photon usage for fluorophore localization, allowing tracking of molecular dynamics, such as kinesin-1 motor protein stepping demonstrated in 2023, with sub-nanometer accuracy in live cells. By adaptively positioning a doughnut-shaped excitation beam around the fluorophore, MINFLUX reduces background noise and achieves molecular-scale insights without excessive photobleaching. Novel techniques in super-resolution photoacoustic imaging have progressed significantly in 2024, enabling deeper tissue penetration with resolutions surpassing traditional limits through acoustic wave manipulation. Advances include fluctuation-based localization of absorbers and wavefront engineering, achieving sub-micron lateral resolution at depths up to several millimeters, which is particularly useful for vascular and tumor imaging in vivo. These methods leverage the hybrid nature of optical excitation and ultrasonic detection to overcome scattering challenges in biological media. 
Adaptive optics (AO) integration with super-resolution modalities has facilitated high-fidelity in vivo imaging by correcting tissue-induced aberrations in real time. In two-photon structured illumination microscopy (2P-SIM) enhanced by AO, resolutions of ~150 nm laterally and ~735 nm axially have been demonstrated in deep brain structures of living mice, minimizing signal loss and distortion for dynamic neural imaging. This approach uses deformable mirrors to pre-compensate wavefront errors, enabling clearer observation of subcellular processes in scattering environments. Quantum-enhanced super-resolution leverages entangled photon pairs to surpass classical shot-noise limits, providing sub-shot-noise sensitivity and improved spatial resolution. Spatially entangled photons enable pixel-level super-resolution by measuring joint probability distributions, effectively doubling resolution in ghost imaging setups while reducing noise below the standard quantum limit. This non-classical correlation enhances contrast in low-light biological samples, with applications in label-free microscopy. As of 2025, trends in super-resolution include the development of portable endoscopes incorporating advanced optical designs for near-SR performance in clinical settings. Similarly, super-resolution terahertz (THz) imaging has advanced through virtual structured illumination, enabling far-field resolutions beyond the diffraction limit (~λ/2) for non-ionizing inspection of materials and concealed objects. These portable THz systems use computational reconstruction to achieve sub-wavelength detail at frequencies around 0.3-1 THz. These emerging hybrids and novel methods hold potential for atomic-scale imaging in live systems, as exemplified by MINFLUX's extension to Ångström-level localization in dynamic environments, paving the way for real-time observation of protein conformational changes and molecular interactions without fixation artifacts.
