Ptychography
Ptychography is a computational imaging technique that reconstructs the complex-valued transmission function (amplitude and phase) of a specimen from a series of far-field diffraction intensity patterns. The patterns are obtained by scanning a localized coherent illumination probe across overlapping regions of the sample and are inverted with iterative phase retrieval algorithms.[1][2]
The technique addresses the phase problem in coherent diffraction imaging by exploiting redundancy from the overlapping probe positions, enabling wavelength-limited resolution without relying on physical lenses, which are prone to aberrations.[1][2] Originating from proposals in electron crystallography by Walter Hoppe in 1969, ptychography was theoretically formalized for electron microscopy by Rodenburg and Bates in 1992 and for X-rays by Chapman in 1996. Practical implementations emerged in the 2000s through iterative algorithms such as the ptychographic iterative engine (PIE), developed by Faulkner and Rodenburg in 2004, and the difference map, adapted to ptychography by Thibault et al. in 2008.[1][2] In optics, key advancements include the first experimental demonstrations in 2007 and the introduction of Fourier ptychography in 2013, which extends the method to wide-field imaging by angularly varying the illumination.[2][3]
Ptychography's advantages include superior dose efficiency for radiation-sensitive samples, quantitative phase contrast for transparent specimens like biological cells, and applicability across wavelengths—from visible light to X-rays and electrons—allowing integration with synchrotron sources, transmission electron microscopes, and standard optical setups.[1][2] It overcomes traditional microscopy trade-offs between resolution and field-of-view, achieving sub-micron to atomic-scale details over millimeter-scale areas, and requires no reference beam or staining, making it ideal for live imaging.[2]
Notable applications span materials science, where it enables 3D tomography of nanostructures at synchrotron facilities; electron microscopy for atomic-resolution imaging of defects in crystals; and biomedicine, including quantitative phase imaging of cells for drug screening, pathology, blood analysis, and high-throughput detection of rare circulating tumor cells.[1][2] Recent developments, such as single-shot variants and integration with machine learning for faster reconstructions, continue to expand its utility in real-time and large-scale imaging tasks.[2]
Fundamentals
Definition and Basic Principles
Ptychography is a lensless computational microscopy technique that reconstructs the complex-valued transmission function—encompassing both amplitude and phase—of a specimen from a series of overlapping diffraction intensity patterns.[2] It operates without physical lenses by leveraging scanned coherent illumination to probe the sample, enabling high-resolution imaging beyond traditional optical limits.[4] This method addresses the limitations of direct imaging by computationally inverting far-field diffraction data collected across multiple overlapping regions.[1]
The basic principles of ptychography involve illuminating the sample with a localized coherent probe, such as a focused beam, and systematically scanning it across the specimen in a raster or grid pattern. At each probe position, the transmitted or scattered wave interferes in the far field, producing a diffraction pattern that is recorded by a detector, forming a dataset known as a ptychogram. The key to successful reconstruction lies in the spatial overlap between adjacent probe positions, which introduces redundancy in the measured intensities; this shared information across patterns allows iterative algorithms to recover the lost phase information and resolve ambiguities inherent in intensity-only measurements.[2] Significant spatial overlap between adjacent probe positions, typically more than 50%, is employed to ensure reconstruction stability and uniqueness, though lower overlaps as small as 30% can suffice with advanced algorithms and sufficient data quality.[4][5]
Central to the technique is the probe illumination function, which describes the complex wavefront of the incident beam—often modeled as a Gaussian or Airy disk—and interacts multiplicatively with the sample's transmission function to form the exit wave. In a standard experimental setup, a coherent source generates the probe, which is directed onto the sample; the sample is translated laterally relative to the probe (or vice versa) using a scanning stage, while a distant detector captures the resulting speckle-like diffraction fringes without imaging optics. This configuration exploits the Fourier relationship between the exit wave and the recorded intensities.[4]
Ptychography has been implemented across diverse modalities, including X-ray for nondestructive imaging of extended samples at synchrotron facilities, electron for atomic-scale resolution in transmission electron microscopy, and optical using visible light for quantitative phase imaging in biomedical applications.[2]
The Phase Problem and Retrieval
In diffraction-based imaging techniques, the phase problem arises because detectors measure only the intensity of the scattered wave, which corresponds to the squared magnitude of the Fourier transform of the object's exit wave, while the phase information—essential for reconstructing the spatial distribution via inverse Fourier transform—is lost.[6] This ambiguity prevents direct inversion to recover the complex-valued image of the sample, as multiple phases can yield the same intensity pattern.[7]
Ptychography overcomes this limitation by employing a scanning illumination probe that overlaps across adjacent regions of the sample, generating a set of diffraction patterns with built-in redundancy. These overlapping measurements impose consistency constraints on the shared portions of the sample, allowing iterative algorithms to enforce agreement between the modeled and measured intensities across all patterns, thereby retrieving the lost phase information.[6] The redundancy from overlaps, typically more than 50%, ensures uniqueness of the reconstruction under the thin-object (multiplicative) approximation, provided that scattering is predominantly forward and the probe is well-localized.[5]
Under the thin-object (multiplicative) approximation, the exit wave function at the k-th probe position is modeled as the product \psi_k(\mathbf{r}) = P(\mathbf{r} - \mathbf{R}_k) \cdot O(\mathbf{r}), where P(\mathbf{r}) is the complex probe illumination function, O(\mathbf{r}) is the complex object transmission function, and \mathbf{R}_k denotes the lateral shift of the probe for the k-th measurement.[6] The measured intensity for each position is then the modulus squared of the Fourier transform:
I_k(\mathbf{u}) = \left| \mathcal{F} \left\{ \psi_k(\mathbf{r}) \right\} (\mathbf{u}) \right|^2,
where \mathcal{F} denotes the Fourier transform and \mathbf{u} is the spatial frequency coordinate in the far-field diffraction plane.[7] This formulation assumes a thin sample and paraxial propagation, capturing the essential multiplicative interaction between probe and object.
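This forward model can be made concrete with a minimal NumPy sketch that forms the exit wave as the product of a shifted probe and an object patch and records the far-field intensity. The grid sizes, random object, Gaussian probe, and scan position are illustrative assumptions, not values from any real experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 32                       # object and probe grid sizes (pixels)

# Complex object: amplitude near unity, modest random phase
obj = (0.8 + 0.2 * rng.random((N, N))) * np.exp(0.5j * rng.random((N, N)))

# Localized Gaussian probe P(r) on an M x M window
y, x = np.mgrid[:M, :M] - M // 2
probe = np.exp(-(x**2 + y**2) / (2 * (M / 6) ** 2)).astype(complex)

def diffraction_intensity(obj, probe, top, left):
    """I_k(u) = |F{ P(r - R_k) O(r) }|^2 for a probe window at (top, left)."""
    exit_wave = probe * obj[top:top + probe.shape[0], left:left + probe.shape[1]]
    return np.abs(np.fft.fft2(exit_wave)) ** 2

I = diffraction_intensity(obj, probe, 10, 10)   # one frame of the ptychogram
```

By Parseval's theorem the recorded intensity sums to the number of pixels times the exit-wave power, a useful sanity check on any forward-model implementation.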
Phase retrieval in ptychography proceeds iteratively by projecting the current estimate of the exit wave onto two constraint sets: in Fourier space, the amplitude is replaced by the square root of the measured intensity I_k(\mathbf{u}) while preserving the estimated phase; in real space, the object's support is enforced by the known overlap regions, ensuring the reconstructed object remains consistent with the probe positions across all illuminations. These projections exploit the diversity introduced by the shifted probes to resolve ambiguities, converging to a solution that simultaneously refines both the object and probe functions.[6]
Experimental Configurations
Far-Field Focused-Probe Ptychography
Far-field focused-probe ptychography employs a coherent illumination source, such as an X-ray or electron beam, that is focused onto the sample using a lens or Fresnel zone plate to form a localized probe. The probe is raster-scanned across the specimen using piezoelectric translation stages, with the scan typically following a rectangular or hexagonal grid pattern. At each probe position, the transmitted or scattered wavefront diffracts, and the resulting intensity pattern is captured in the far field—corresponding to the Fraunhofer diffraction regime—by a downstream detector, such as a charge-coupled device (CCD) or pixel array detector. This setup ensures that the diffraction patterns are recorded at a sufficient distance from the sample to approximate plane-wave propagation, enabling the collection of a ptychogram comprising multiple overlapping diffraction measurements.
The probe in this configuration generally features a Gaussian or Airy disk intensity profile, determined by the focusing optics and aperture, with the beam's finite size providing spatial confinement for localized illumination. Scanning overlaps between adjacent probe positions are crucial for data redundancy and are typically set to 60-80% to facilitate robust phase retrieval by linking information across measurements. Experimental parameters often include probe diameters of 10-100 nm, tailored to the wavelength and application—for instance, smaller probes for higher resolution in hard X-ray regimes—while detector pixel sizes are chosen to satisfy Nyquist sampling of the diffraction fringes, usually on the order of 10-50 μm per pixel to capture speckle details without aliasing.
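The detector-sampling criterion quoted above can be checked with a short worked example. For a probe of diameter a, the far-field speckle period at distance z is about \lambda z / a, so Nyquist sampling requires a pixel pitch p \le \lambda z / (2a); the beam energy, distance, and probe size below are illustrative values.

```python
# Worked example of the Nyquist sampling requirement for far-field ptychography.
# All numbers are illustrative, not taken from a specific instrument.
wavelength = 1.0e-10   # ~12.4 keV X-rays, metres
z = 2.0                # sample-to-detector distance, metres
a = 100e-9             # probe diameter, metres

p_max = wavelength * z / (2 * a)   # maximum admissible pixel pitch, metres
# Here p_max is 1 mm, so a typical 50 um detector pixel easily satisfies
# the sampling bound and in fact oversamples the speckle.
```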
This far-field approach offers significant advantages for high-resolution imaging, achieving spatial resolutions limited primarily by the probe size and wavelength rather than lens aberrations or detector limitations, which is particularly beneficial for X-ray and electron modalities where traditional optics suffer from chromatic dispersion or low numerical apertures. It supports quantitative phase and amplitude reconstruction of extended samples with tolerance to partial coherence and noise, enabling sub-10 nm resolutions in practice. For example, at synchrotron facilities, far-field focused-probe ptychography is routinely used for 2D imaging of nanostructured materials, such as integrated circuits, yielding 20 nm resolution with low radiation dose.[8]
Near-Field Ptychography
Near-field ptychography operates in the Fresnel diffraction regime, where intensity patterns are recorded at small distances from the sample, typically on the order of millimeters to centimeters for X-rays or micrometers for electrons, without the need for focusing lenses. In this inline holography-based setup, a coherent probe illuminates an extended area of the sample, and the resulting interference between the scattered and unscattered waves is captured directly on a detector positioned close to the specimen.[9][10] The sample is scanned laterally relative to the illumination, with overlapping probe positions providing redundancy for phase retrieval, enabling reconstruction of the complex-valued exit wave.[11]
This configuration is particularly sensitive to phase shifts induced by the sample due to the quadratic phase factors in the near-field propagation, which encode information about both amplitude and phase through self-interference.[12] Unlike far-field methods, near-field ptychography accommodates thicker or extended objects by capturing propagation effects that reveal depth information, making it suitable for 3D reconstructions via tomographic approaches or multi-distance measurements.[11] For instance, it has been applied to optically thick specimens, such as a 46 μm diameter uranium sphere, achieving quantitative phase imaging with resolutions approaching 1 μm.[11]
The forward model in near-field ptychography relies on the Fresnel propagation operator to relate the sample's exit wave to the measured intensities. The propagated field at distance z from the object can be expressed in the frequency domain as:
\psi'(u,v) = \mathcal{F}^{-1} \left\{ H(u,v) \mathcal{F} \{\psi(r)\} \right\}
where \mathcal{F} and \mathcal{F}^{-1} denote the Fourier transform and its inverse, respectively, \psi(r) is the exit wave in the spatial domain, and H(u,v) = \exp\left[i \pi \lambda z (u^2 + v^2)\right] is the transfer function with wavelength \lambda.[12][9] This kernel-based propagation facilitates efficient computational modeling of the diffraction process during iterative reconstruction.
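The transfer-function propagation above can be sketched in a few lines of NumPy. The grid size, pixel pitch, wavelength, and distance are illustrative assumptions, and the sign convention follows the H(u,v) given in the text.

```python
import numpy as np

def fresnel_propagate(psi, wavelength, z, pixel_size):
    """Propagate a complex field by distance z using the Fresnel transfer
    function H(u, v) = exp[i * pi * lambda * z * (u^2 + v^2)]."""
    n = psi.shape[0]
    u = np.fft.fftfreq(n, d=pixel_size)          # spatial frequencies, 1/m
    U, V = np.meshgrid(u, u, indexing="ij")
    H = np.exp(1j * np.pi * wavelength * z * (U**2 + V**2))
    return np.fft.ifft2(H * np.fft.fft2(psi))

# A uniform plane wave has only a zero-frequency component, where H = 1,
# so it propagates unchanged; |H| = 1 also means total flux is conserved.
psi0 = np.ones((128, 128), dtype=complex)
psi1 = fresnel_propagate(psi0, wavelength=1e-10, z=0.01, pixel_size=1e-7)
```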
Recent advances have extended near-field ptychography to electron microscopy, particularly through full-field structured illumination schemes that replace focused probes with patterned electron beams to enhance information encoding and reduce data acquisition times. In 2024, a configuration using a diffractive optical element to generate structured illumination demonstrated high-resolution phase imaging of biological samples with improved signal-to-noise ratios.[13] These developments incorporate propagation kernels to model electron wave diffraction accurately, enabling applications in low-dose imaging where radiation sensitivity is a concern, such as in cryo-electron microscopy of soft matter.[14][13]
Fourier Ptychography
Fourier ptychography (FP) is a computational imaging technique that extends the resolution of conventional bright-field microscopes by using structured illumination with angled plane waves, typically generated by an LED array positioned beneath the sample, to capture a series of low-resolution intensity images. These images are then processed to reconstruct a high-resolution, wide-field complex-valued image equivalent to that obtained with a high numerical aperture (NA) objective, without requiring mechanical scanning of the sample or probe. The setup replaces the standard light source in a microscope with a programmable illuminator, such as a densely packed LED array, where each LED emits a plane wave at a specific angle to illuminate the sample, and a low-NA objective lens collects the resulting diffraction patterns or defocused images. This approach achieves synthetic NA values up to 0.5 or higher, enabling sub-micron resolution over fields of view exceeding 100 times that of traditional high-NA systems.
The underlying principle of FP relies on filling the Fourier space of the sample's exit wave through overlapping spectral "windows" provided by each illumination angle, allowing iterative phase retrieval algorithms to stitch these components into a coherent high-resolution reconstruction. Under plane-wave illumination at angle \theta_\ell, the sample's exit wave O(\mathbf{r}) has a Fourier spectrum \tilde{O}(\mathbf{u}) that is shifted by the illumination and filtered by the objective pupil function P(\mathbf{u}), where \mathbf{u} represents spatial frequencies. The intensity image for the \ell-th illumination is given by
I_\ell(\mathbf{r}) = \left| \mathcal{F}^{-1} \left\{ P(\mathbf{u}) \cdot \tilde{O}(\mathbf{u} - \mathbf{u}_\ell) \right\} \right|^2,
with \mathbf{u}_\ell = (\sin\theta_\ell / \lambda)\, \hat{\mathbf{k}}_\perp the transverse Fourier shift corresponding to the illumination wavevector, \mathcal{F}^{-1} the inverse Fourier transform, and \lambda the wavelength. The overlaps between adjacent spectral windows, typically 60-70% for optimal reconstruction, enable robust recovery of the full object spectrum via phase retrieval methods that enforce consistency across measurements. This process circumvents the phase problem by leveraging redundancy in the Fourier domain, yielding quantitative phase and amplitude information.[15]
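A toy NumPy version of this forward model shifts the object spectrum for each illumination tilt before the low-NA pupil truncates it. The grid size, pupil radius, and pixel-valued shifts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 128
obj = rng.random((N, N)) * np.exp(1j * rng.random((N, N)))   # complex object
spectrum = np.fft.fftshift(np.fft.fft2(obj))                 # object spectrum

y, x = np.mgrid[:N, :N] - N // 2
pupil = ((x**2 + y**2) <= (N // 8) ** 2).astype(float)       # low-NA pupil P(u)

def low_res_image(spectrum, pupil, shift):
    """I_l(r) = |F^{-1}{ P(u) * O(u - u_l) }|^2 for a pixel-valued shift u_l."""
    shifted = np.roll(spectrum, shift, axis=(0, 1))          # spectrum shift
    return np.abs(np.fft.ifft2(np.fft.ifftshift(pupil * shifted))) ** 2

I_centre = low_res_image(spectrum, pupil, (0, 0))    # on-axis illumination
I_tilt = low_res_image(spectrum, pupil, (10, 0))     # tilted illumination
```

Each tilt passes a different spectral window through the pupil, so the two images differ; iterating over many tilts and stitching the windows back together is precisely the FP reconstruction task.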
Recent developments in FP include neural pupil engineering for whole-field high-resolution imaging, as demonstrated in 2025 with the NePE-FPM method, which dynamically optimizes the pupil function using implicit neural representations and multi-resolution hash encoding to achieve continuous shifts and improved fidelity in off-axis regions, reducing artifacts in large-scale reconstructions. Applications in digital pathology have advanced with FP systems integrated into standard microscopes, enabling gigapixel-scale quantitative phase imaging of tissue samples for automated analysis, with resolutions approaching 0.5 μm over fields of view up to 4 mm². Unlike scanning-based ptychography variants, FP acquires its images without mechanical scanning, in rapid sequence under programmable illumination, making it particularly suitable for live-cell imaging and dynamic processes where temporal resolution is critical.[16][17]
Bragg and Reflection Ptychography
Bragg ptychography extends ptychographic imaging to crystalline samples by leveraging Bragg diffraction in reflection geometry, enabling the reconstruction of three-dimensional strain fields and lattice distortions with nanoscale resolution. This approach combines the overlap constraints of conventional ptychography with the sensitivity of Bragg coherent diffraction imaging to atomic-scale displacements in periodic structures.[18]
In the experimental setup, a focused coherent X-ray probe is raster-scanned across the sample surface in overlapping positions, with the sample oriented at the Bragg angle relative to the incident beam. Diffraction patterns are recorded on a far-field detector as the sample is angularly stepped through the rocking curve, typically in increments of 0.005° or finer, yielding a 3D dataset for each probe position.[18] This configuration allows for quantitative mapping of lattice strains down to 10^{-4} levels over fields of view up to several micrometers, with resolutions on the order of 40 nm in all dimensions.[18] The reflection mode is particularly suited for surface-sensitive imaging of opaque or thick crystalline materials, circumventing the limitations of transmission geometries.
The underlying principles rely on the phase retrieval of diffraction intensities to reconstruct the complex electron density and displacement field \mathbf{u}(\mathbf{r}) of the crystal lattice, where the retrieved phase \phi_{hkl}(\mathbf{r}) = \mathbf{Q}_{hkl} \cdot \mathbf{u}(\mathbf{r}) directly encodes distortions along the scattering vector \mathbf{Q}_{hkl}.[18] The scattering vector \mathbf{q} must satisfy the Bragg condition |\mathbf{q}| = \frac{4\pi \sin \theta}{\lambda}, with \theta the Bragg angle and \lambda the X-ray wavelength, ensuring selective diffraction from specific lattice planes.
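A short worked example of the Bragg condition and the phase-displacement relation \phi = \mathbf{Q} \cdot \mathbf{u}; the wavelength, lattice spacing, and displacement below are illustrative values.

```python
import numpy as np

# Illustrative numbers, not measured data.
wavelength = 1.24e-10   # ~10 keV X-rays, metres
d = 2.0e-10             # lattice-plane spacing, metres

# Bragg's law lambda = 2 d sin(theta) gives the Bragg angle theta
theta = np.arcsin(wavelength / (2 * d))

# Scattering-vector magnitude |q| = 4 pi sin(theta) / lambda, which for a
# first-order reflection reduces to 2 pi / d
q = 4 * np.pi * np.sin(theta) / wavelength

# A 1 pm lattice displacement along Q appears as a retrieved phase q * u
phi = q * 1.0e-12   # radians
```

The computed q equals 2\pi/d exactly, confirming that the Bragg condition selects the reciprocal-lattice spacing; the picometre-scale displacement maps to a phase of a few hundredths of a radian, which is why Bragg ptychography is so sensitive to strain.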
Bragg ptychography has been widely applied in synchrotron X-ray imaging of nanomaterials, providing insights into strain evolution in structures like He-implanted tungsten foils.[18] For example, Bragg ptychography has revealed nanoscale defects and strains in semiconductor heterostructures, such as InGaN/GaN nanowires and silicon-on-insulator devices, aiding the study of radiation damage and epitaxial growth.[19][20]
Multislice and Vectorial Variants
Multislice ptychography extends conventional ptychographic imaging to three-dimensional reconstruction by modeling the probe wave's propagation through a series of thin sample slices, thereby incorporating multiple scattering effects prevalent in thicker specimens. This approach divides the specimen into discrete parallel planes, each characterized by a complex transmission function t_n(\mathbf{r}), and iteratively simulates wave evolution across these layers during the forward modeling process. The wave field \psi_n at slice n is multiplied by that slice's transmission function and then propagated to the next slice:
\psi_{n+1}(\mathbf{r}) = \mathcal{F}^{-1} \left\{ H_{\Delta z}(\mathbf{u}) \cdot \mathcal{F} \left\{ t_n(\mathbf{r}) \, \psi_n(\mathbf{r}) \right\} \right\},
where H_{\Delta z}(\mathbf{u}) = \exp\left[ i \pi \lambda \Delta z \, |\mathbf{u}|^2 \right] is the free-space (Fresnel) propagation transfer function for the inter-slice spacing \Delta z, and \mathcal{F} and \mathcal{F}^{-1} denote the Fourier transform and its inverse, respectively. This multislice framework enables simultaneous recovery of multiple axial planes with substantially lower data demands than traditional ptychographic tomography, making it suitable for strongly scattering samples beyond the weak-phase approximation.[21][22]
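The multislice recursion can be sketched directly in NumPy: transmit through each slice in real space, then propagate to the next. The grid size, wavelength, slice spacing, and weak pure-phase slices are illustrative assumptions.

```python
import numpy as np

def propagate(psi, wavelength, dz, pixel_size):
    """Free-space Fresnel propagation over one inter-slice spacing dz."""
    n = psi.shape[0]
    u = np.fft.fftfreq(n, d=pixel_size)
    U, V = np.meshgrid(u, u, indexing="ij")
    H = np.exp(1j * np.pi * wavelength * dz * (U**2 + V**2))
    return np.fft.ifft2(H * np.fft.fft2(psi))

def multislice(psi, slices, wavelength, dz, pixel_size):
    """Multislice recursion: multiply by each slice transmission t_n in real
    space, then propagate the wave to the following slice."""
    for t in slices:
        psi = propagate(psi * t, wavelength, dz, pixel_size)
    return psi

rng = np.random.default_rng(2)
n = 64
probe = np.ones((n, n), dtype=complex)
# Three weak pure-phase slices (|t_n| = 1 everywhere), purely illustrative
slices = [np.exp(1j * 0.05 * rng.random((n, n))) for _ in range(3)]
exit_wave = multislice(probe, slices, wavelength=2e-12, dz=2e-9, pixel_size=1e-10)
```

Because the slices are pure phase and |H| = 1, the exit wave carries the same total intensity as the incident probe, which makes a convenient sanity check on the propagation kernel.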
In electron microscopy applications, multislice ptychography has achieved resolutions limited by thermal lattice vibrations, even in multilayer materials where multiple electron scattering alters the probe shape and causes dechannelling. Recent advancements integrate generative priors, such as diffusion models, into multislice electron ptychography reconstructions via posterior sampling techniques, yielding enhanced structural fidelity and robustness for atomic-scale 3D imaging of crystals. These methods achieve super-resolution by correcting probe aberrations and provide depth-resolved information with axial resolutions down to 2 Å and transverse resolutions of 0.7 Å, representing a 13.5-fold improvement in information content over conventional approaches.[23][24][25]
Vectorial ptychography variants overcome the scalar wave assumption of standard methods by accounting for the full vectorial electromagnetic field, including probe polarization states and aberrations that arise in high-numerical-aperture or anisotropic imaging scenarios. These techniques utilize the Jones matrix formalism to describe non-scalar wave-sample interactions, representing the specimen's response as a 2×2 complex matrix that modulates incident field components. In vectorial Fourier ptychography, for example, variable-angle illumination facilitates the quantitative retrieval of the full Jones matrix, enabling high-resolution mapping of polarimetric properties like birefringence and dichroism in specimens. Extensions to circular polarization further broaden applicability by modeling chiral or magneto-optical responses within the same framework.[26][27]
Multislice and vectorial ptychography find key applications in imaging thick biological samples, where multiple scattering obscures phase signals in traditional setups, and in magnetic materials, where vectorial sensitivity reveals domain structures and polarization-dependent scattering.[21][23]
Reconstruction Algorithms
Classical Iterative Methods
Classical iterative methods for ptychographic reconstruction address the phase problem by enforcing consistency between measured intensity data in the Fourier domain and estimated object support or probe constraints in the real domain through repeated projections. These algorithms, originating from general phase retrieval techniques, were adapted to ptychography to exploit overlapping illumination regions for improved convergence and robustness.[28][29]
The Error Reduction (ER) algorithm alternates between enforcing the measured Fourier amplitudes on the current estimate of the exit wave and applying real-space support constraints to the inverse-transformed result. In each iteration, the Fourier transform of the current object-probe estimate is adjusted to match the square root of the measured intensities (modulo phase), followed by an inverse Fourier transform and masking to the known support region. This process reduces discrepancies iteratively but can stagnate in local minima without additional feedback mechanisms.[28][30]
To mitigate stagnation in ER, the Hybrid Input-Output (HIO) method introduces a feedback parameter into the update outside the support region. Developed by Fienup for general phase retrieval, HIO computes an output estimate \gamma by enforcing the Fourier modulus constraint, then sets the new input to \gamma inside the support and to \psi - \beta \gamma outside it, where \beta is a tunable feedback factor (typically 0.5–1.0). This negative feedback drives the iterate away from configurations that violate the support constraint, promoting escape from stagnant solutions while maintaining consistency within the constraints. In ptychography, HIO is often integrated sequentially with position-specific projections to handle overlapping data.[28][31]
The Difference Map (DM) algorithm projects between multiple constraint sets to ensure global consistency across all probe positions. It models the reconstruction as finding a fixed point in the intersection of real-space overlap constraints and Fourier modulus constraints, using the update \psi_{n+1} = \psi_n + P_F\left( 2 P_O(\psi_n) - \psi_n \right) - P_O(\psi_n), where P_O and P_F are the projectors onto the overlap and Fourier modulus constraint sets, respectively. This formulation, introduced for ptychography by Thibault et al., enables parallel processing of diffraction patterns and simultaneous probe and object refinement, enhancing stability for extended specimens.[32][33]
The extended Ptychographical Iterative Engine (ePIE) iteratively updates both the object transmission function and the probe function using the overlap regions. Starting with initial probe and object guesses, for each scan position j, the exit wave \psi_j = O \cdot P_j (where O is the object and P_j the shifted probe) is Fourier-transformed, its amplitudes enforced to match the measurements, and inverse-transformed to yield an updated exit wave estimate \psi_j'. The object update in the overlap region is then O'(\mathbf{r}) = O(\mathbf{r}) + \alpha \frac{P_j^*(\mathbf{r})}{\max_{\mathbf{r}} |P_j(\mathbf{r})|^2} \left( \psi_j'(\mathbf{r}) - \psi_j(\mathbf{r}) \right), with \alpha a step size (often 0.5–1.0); the probe is refined analogously with the roles of object and probe exchanged. This position-adaptive approach, building on the original PIE, allows self-consistent retrieval without prior probe knowledge.[29]
A pseudocode overview for ePIE illustrates the iterative structure:
Initialize: object O, probe P, measured intensities I_j for each position j
For iteration k = 1 to N:
    For each scan position j:
        Compute exit wave ψ_j = O_j · P, where O_j is the object region illuminated at position j
        Compute Fourier transform Ψ_j = F(ψ_j)
        Enforce amplitudes: Ψ_j'(q) = sqrt(I_j(q)) · exp(i arg(Ψ_j(q)))
        Inverse transform: ψ_j' = F⁻¹(Ψ_j')
        Update object patch: O_j ← O_j + α · P* / max|P|² · (ψ_j' − ψ_j)
        Update probe: P ← P + β · O_j* / max|O_j|² · (ψ_j' − ψ_j)
Output: reconstructed O and P
Here α and β are step sizes, typically 0.5–1.0, and * denotes complex conjugation.
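The ePIE pseudocode translates into a compact, self-contained NumPy sketch on synthetic data. For simplicity the probe is assumed known, so only the object update is exercised; the synthetic pure-phase object, Gaussian probe, 75% overlap scan, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 64, 32                                  # object and probe grids (pixels)

obj_true = np.exp(0.5j * rng.random((N, N)))   # synthetic pure-phase object
y, x = np.mgrid[:M, :M] - M // 2
probe = np.exp(-(x**2 + y**2) / (2 * (M / 5) ** 2)).astype(complex)

# Raster scan with 8 px steps -> 75% linear overlap for a 32 px probe
positions = [(r, c) for r in range(0, N - M + 1, 8) for c in range(0, N - M + 1, 8)]
I_meas = [np.abs(np.fft.fft2(probe * obj_true[r:r+M, c:c+M])) ** 2
          for r, c in positions]

def data_error(o):
    """Summed squared intensity misfit over all scan positions."""
    return sum(np.sum((np.abs(np.fft.fft2(probe * o[r:r+M, c:c+M])) ** 2 - I) ** 2)
               for (r, c), I in zip(positions, I_meas))

obj = np.ones((N, N), dtype=complex)           # flat initial object guess
err0 = data_error(obj)
alpha = 1.0
for _ in range(100):
    for (r, c), I in zip(positions, I_meas):
        psi = probe * obj[r:r+M, c:c+M]                  # exit wave estimate
        Psi = np.fft.fft2(psi)
        Psi = np.sqrt(I) * np.exp(1j * np.angle(Psi))    # Fourier constraint
        psi_new = np.fft.ifft2(Psi)
        # ePIE object update: O += alpha * P* / max|P|^2 * (psi' - psi)
        obj[r:r+M, c:c+M] += (alpha * np.conj(probe) / np.max(np.abs(probe)) ** 2
                              * (psi_new - psi))
err = data_error(obj)
```

On this noiseless synthetic dataset the intensity misfit drops by orders of magnitude within a few dozen sweeps; adding the analogous probe update turns this sketch into the full ePIE of the pseudocode.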
These methods typically converge to sufficient accuracy within 50–200 iterations, depending on overlap fraction (60–80% recommended) and noise levels, with error metrics like mean squared error dropping below 1% of initial values.[34]
Advanced Computational Techniques
Advanced computational techniques in ptychography have evolved to address challenges in noise, computational efficiency, and data scarcity, incorporating regularization strategies, machine learning integrations, and optimized hardware utilization. Regularization methods, such as total variation (TV) priors, promote smoothness in reconstructions while preserving edges, effectively reducing noise in oversampled datasets. For instance, anisotropic TV regularization has been applied to sparsely sampled Fourier ptychography, improving robustness to noise by jointly estimating object and pupil functions. Similarly, sparsity priors enforce low-rank representations of the object or probe, aiding reconstruction from undersampled data where overlap between scans is limited. These approaches build on classical iterative methods by adding constrained optimization to stabilize convergence in noisy environments.
Neural networks have advanced phase retrieval through physics-informed deep learning frameworks, which embed forward models like the Fourier transform directly into network architectures to ensure physical consistency. For example, unsupervised physics-informed neural networks (PINNs) accelerate reconstructions by 100- to 1000-fold compared to traditional methods, while maintaining high fidelity in quantitative phase imaging. Recent developments from 2023 to 2025 emphasize fine-tuning pre-trained models, such as adapting large convolutional networks to ptychographic data, which enhances generalization and reduces training data needs; one such strategy achieved superior reconstruction quality on diverse datasets by leveraging transfer learning from ImageNet-pretrained backbones.
Real-time reconstruction is facilitated by on-the-fly GPU processing, enabling immediate feedback during experiments to adjust scan parameters dynamically. Multi-GPU implementations of algorithms like the multi-mode difference map (DM) support distributed computing, scaling to large datasets with minimal communication overhead and achieving reconstruction times of tens of seconds for high-resolution images. These techniques distribute gradient computations across nodes, outperforming single-GPU baselines in accuracy and speed for synchrotron-scale problems.
Low-dose methods leverage Bayesian approaches to handle photon-limited regimes, using non-convex optimization to minimize required electron or X-ray doses by two orders of magnitude while preserving structural details in cryo-samples. In multislice ptychography, generative priors from diffusion models serve as regularizers, improving atomic-resolution reconstructions; 2025 benchmarks demonstrate that integrating such priors via diffusion posterior sampling yields sub-nanometer 3D maps from sparse data, with error rates reduced by up to 30% over unregularized baselines.
A common formulation for these regularized reconstructions minimizes a loss function that balances data fidelity and prior constraints:
L = \sum ||I_{\text{meas}} - |\mathcal{F}\{\psi\}|^2||^2 + \lambda R(\psi),
where I_{\text{meas}} are measured intensities, \mathcal{F} denotes the Fourier transform, \psi is the exit wave, \lambda is a regularization parameter, and R(\psi) is a prior such as TV or sparsity.
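This objective can be evaluated directly; the sketch below pairs the intensity misfit with a simple total-variation prior on the exit wave. The field sizes, synthetic data, and the choice of an anisotropic sum-of-gradients TV are illustrative assumptions.

```python
import numpy as np

def tv(psi):
    """Anisotropic total variation: summed magnitudes of finite differences."""
    return np.sum(np.abs(np.diff(psi, axis=0))) + np.sum(np.abs(np.diff(psi, axis=1)))

def loss(psi, I_meas, lam):
    """L = || I_meas - |F{psi}|^2 ||^2 + lam * TV(psi)."""
    data = np.sum((I_meas - np.abs(np.fft.fft2(psi)) ** 2) ** 2)
    return data + lam * tv(psi)

rng = np.random.default_rng(4)
psi_true = np.exp(1j * 0.3 * rng.random((32, 32)))    # synthetic pure-phase field
I_meas = np.abs(np.fft.fft2(psi_true)) ** 2            # noiseless measurement

L_true = loss(psi_true, I_meas, lam=0.1)   # data term vanishes; prior remains
L_flat = loss(np.ones((32, 32), dtype=complex), I_meas, lam=0.1)
```

At the true solution the data-fidelity term vanishes and only the prior contributes, so the loss correctly ranks the true field below an uninformed flat guess; gradient-based solvers minimize exactly this trade-off.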
Software tools like Ptychography 4.0 facilitate these advancements by providing efficient coordinate transforms for detector data to reconstruction space, supporting on-the-fly processing in GPU environments.
Advantages
Lensless Imaging and Resolution Enhancement
Ptychography operates as a lensless imaging modality, reconstructing the complex-valued sample transmission function directly from far-field diffraction patterns recorded by a detector, without the need for objective lenses that introduce aberrations such as spherical or chromatic distortions in conventional microscopy.[23] This approach circumvents the limitations imposed by imperfect optics, where lens aberrations typically degrade resolution beyond the diffraction limit defined by the numerical aperture.[35] Instead, the ultimate spatial resolution in ptychography is governed by the illuminating wavelength and the positional stability of the scanning probe, enabling diffraction-limited performance limited only by these fundamental factors.[36]
In electron ptychography, this lensless configuration has facilitated sub-nanometer resolutions, with recent demonstrations achieving below 0.1 nm (sub-Ångström) for atomic-scale imaging in scanning transmission electron microscopy setups.[37] Computational post-processing plays a central role in resolution enhancement, iteratively refining both the incident probe wavefunction and the sample's phase and amplitude to recover high-frequency information that exceeds the capture limits of the detector or probe aperture.[23] This process effectively inverts the forward diffraction model, extending resolution beyond the classical diffraction limit by leveraging redundant overlapping measurements across scan positions.[38]
Compared to traditional lens-based microscopy, ptychography offers superior performance for radiation-sensitive specimens, as it distributes the illumination dose over multiple overlapping probes while computationally reconstructing a high-resolution image from low-dose per-position data, minimizing beam-induced damage.[39] For example, atomic-resolution imaging of two-dimensional materials such as graphene has been realized at room temperature without cryogenic cooling, achieving information-limited resolutions that surpass those of aberration-corrected electron microscopes under similar conditions.[23]
Tolerance to Incoherence and Noise
Ptychography exhibits significant tolerance to partial incoherence in the illuminating beam, enabling reliable reconstructions even when the probe coherence is imperfect. Algorithms model this partial coherence by propagating the mutual intensity function through the imaging system, where the measured intensity in the diffraction plane is given by the convolution of the fully coherent intensity with the Fourier transform of the complex degree of coherence \hat{\gamma}(q), expressed as I_{pc}(q) = I_{fc}(q) \otimes \hat{\gamma}(q).[40] This approach accounts for the statistical stationarity of the mutual intensity J(r_1, r_2), which depends only on the separation \Delta r = r_1 - r_2, allowing the degree of coherence \mu(\Delta x, \Delta y) to be modeled as a Gaussian function \mu(\Delta x, \Delta y) = \exp\left[- ((\Delta x)^2 + (\Delta y)^2) / 2\sigma^2\right].[40] The overlapping scan positions in ptychography provide redundant constraints across multiple diffraction patterns, enhancing robustness to reduced coherence (e.g., when the coherence length \sigma is less than 25% of the probe size), with reconstruction quality improving for overlaps exceeding 40%.[40]
For the simple case of two partially correlated field components \psi_1 and \psi_2 with complex degree of coherence \mu, the measured intensity follows the general interference law I = |\psi_1|^2 + |\psi_2|^2 + 2 \operatorname{Re}(\mu\, \psi_1 \psi_2^*), showing how partial coherence (|\mu| < 1) reduces the modulation depth of the observed signal.[40] This modeling enables ptychographic algorithms to jointly retrieve the object and the coherence properties without prior knowledge of the beam's spatial or temporal coherence, distinguishing it from traditional coherent imaging techniques, which degrade sharply under similar conditions.
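The convolution model above can be sketched numerically. In this illustrative Python snippet (the array size, the sinc-squared test pattern, and the coherence width in pixels are arbitrary choices, not taken from any specific experiment), a fully coherent diffraction intensity is blurred by the Fourier transform of a Gaussian degree of coherence:

```python
import numpy as np

def partially_coherent_intensity(i_fc, sigma_px):
    """Blur a coherent diffraction intensity I_fc(q) with a Gaussian kernel,
    modeling I_pc = I_fc (*) gamma_hat for a Gaussian degree of coherence."""
    n = i_fc.shape[0]
    y, x = np.indices((n, n)) - n // 2
    # Gaussian blur kernel in the detector plane, normalized to unit sum
    kernel = np.exp(-(x**2 + y**2) / (2.0 * sigma_px**2))
    kernel /= kernel.sum()
    # Circular convolution via FFT (ifftshift centers the kernel at the origin)
    return np.real(np.fft.ifft2(np.fft.fft2(i_fc)
                                * np.fft.fft2(np.fft.ifftshift(kernel))))

# Toy example: a sinc-squared "coherent" pattern blurred by partial coherence
n = 64
q = np.hypot(*(np.indices((n, n)) - n // 2))
i_fc = np.sinc(q / 8.0) ** 2
i_pc = partially_coherent_intensity(i_fc, sigma_px=2.0)
```

Because the kernel is normalized, the blur redistributes intensity without changing the total flux, while the peak visibility of the pattern drops, which is exactly the signature of reduced coherence in measured diffraction data.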
Ptychography also demonstrates robust handling of noise, particularly the Poisson-distributed shot noise that dominates photon- or electron-limited measurements. Reconstruction algorithms employ statistical methods, such as maximum-likelihood estimation, which maximize the likelihood of the observed intensities under Poisson statistics and thereby reduce the impact of noise on phase retrieval convergence. This approach outperforms least-squares methods because it directly incorporates the Poisson property that the variance equals the mean intensity, yielding more accurate amplitude and phase estimates even at low signal levels. For instance, in electron ptychography, low-dose imaging at doses of approximately 35–49 electrons per square angstrom has achieved resolutions of 5.8–8.4 Å for biological samples like apoferritin and viral sheaths, using iterative refinements that leverage the full 4D dataset.[41][42]
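The maximum-likelihood idea can be illustrated with a toy calculation (the photon rate, sample count, and search grid below are arbitrary illustrative values). For Poisson counts the natural objective is the negative log-likelihood of a candidate intensity, and its minimizer coincides with the sample mean:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_nll(c, counts):
    """Negative log-likelihood of a constant model intensity c for
    Poisson-distributed counts (constant log(n!) terms dropped)."""
    return counts.size * c - counts.sum() * np.log(c)

# Shot-noise-limited measurements of one detector pixel (~3 photons/frame)
true_intensity = 3.0
counts = rng.poisson(true_intensity, size=10_000)

# Scan candidate intensities; the ML estimate minimizes the NLL
candidates = np.linspace(0.5, 6.0, 1101)
nll = poisson_nll(candidates, counts)
ml_estimate = candidates[np.argmin(nll)]
```

In a full ptychographic solver the same objective is written over all detector pixels and scan positions, with the model intensity given by the forward diffraction calculation, but the statistical principle is identical.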
These noise-tolerant strategies enable operando imaging of beam-sensitive specimens, such as proteins in cryo-electron microscopy, where cumulative radiation damage is minimized by distributing the dose across overlapping probes (e.g., 0.5 electrons per square angstrom per position), preserving structural integrity during dynamic processes.[41] Advanced computational techniques, including those with Poisson maximum-likelihood objectives, further enhance this robustness by adapting to varying noise regimes.
Self-Calibration and Multiple Scattering Inversion
Ptychography's self-calibration capability arises from its ability to jointly reconstruct the illumination probe function and the sample object function without requiring prior knowledge of either, thereby correcting for aberrations and instabilities inherent in the imaging system. This process leverages the redundancy in overlapping diffraction patterns to iteratively refine both components, ensuring that errors in the probe—such as wavefront distortions or focus shifts—are compensated during the reconstruction. Seminal work demonstrated that this joint estimation enhances reconstruction fidelity by adapting to experimental imperfections, such as probe aberrations, without external calibration references.
A key implementation of this self-calibration is the extended ptychographic iterative engine (ePIE), which updates the probe and object estimates in a coupled manner to minimize discrepancies between measured and simulated diffraction data. The probe update rule in ePIE is expressed as
P' = P + \beta \frac{O^*}{\max |O|^2} \left( \psi' - \psi \right),
where P is the current probe estimate, \beta is an adjustable step-size parameter controlling the update strength, O is the object estimate, \psi = P \cdot O is the calculated exit wave, and \psi' is the revised exit wave obtained after replacing the modulus of the propagated wave with the measured diffraction amplitudes. The object estimate is updated by the symmetric rule, with the roles of P and O exchanged and its own step size \alpha. This formulation allows the algorithm to dynamically correct probe variations across scan positions, improving overall image quality and resolution in configurations involving strong scattering. The benefits include robustness to experimental instabilities, such as vibrations or thermal drifts, which would otherwise degrade reconstructions in traditional imaging methods.
To address multiple scattering in thick samples, where forward propagation involves complex interactions beyond the single-scattering approximation, ptychography employs multislice models that divide the specimen into thin parallel slices and simulate wave propagation through each successive layer. This approach inverts the multiple-scattering forward model by iteratively optimizing the transmission functions of all slices against the measured data, accurately recovering phase and amplitude even in volumetrically extended objects. Pioneering demonstrations showed that multislice ptychography resolves features in thick samples at resolutions limited only by the probe size, overcoming depth-of-field constraints in conventional techniques.
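A minimal sketch of the multislice forward model follows (the slice phases, wavelength, pixel size, and slice spacing are arbitrary illustrative values): each thin slice multiplies the wave by its transmission function, and an angular-spectrum propagator carries the wave to the next slice.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Propagate a complex field by dz using the paraxial (Fresnel)
    angular-spectrum transfer function."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    f2 = fx[:, None]**2 + fx[None, :]**2
    H = np.exp(-1j * np.pi * wavelength * dz * f2)   # |H| = 1: unitary
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multislice_exit_wave(probe, slices, dz, wavelength, dx):
    """Forward model: transmit through each thin slice, propagate to the next."""
    psi = probe
    for t in slices:
        psi = psi * t                                # thin-slice transmission
        psi = angular_spectrum_propagate(psi, dz, wavelength, dx)
    return psi

# Toy example: two weak phase slices, 632.8 nm light, 1 um pixels, 10 um spacing
n = 64
rng = np.random.default_rng(2)
slices = [np.exp(1j * 0.1 * rng.standard_normal((n, n))) for _ in range(2)]
probe = np.ones((n, n), complex)
psi_exit = multislice_exit_wave(probe, slices, dz=10e-6,
                                wavelength=632.8e-9, dx=1e-6)
```

Multislice inversion amounts to differentiating this forward chain with respect to every slice's transmission function and fitting all slices jointly against the measured diffraction patterns; note that for pure phase slices the model conserves the total wave energy, since both the transmission and the propagator are unitary.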
The integration of multislice propagation enables 3D tomographic reconstructions in ptychography by combining multi-slice inversions with angular sampling, yielding high-fidelity volumetric images with reduced data requirements compared to full tomographic scans. This capability adapts to scattering-dominated regimes, such as in electron or X-ray imaging of dense materials, and supports quantitative analysis of internal structures without destructive sectioning. Overall, these self-calibration and multiple-scattering inversion strategies make ptychography particularly suited for imaging dynamic or aberrant systems, enhancing its utility in high-resolution volumetric studies.[21]
Limitations
Computational and Data Requirements
Ptychography reconstruction involves processing high-dimensional datasets, particularly in 4D-STEM modalities where each scan position yields a full diffraction pattern, resulting in datasets significantly larger than those from conventional STEM imaging.[43] For large-scale scans, such as those covering extended fields of view at high resolution, data volumes routinely exceed 100 GB, demanding substantial storage and computational resources.[44] Iterative algorithms, essential for phase retrieval and enforcing overlap constraints, require accelerated hardware like multi-GPU systems to handle the matrix operations and Fourier transforms across thousands of scan positions.[45]
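The scale of these datasets follows directly from the 4D-STEM geometry, since every probe position stores a full diffraction frame. A back-of-the-envelope calculation with illustrative (not vendor-specific) parameters:

```python
# Illustrative 4D-STEM dataset size: one detector frame per scan position
scan_points = 1024 * 1024      # 1k x 1k probe positions
frame_pixels = 256 * 256       # diffraction pattern recorded at each position
bytes_per_pixel = 2            # 16-bit counts
total_gb = scan_points * frame_pixels * bytes_per_pixel / 1e9
print(f"raw dataset: {total_gb:.0f} GB")
```

Even this modest configuration lands well above the 100 GB mark, which is why sparse sampling, on-detector binning, and the compression schemes discussed below are attractive.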
A primary challenge arises from memory demands for storing overlap regions between adjacent illuminations, where consumption scales quadratically with the number of scan points in certain implementations, limiting reconstructions to smaller datasets without distributed computing.[46] Real-time processing, desirable for experimental feedback, remains constrained even with 2025 advancements in GPU-accelerated algorithms, as full iterations on large datasets can take minutes to hours depending on overlap ratios and hardware.[47] These limitations often necessitate compromises, such as reducing scan density or overlap to fit within available memory, which can trade off reconstruction fidelity for feasibility.
To mitigate these burdens, data reduction techniques preprocess diffraction patterns by discarding redundant or low-information pixels prior to reconstruction, as demonstrated by a 2023 Argonne National Laboratory algorithm that achieves up to 80% data compression while preserving image quality.[48] Such methods not only lower storage needs but also accelerate convergence in iterative solvers, enabling processing on standard high-performance computing clusters.[49] Overall, ptychography's computational trade-offs require balancing higher resolution—demanding denser scans and more overlaps—against processing speed, often guided by application-specific priorities like throughput in synchrotron experiments.[50]
Experimental and Practical Challenges
Ptychography experiments demand exceptional mechanical stability to achieve sub-nanometer scan precision, as even minor vibrations between the beam and sample can introduce incoherent blurring in diffraction patterns, severely limiting spatial resolution. For instance, random high-frequency vibrations degrade pattern visibility and cause crosstalk, reducing achievable resolutions to tens of nanometers in affected setups. This requirement for nanometer-scale stability in scanning stages and environmental isolation poses significant hurdles in laboratory implementations, particularly for long-duration scans at synchrotron facilities.[51][52]
Beam damage represents a critical constraint, especially for biological specimens, where high-energy illumination induces structural degradation that compromises sample integrity and resolution. In cryo-ptychography, radiation damage from X-ray or electron beams limits exposures, often necessitating cryogenic conditions to preserve frozen-hydrated states, yet low coherent flux still weakens signal-to-noise ratios in diffraction data. This trade-off forces researchers to balance dose efficiency with imaging quality, as insufficient flux leads to noisy patterns that hinder phase retrieval.[53][54]
Experimental setups in ptychography are inherently complex, requiring precise alignment for variants like Bragg or multislice configurations to ensure accurate interslice distances and focus-to-sample positioning. At synchrotrons, achieving high numerical apertures with optics such as multilayer Laue lenses demands coherent illumination and beam cleaning via pinholes, while the high cost and competitive access to these facilities restrict widespread adoption. Misalignments by even 10% can distort the effective Fresnel number, degrading reconstruction fidelity in thick or in-situ samples.[55][56]
Recent advancements have highlighted ongoing issues with partial coherence in emerging sources, particularly in terahertz ptychography, where low signal-to-noise ratios and source instabilities challenge phase retrieval under low-overlap conditions. In 2025 experiments, terahertz setups using quantum cascade lasers faced difficulties with convergent illumination and noise, limiting resolutions to around 3.75 wavelengths despite partial coherence lengths of tens of micrometers.[57][58]
To mitigate these hurdles, hybrid experimental setups combining multiple illumination modes or optics have been developed, enhancing coherence and stability in multislice configurations without relying solely on synchrotron access. Additionally, AI-assisted design tools like Ptychoscopy, introduced in 2025, streamline parameter optimization—such as probe convergence angles and defocus values—via user-friendly interfaces that incorporate microscope calibrations, guiding setups toward higher dose efficiency and resolution. These tools evaluate sampling impacts in real and reciprocal space, reducing trial-and-error in complex alignments.[59]
Applications
X-ray and Synchrotron Imaging
Ptychography has emerged as a powerful technique for high-resolution imaging in X-ray and synchrotron facilities, enabling nanoscale structural and chemical analysis of materials without the limitations of traditional lenses. At synchrotron sources, the coherent X-ray beams produced facilitate lensless phase retrieval, achieving resolutions below 10 nm while penetrating samples up to several micrometers thick, which is essential for studying bulk materials like semiconductors and energy storage devices. This penetration depth arises from the high energy of hard X-rays (typically 5-20 keV), allowing non-destructive imaging of extended, heterogeneous samples that would be challenging with electron-based methods.[60][61]
In battery research, operando X-ray ptychography provides dynamic nanoscale insights into electrochemical processes, such as phase transformations and ion diffusion in lithium-ion cells. For instance, hard X-ray ptychography has been applied to thin-film all-solid-state batteries, revealing morphological changes and strain evolution during charging cycles with sub-20 nm resolution, enabling real-time monitoring under operational conditions without significant beam-induced damage. This approach supports the design of improved electrodes by visualizing nanoscale degradation mechanisms in situ.
Hyperspectral 3D ptychotomography extends these capabilities by combining ptychographic reconstruction with tomographic scanning and energy-dispersive detection, yielding volumetric chemical maps of complex materials. Recent implementations at synchrotron beamlines have demonstrated 3D hyperspectral imaging of battery cathode particles, resolving elemental distributions and oxidation states across volumes exceeding 10 μm³ with resolutions around 15 nm. Such techniques leverage broadband illumination to capture multi-energy diffraction patterns in a single acquisition, reducing scan times for thick samples.[62]
To address imaging speed for larger fields of view, multibeam X-ray ptychography employs nano-lithographically patterned apertures to generate multiple coherent probes simultaneously, accelerating data collection by factors of 10-100 while maintaining nanoscale resolution. Demonstrated at energies up to 20 keV, this method has imaged extended semiconductor structures, such as lithography test patterns, in minutes rather than hours, making it suitable for time-resolved studies of dynamic processes in thick samples.[63]
Key examples include strain mapping in semiconductor heterostructures, where Bragg ptychography quantifies lattice distortions in silicon-on-insulator devices with 5-10 nm spatial resolution, revealing relaxation patterns at interfaces critical for device performance. Similarly, for defect analysis, ptychographic tomography uncovers buried 3D voids and dislocations in perovskite films, providing quantitative metrics on defect densities that influence optoelectronic properties. These applications highlight ptychography's role in materials science for non-destructive quality control.[64][65][66]
Recent developments in broadband spectroscopic ptychography setups further enable elemental mapping by integrating energy-dispersive detectors with focused beams, allowing simultaneous retrieval of phase, amplitude, and X-ray absorption spectra. Optimized optics, such as Kirkpatrick-Baez mirrors tuned for polychromatic beams, have achieved hyperspectral elemental contrast in battery materials, distinguishing transition metal distributions with sub-20 nm precision and probing chemical speciation across energy ranges of 1-2 keV. This advancement supports comprehensive analysis of multi-component systems in synchrotron environments.[62][67]
Electron Microscopy
Electron ptychography in transmission electron microscopy (TEM) leverages four-dimensional scanning TEM (4D-STEM) data to achieve sub-angstrom resolution imaging of materials, enabling the reconstruction of complex transmission functions without relying on aberration-corrected lenses. This technique scans a focused electron probe across overlapping regions of a sample, capturing diffraction patterns with pixelated detectors, and computationally retrieves both amplitude and phase information to surpass the resolution limits of conventional TEM. Unlike X-ray methods, electron ptychography operates in vacuum with high-energy electrons, providing unparalleled atomic-scale detail for solid-state materials while navigating challenges like multiple scattering.[35]
A primary application is atomic-scale imaging on conventional TEM setups, where electron ptychography has demonstrated resolutions down to 0.5 Å on standard instruments, democratizing high-end capabilities for materials characterization. In 4D-STEM configurations, it excels at visualizing defects such as single-atom vacancies in two-dimensional materials like monolayer transition metal dichalcogenides, revealing strain fields and lattice distortions with atomic precision. For instance, direct observation of sulfur vacancies in MoS₂ has highlighted local charge redistribution and electronic structure perturbations around defects. These capabilities are particularly valuable in materials science for studying nanoscale imperfections that influence properties like conductivity and mechanical strength.[68][69]
Advances in electron ptychography include low-dose protocols tailored for beam-sensitive materials, such as metal-organic frameworks (MOFs) and covalent organic frameworks (COFs), where electron doses below 10 electrons per Ų preserve fragile structures during imaging. This approach has enabled atomic-level visualization of linker orientations and pore architectures in beam-sensitive samples, minimizing radiation damage that plagues traditional TEM. Recent innovations incorporate multislice propagation models augmented by diffusion-based generative priors, as developed at Cornell University, to enhance 3D reconstructions of thick specimens by iteratively refining atomic positions through learned crystal structure distributions. These methods improve depth resolution to ~2.5 nm while accounting for multiple scattering effects.[70]
Exemplary uses encompass crystal structure reconstruction, where ptychographic phase retrieval has mapped buried heterointerfaces in van der Waals materials like hBN-graphene stacks, resolving atomic layering and twist angles with 0.57 Å lateral accuracy. In magnetic materials, it facilitates domain mapping by integrating Lorentz effects into phase reconstructions, imaging nanoscale magnetic textures in thin cobalt films (~4 atoms thick) to reveal domain walls and skyrmion-like configurations. The practical implementation relies on high-speed pixelated detectors, such as hybrid direct electron detectors, which capture full diffraction patterns at rates exceeding 1000 frames per second, making electron ptychography feasible on routine TEM platforms and enabling real-time data acquisition for dynamic studies.[71][72][37]
Optical and Emerging Modalities
Optical ptychography, particularly through Fourier ptychography microscopy (FPM), has advanced biomedical imaging by enabling high-resolution, wide-field visualization without the need for high numerical aperture (NA) lenses, which traditionally limit the field of view in conventional microscopy.[17] This technique synthesizes a high-resolution image from multiple low-resolution images captured under varying illumination angles, making it suitable for applications like digital pathology where large tissue samples must be scanned efficiently. A 2025 review from the University of Strathclyde highlights FPM's role in pathology, demonstrating its ability to achieve sub-micron resolution over millimeter-scale fields, facilitating automated analysis of unstained slides for cancer detection and reducing the need for costly high-NA objectives.[17]
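The principle behind FPM can be shown with a small forward-model simulation in Python (the grating period, pupil radius, and illumination tilt are arbitrary toy values): under on-axis illumination a fine phase grating diffracts entirely outside the low-NA pupil and produces no image contrast, whereas a tilted illumination shifts one diffraction order into the pupil, where it interferes with the shifted zeroth order and becomes visible.

```python
import numpy as np

def fpm_lowres_image(obj, pupil, shift):
    """Low-NA image under tilted illumination: the tilt shifts the object
    spectrum before the objective pupil low-pass filters it."""
    spec = np.fft.fftshift(np.fft.fft2(obj))
    spec = np.roll(spec, shift, axis=(0, 1))           # illumination angle
    return np.abs(np.fft.ifft2(np.fft.ifftshift(spec * pupil)))**2

n = 64
yy, xx = np.indices((n, n)) - n // 2
pupil = (np.hypot(yy, xx) <= 8).astype(float)          # low-NA objective pupil
obj = np.exp(1j * 0.5 * np.cos(2 * np.pi * xx / 4))    # period-4 phase grating

flat = fpm_lowres_image(obj, pupil, (0, 0))    # orders at +/-16 px: blocked
fringes = fpm_lowres_image(obj, pupil, (0, 8)) # shifted order and DC both pass
```

FPM reconstruction runs this logic in reverse: each tilted-illumination image constrains a different pupil-sized region of the object spectrum, and iterative stitching of the overlapping regions synthesizes an effective NA far beyond that of the physical objective.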
In live cell imaging, optical ptychography provides label-free, quantitative phase contrast that reveals cellular dynamics without phototoxicity from stains or high-intensity light. Early demonstrations showed ptychography enabling high-contrast imaging of unstained live cells, such as neurons and stem cells, by reconstructing both amplitude and phase from diffraction patterns, achieving resolutions down to 400 nm.[73] This approach supports real-time monitoring of cellular processes like migration and division, as seen in commercial systems that use ptychography for automated tracking in time-lapse studies.[74]
Emerging modalities extend ptychography to terahertz (THz) frequencies, leveraging the non-ionizing nature of THz waves for non-destructive testing of materials and biological samples. THz ptychography reconstructs complex-valued images from intensity measurements, offering penetration depths of millimeters in non-conductive materials, ideal for inspecting composites or layered structures without damage. A 2022 study demonstrated THz ptychography for imaging breast cancer tissues on digital pathology slides, achieving efficient large field-of-view reconstructions with sub-wavelength resolution using optimized scanning strategies.[75]
Recent developments incorporate neural networks to create generalizable models for ptychography reconstruction across optical and emerging modalities, reducing computational demands and improving adaptability to diverse experimental conditions. A probe-centric deep learning framework, introduced in 2025, trains a single physics-informed neural network to handle unseen datasets from multiple setups, enabling real-time feedback and steering during imaging experiments in both visible and THz regimes.[76] This approach enhances reconstruction fidelity for dynamic samples, such as live cells or THz-inspected materials, by generalizing beyond specific illumination or probe configurations.[76]
History
Origins in Crystallography
Ptychography originated as a method to address the phase problem in crystallography, where the phases of diffracted waves cannot be directly measured, only their intensities, hindering the reconstruction of atomic structures via Fourier synthesis. In 1969, Walter Hoppe proposed using a tightly focused coherent electron beam to illuminate overlapping regions of a crystalline sample, producing diffraction patterns with broadened and interfering Bragg peaks that provide the necessary redundancy for phase retrieval.[77] This approach aimed to solve the phase problem by exploiting the interference from translated beam positions, where each pattern shares common structural information with its neighbors.[77]
The term "ptychography," derived from the Greek word for "folding," was coined by Rainer Hegerl and Walter Hoppe in 1970 to describe this technique, emphasizing the "folding" of overlapping diffraction data to enable phase evaluation in generalized diffraction scenarios for electron microscopy. Their work built on Hoppe's initial idea by formalizing the dynamic theory of crystal structure analysis through electron diffraction in an inhomogeneous primary wave field, highlighting how spatial overlap creates data redundancy to uniquely determine the phase.
Despite these theoretical foundations, practical implementation was hindered by the era's computational limitations, including insufficient memory and processing power for the iterative algorithms required to handle the highly redundant datasets from multiple overlapping patterns. Hoppe himself later described the concept as a "nearly forgotten old idea" due to these challenges.[78] Key milestones in the 1970s included theoretical papers exploring the redundancy in ptychographic data, such as extensions by Hoppe and collaborators that analyzed how overlap ratios ensure convergence in phase reconstruction, laying groundwork for future algorithmic developments.
Development of Reconstruction Algorithms
Building on earlier theoretical work, including formalizations by Rodenburg and Bates in 1992 for electron microscopy and by Chapman in 1996 for X-rays, John Rodenburg and collaborators in the 1990s pioneered reconstruction algorithms for ptychography in coherent imaging. They adapted the error reduction (ER) and hybrid input-output (HIO) phase retrieval methods, originally formulated by Fienup for single diffraction patterns, to the redundant, overlapping data from scanned probes in ptychographic setups. These adaptations addressed stagnation issues in ER by leveraging HIO's feedback mechanism outside the object support, enabling initial theoretical and experimental demonstrations in electron microscopy that exceeded conventional resolution limits. A key early milestone was the 1998 experimental validation using Wigner distribution deconvolution alongside these iterative methods, which recovered both the specimen structure and the illuminating wave from electron diffraction patterns of crystalline samples.
The transition from theoretical frameworks to practical implementations accelerated in the early 2000s, culminating in the introduction of the ptychographical iterative engine (PIE) algorithm in 2004 by Rodenburg and colleagues.[79] This novel phase retrieval approach extended ER principles to multiple overlapping illuminations, iteratively updating the object estimate across probe positions while enforcing measured intensities, thus overcoming limitations of prior non-iterative deconvolution techniques and achieving lensless transmission microscopy with a movable aperture.[79] The PIE method demonstrated robustness to noise and partial coherence, marking a shift toward broader applicability beyond crystalline electron imaging.
A significant advancement came in 2009 with the extended ptychographical iterative engine (ePIE) developed by Maiden and Rodenburg, which explicitly handled unknown probe functions by simultaneously refining both the specimen transmission and illumination parameters during iterations.[80] This refinement improved convergence for noisy datasets and incomplete overlaps, building on PIE by incorporating probe self-correction via error metrics in the overlap regions.[80] These algorithmic innovations facilitated proofs-of-concept in X-ray and electron modalities, such as the 2007 hard X-ray demonstration that imaged extended objects at synchrotron sources without lenses.
Modern Advances and Adoption
In the 2010s, ptychography experienced significant growth through the introduction of Fourier ptychography, which extended the technique to visible light microscopy for wide-field, high-resolution imaging without specialized optics.[81] This boom was catalyzed by Zheng et al.'s 2013 demonstration of Fourier ptychographic microscopy (FPM), achieving resolutions beyond the diffraction limit of conventional microscopes using iterative phase retrieval on low-resolution images under variable illumination.[81] Concurrently, electron ptychography gained traction in scanning transmission electron microscopy (STEM) via 4D-STEM, enabling phase contrast and structural mapping at the atomic scale, with early adoptions in materials science for strain and orientation analysis by the late 2010s.[82]
The 2020s marked a shift toward integration of machine learning, particularly neural networks, to accelerate reconstructions and handle noisy or incomplete data in ptychographic imaging.[76] Physics-informed deep neural networks have enabled generalizable reconstructions across diverse experimental conditions, reducing computation times from hours to seconds while maintaining high fidelity.[76] Real-time ptychography emerged as a key advance, with edge-computing frameworks allowing on-the-fly inversions during data acquisition, as shown in 2023 workflows for X-ray imaging that process streams at synchrotron facilities.[83] In electron microscopy, low-dose protocols advanced in 2024, achieving sub-nanometer resolution in cryo-samples with minimal beam exposure to preserve beam-sensitive biological structures like proteins.[42]
Adoption has expanded from research laboratories to broader scientific infrastructure, facilitated by open-source tools that democratize experimental design and analysis. The 2025 release of Ptychoscopy, a Python-based software, streamlines parameter optimization for electron ptychography, aiding users in achieving optimal overlap and probe conditions for high-quality reconstructions.[43] Hyperspectral X-ray ptychography has seen increased use in synchrotron-based materials characterization, enabling simultaneous structural and chemical mapping in 3D with broadband detectors.[62]
Key milestones include routine sub-nanometer resolutions in electron ptychography by 2024, applied to diverse samples from semiconductors to biomolecules, surpassing traditional STEM limits without aberration correction.[42] Extensions to terahertz frequencies have further broadened applicability, with untrained neural networks in 2025 enabling phase retrieval in non-visible regimes for imaging complex media like textiles or concealed objects.[57]