
Wiener deconvolution

Wiener deconvolution is a technique that applies the Wiener filter to reverse the effects of convolution in the presence of additive noise, aiming to recover an original signal from a degraded observation by minimizing the mean square error between the estimate and the true signal. It operates primarily in the frequency domain, where the observed signal is modeled as the convolution of the desired signal with a known impulse response (or system response) plus noise, and the deconvolution filter balances restoration against noise amplification. The method originates from the foundational work of Norbert Wiener on optimal linear filtering of stationary time series, initially developed during World War II for anti-aircraft fire control and published in his 1949 book Extrapolation, Interpolation, and Smoothing of Stationary Time Series. Wiener's approach provided a statistical framework for estimating signals under noise, assuming wide-sense stationarity and using power spectral densities to derive the filter. The specific application to deconvolution emerged later, particularly in geophysics, where Enders Robinson and Sven Treitel adapted it in the 1950s and 1960s to address seismic data processing challenges, such as compressing wavelets and enhancing resolution. Mathematically, for a discrete-time model where the observed signal x relates to the desired signal y via x = y * g + v (with g as the system impulse response and v as noise), the Wiener deconvolution filter in the z-domain is given by H(z) = \frac{S_{yx}(z)}{S_{xx}(z)}, or more specifically for uncorrelated signal and noise, H(z) = \frac{S_{yy}(z) G(1/z)}{S_{vv}(z) + S_{yy}(z) G(z) G(1/z)}, where S_{yy} and S_{vv} are the power spectral densities of the desired signal and noise, respectively, and G(z) is the z-transform of g. This formulation incorporates a noise term to regularize the inverse operation, preventing division by small values that would amplify noise. In the continuous-time frequency domain, the filter simplifies to H(j\omega) = \frac{S_{yx}(j\omega)}{S_{xx}(j\omega)}, assuming uncorrelated signal and noise. Wiener deconvolution finds broad applications in fields requiring restoration of blurred or distorted data, including seismic exploration for wavelet compression and reflectivity estimation, image processing for deblurring photographs or microscopic images, and audio signal enhancement. Despite its optimality under linearity, Gaussianity, and stationarity assumptions, limitations such as sensitivity to model mismatches have led to extensions like non-stationary variants (e.g., Gabor deconvolution) and integration with modern techniques like deep learning for improved performance in real-world scenarios.

Overview

Definition

Wiener deconvolution is the application of the Wiener filter to reverse the effects of convolution in signals corrupted by additive noise, aiming to recover an estimate of the original signal from an observed degraded version. In this context, deconvolution serves as the inverse operation to convolution, where the observed signal y(t) is modeled as the convolution of the original signal x(t) with a known impulse response h(t), plus additive noise n(t): y(t) = (h * x)(t) + n(t). The Wiener deconvolution estimates \hat{x}(t) by passing y(t) through an inverse filter g(t), producing \hat{x}(t) = (g * y)(t). This technique relies on several key assumptions about the underlying processes. The signals x(t) and y(t) are treated as wide-sense stationary random processes, meaning their statistical properties, such as means and autocorrelations, remain invariant under time shifts. Additionally, the noise n(t) is assumed to be uncorrelated with the original signal x(t), ensuring that the estimation process can separate the degradation effects from the noise without cross-terms between them. The primary objective of Wiener deconvolution is to design the inverse filter g(t) such that the mean square error (MSE) between the estimated signal \hat{x}(t) and the true signal x(t) is minimized, defined as E[(x(t) - \hat{x}(t))^2]. This optimization yields the optimal linear estimator under the given assumptions, balancing the restoration of the convolved signal against noise amplification. In a basic block diagram, the observed signal y(t) is input to the Wiener inverse filter g(t), whose output is the restored estimate \hat{x}(t).
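This pipeline can be made concrete with a short numerical sketch. The following Python fragment (NumPy only) simulates the degradation model y(t) = (h * x)(t) + n(t) and applies a frequency-domain inverse filter of the Wiener form derived later in this article; the pulse signal, Gaussian-shaped kernel, noise level, and the constant K standing in for the noise-to-signal PSD ratio are all illustrative assumptions, not prescribed by the method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Original signal x[k]: a few pulses to be restored.
x = np.zeros(512)
x[100:140] = 1.0
x[300:310] = 2.0

# Known impulse response h[k]: a Gaussian-shaped blurring kernel.
h = np.exp(-0.5 * (np.arange(-16, 17) / 4.0) ** 2)
h /= h.sum()

# Degradation model y = (h * x) + n, built with circular convolution via the FFT.
H = np.fft.fft(h, n=x.size)
y = np.real(np.fft.ifft(np.fft.fft(x) * H)) + 0.05 * rng.standard_normal(x.size)

# Restoration: pass y through an inverse filter g, here the frequency-domain
# Wiener form, with the PSD ratio S_n/S_x replaced by an assumed constant K.
K = 0.01
G = np.conj(H) / (np.abs(H) ** 2 + K)
x_hat = np.real(np.fft.ifft(G * np.fft.fft(y)))
```

Larger K suppresses more noise at the cost of a smoother, less sharply restored estimate; the full formulation below replaces K by the frequency-dependent ratio of the noise and signal power spectral densities.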

Historical Background

The Wiener deconvolution method is named after the American mathematician Norbert Wiener, who developed the foundational concepts of optimal linear filtering during the 1940s as part of wartime efforts to address noise in radar systems and communication channels for anti-aircraft fire control. This work emerged from Wiener's collaboration with engineers at the Massachusetts Institute of Technology, where the challenge of predicting aircraft trajectories amid noisy radar signals necessitated a statistical approach to filtering stationary time series. The core ideas were initially classified but were declassified and formalized in Wiener's influential 1949 book, Extrapolation, Interpolation, and Smoothing of Stationary Time Series, which introduced the Wiener filter as a means to minimize mean-squared error in signal estimation. Parallel to Wiener's efforts, Soviet mathematician Andrey Kolmogorov independently derived equivalent results for discrete-time processes in 1941, contributing to what is now known as the Wiener-Kolmogorov filtering theory; this work was motivated by similar prediction problems in control systems during the early years of World War II. Following the war, the publication of Wiener's book sparked a surge in research during the 1950s and early 1960s, with the filter finding early applications in communications for channel equalization and in control engineering for stabilizing systems in emerging cybernetic technologies. These developments laid the groundwork for deconvolution techniques, as the Wiener filter's ability to invert convolutional distortions while suppressing noise became central to restoring degraded signals. By the 1970s, advances in digital computing enabled the adaptation of Wiener filtering for image processing, marking a shift toward two-dimensional deconvolution in fields like astronomy and medical imaging, where blurred or noisy visuals required restoration. A key milestone in this evolution was the integration of deconvolution into optical imaging during the 1980s, where researchers applied it to correct aberrations in partially coherent systems, enhancing resolution in optical reconstruction tasks. This period solidified the method's role in bridging statistical estimation with practical signal processing, influencing subsequent high-impact contributions in inverse problems.

Theoretical Foundations

Convolution and Deconvolution Basics

In signal processing, a linear time-invariant (LTI) system is characterized by its output being a superposition of scaled and shifted versions of the input, independent of time. For such systems, the output y(t) is given by the convolution of the input signal x(t) with the system's impulse response h(t), expressed as y(t) = (h * x)(t) = \int_{-\infty}^{\infty} h(\tau) x(t - \tau) \, d\tau. Deconvolution seeks to recover the original input x(t) from the observed output y(t) when the impulse response h(t) is known. This is inherently ill-posed, as small perturbations in y(t), such as noise, can lead to large errors in the estimated x(t), and the operator defined by convolution may not be invertible due to its smoothing effects. In the frequency domain, the convolution theorem states that convolution in the time domain corresponds to multiplication of the transforms: Y(f) = H(f) X(f), where uppercase letters denote the Fourier transforms of the respective time-domain signals. The deconvolution would thus invert this as X(f) = Y(f) / H(f), but this amplifies noise significantly at frequencies where |H(f)| is small, exacerbating the ill-posed nature of the problem. Common degradation models in deconvolution include blurring, where h(t) acts as a low-pass filter that attenuates high frequencies; additive noise, which corrupts the signal independently; and combinations thereof, such as a blurred signal plus noise, leading to compounded restoration challenges.
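The noise amplification of naive inverse filtering can be seen in a few lines of Python. The sketch below (NumPy only) blurs a smooth test signal with a Gaussian low-pass kernel, adds a small amount of noise, and divides by H(f); the kernel, noise level, and zero-guard are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1024
x = np.sin(2 * np.pi * 5 * np.arange(N) / N)          # smooth test input

# Gaussian blur: |H(f)| decays rapidly and is nearly zero at high frequencies.
h = np.exp(-0.5 * (np.arange(-16, 17) / 4.0) ** 2)
h /= h.sum()
H = np.fft.fft(h, n=N)

# Observation: circular convolution plus a small amount of noise.
y = np.real(np.fft.ifft(np.fft.fft(x) * H)) + 0.01 * rng.standard_normal(N)

# Naive deconvolution X(f) = Y(f) / H(f), guarding only against exact zeros.
H_safe = np.where(np.abs(H) > 1e-12, H, 1e-12)
x_naive = np.real(np.fft.ifft(np.fft.fft(y) / H_safe))

# The noise at frequencies with tiny |H(f)| dominates the estimate.
print("max|x| =", np.abs(x).max(), " max|x_naive| =", np.abs(x_naive).max())
```

The restored signal is swamped by amplified noise, which is exactly the failure mode the Wiener filter's regularizing denominator is designed to prevent.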

Wiener Filter Principles

The Wiener filter represents a linear time-invariant (LTI) approach to estimating a desired signal x(t) from noisy observations y(t), specifically tailored for wide-sense stationary (WSS) random processes, where the goal is to minimize the mean square error (MSE) defined as E[(x(t) - \hat{x}(t))^2]. This minimization ensures the filter provides the best linear unbiased estimate in the MSE sense, balancing bias and variance under the assumption of stationarity, which implies a constant mean and an autocorrelation that depends only on the time lag. The estimated signal \hat{x}(t) is produced by convolving the observation y(t) with the filter's impulse response h(t), yielding \hat{x}(t) = \int_{-\infty}^{\infty} h(\tau) y(t - \tau) \, d\tau. Central to deriving the Wiener filter is the orthogonality principle, which states that the optimal estimation error e(t) = x(t) - \hat{x}(t) must be uncorrelated with the entire observation process y(s) for all relevant s, i.e., E[e(t) y(s)] = 0. For non-causal filters, this leads to the integral equations in the time domain: \int_{-\infty}^{\infty} h(\tau) R_{yy}(t - \tau) \, d\tau = R_{xy}(t), where R_{yy}(\cdot) is the autocorrelation of y(t) and R_{xy}(\cdot) is the cross-correlation between x(t) and y(t). For causal filters, the condition applies for s \leq t, resulting in the more complex Wiener-Hopf equations, which are typically solved using spectral factorization in the frequency domain. These equations form a set of normal equations whose solution yields the optimal h(t), though direct solution in the time domain can be computationally intensive due to the infinite-dimensional integral. For WSS processes, the frequency domain offers a simplification using power spectral densities (PSDs). The PSD of the signal S_x(f), noise S_n(f), and cross-PSD S_{xy}(f) capture the second-order statistics in the frequency domain via Fourier transforms of the respective correlations. The optimal filter transfer function for the non-causal case is then given by H(f) = S_{xy}(f) / S_{yy}(f), where S_{yy}(f) is the PSD of the observations, expressible for the additive model y(t) = x(t) + n(t) as S_{yy}(f) = S_{xx}(f) + S_{nn}(f) + 2 \operatorname{Re}\{S_{xn}(f)\} if signal and noise exhibit cross-correlation. This spectral formulation avoids solving the time-domain equations directly, leveraging the convolution theorem for efficient computation in stationary settings. For causal filters, the transfer function involves additional spectral factorization to ensure causality.
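In the discrete, finite-length case the normal equations become a Toeplitz linear system in the filter taps, \sum_m g[m] R_{yy}[k - m] = R_{xy}[k]. The sketch below (NumPy and SciPy) estimates the correlations from simulated data and solves that system for a causal FIR Wiener filter; the white desired signal, degradation kernel, noise level, and filter length are illustrative assumptions rather than part of the general theory.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

rng = np.random.default_rng(2)
N, L = 20000, 32

x = rng.standard_normal(N)                       # desired WSS signal (white, for simplicity)
h = np.array([1.0, 0.6, 0.3, 0.1])               # degradation impulse response
y = np.convolve(x, h, mode="full")[:N] + 0.3 * rng.standard_normal(N)

def xcorr(a, b, maxlag):
    """Biased sample correlation R_ab[k] = (1/n) sum_n a[n+k] b[n], for k = 0..maxlag."""
    n = len(a)
    return np.array([np.dot(a[k:], b[:n - k]) / n for k in range(maxlag + 1)])

R_yy = xcorr(y, y, L - 1)                        # autocorrelation of the observation
R_xy = xcorr(x, y, L - 1)                        # cross-correlation of desired and observed

g = solve_toeplitz(R_yy, R_xy)                   # length-L causal FIR Wiener filter taps
x_hat = np.convolve(y, g, mode="full")[:N]       # estimate of the desired signal
```

The Toeplitz structure is what the Levinson-Durbin recursion exploits; the frequency-domain expressions in the following sections avoid building this system altogether.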

Mathematical Formulation

Continuous-Time Model

In the continuous-time formulation of Wiener deconvolution, the observed signal y(t) is modeled as the convolution of an unknown zero-mean wide-sense stationary random process x(t) with a known deterministic impulse response h(t), corrupted by additive zero-mean wide-sense stationary noise n(t) that is uncorrelated with x(t): y(t) = \int_{-\infty}^{\infty} h(\tau) x(t - \tau) \, d\tau + n(t). This setup assumes the convolution represents the blurring or distortion process, as detailed in the foundational theory of linear filtering for stationary processes. The objective is to design a linear time-invariant filter with impulse response g(t) that produces an estimate \hat{x}(t) = \int_{-\infty}^{\infty} g(\tau) y(t - \tau) \, d\tau, minimizing the mean squared error \epsilon = E[(x(t) - \hat{x}(t))^2]. This error metric quantifies the average power of the estimation residual, leveraging the stationarity assumption to ensure time-invariance of the criterion. Taking Fourier transforms yields the frequency-domain observation model Y(f) = H(f) X(f) + N(f), where H(f), X(f), and N(f) are the transforms of h(t), x(t), and n(t), respectively. Due to the uncorrelation between x(t) and n(t), the cross-power spectral density S_{xn}(f) = 0. The mean squared error can then be expressed in the frequency domain using the filter transfer function G(f): \epsilon = \int_{-\infty}^{\infty} \left[ S_{xx}(f) - G(f) S_{yx}(f) - G^{*}(f) S_{xy}(f) + |G(f)|^2 S_{yy}(f) \right] df, where S_{yx}(f) = H(f) S_{xx}(f), S_{xy}(f) = H^{*}(f) S_{xx}(f), S_{yy}(f) = |H(f)|^2 S_{xx}(f) + S_{nn}(f), and the asterisks denote complex conjugates. This integral form arises from Parseval's theorem applied to the error autocorrelation under wide-sense stationarity.

Frequency-Domain Derivation

The frequency-domain derivation of the Wiener deconvolution filter begins with the expression for the mean square error (MSE) between the desired signal x(t) and its estimate \hat{x}(t) = g(t) * y(t), where y(t) is the observed signal given by the convolution of x(t) with the system impulse response h(t) plus additive noise n(t). Assuming wide-sense stationary processes, the error power is expressed in the frequency domain as \epsilon = \int_{-\infty}^{\infty} \left[ |1 - G(f) H(f)|^2 S_x(f) + |G(f)|^2 S_n(f) \right] \, df, where S_x(f), S_n(f), H(f), and G(f) are the power spectral densities (PSDs) of the signal and noise, and the Fourier transforms of h(t) and g(t), respectively. This formulation relies on Parseval's theorem, which equates the time-domain energy of the error to an integral over its frequency-domain representation, justifying the minimization directly in the frequency domain for stationary processes. To find the optimal G(f), differentiate \epsilon with respect to the complex conjugate G^*(f) (treating G(f) and G^*(f) as independent for analytic purposes) and set the result to zero. This yields the condition that the cross-spectrum between the error and the observed signal is zero at every frequency (the orthogonality principle), leading to the explicit form of the Wiener deconvolution filter: G(f) = \frac{H^*(f) S_x(f)}{|H(f)|^2 S_x(f) + S_n(f)}. This solution minimizes the MSE by balancing signal recovery and noise suppression. In this expression, the numerator H^*(f) S_x(f) represents the ideal inverse filter scaled by the signal PSD, which would perfectly recover x(t) in the absence of noise. The denominator |H(f)|^2 S_x(f) + S_n(f) acts as a regularization term, incorporating the noise PSD to attenuate frequencies where noise dominates; specifically, when S_n(f)/S_x(f) \gg |H(f)|^2, G(f) \approx 0, effectively suppressing those components to avoid amplifying noise. Special cases simplify the filter further. For white noise, where S_n(f) is constant, the filter becomes G(f) = \frac{H^*(f)}{|H(f)|^2 + K}, with K = S_n(f)/S_x(f) treated as a constant noise-to-signal ratio, emphasizing low-pass characteristics to favor signal bands over noise. When the impulse response h(t) is fully known, H(f) is computed directly from its Fourier transform, enabling precise construction of G(f) provided the PSDs are estimated or assumed.
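For completeness, the stationary-point condition behind this filter can be written out per frequency (a standard differentiation step, stated here as a brief derivation sketch). Differentiating the integrand of \epsilon with respect to G^*(f) and setting the result to zero gives
-H^{*}(f)\bigl(1 - G(f) H(f)\bigr) S_x(f) + G(f) S_n(f) = 0,
which rearranges directly to the expression G(f) = \frac{H^*(f) S_x(f)}{|H(f)|^2 S_x(f) + S_n(f)} quoted above.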

Discrete Implementation

Digital Signal Processing Adaptation

In digital signal processing, the continuous-time Wiener deconvolution is adapted to discrete-time signals by modeling the observed sequence as a finite-length convolution of the desired signal with a known impulse response, corrupted by additive noise. The discrete model is given by y[k] = \sum_{m=0}^{M-1} h[m] x[k - m] + n[k], for k = 0, 1, \dots, N-1, where x[k] is the desired discrete-time signal, h[m] is the finite impulse response of length M, n[k] is zero-mean noise, and periodic boundary conditions or zero-padding are assumed to handle finite-length sequences. This formulation preserves the additive structure from the continuous case while accounting for the discrete nature of sampled signals. To approximate the continuous Fourier transform, the discrete Fourier transform (DFT) is employed, transforming the time-domain equation into the frequency domain as Y(\omega_k) = H(\omega_k) X(\omega_k) + N(\omega_k), where \omega_k = 2\pi k / N for k = 0, 1, \dots, N-1, and f_s is the sampling frequency relating the discrete frequencies to continuous ones via \omega = 2\pi f / f_s. The DFT enables efficient computation through the fast Fourier transform (FFT) algorithm, which reduces the complexity from O(N^2) to O(N \log N). The discrete Wiener deconvolution filter in the frequency domain is derived as G(\omega_k) = \frac{H^*(\omega_k) S_x(\omega_k)}{|H(\omega_k)|^2 S_x(\omega_k) + S_n(\omega_k)}, where H^*(\omega_k) is the complex conjugate of the DFT of h[m], S_x(\omega_k) is the power spectral density (PSD) of the signal x[k], and S_n(\omega_k) is the PSD of the noise n[k]. This filter minimizes the mean-square error between the desired signal and its estimate \hat{x}[k] = \mathcal{IDFT}\{ G(\omega_k) Y(\omega_k) \}, with the inverse DFT (IDFT) computed via the inverse FFT for practicality. The PSDs are estimated from the data or assumed known, such as a constant S_n(\omega_k) = \sigma_n^2 for white noise. The inherent periodicity of the DFT can introduce circular convolution artifacts, mimicking infinite periodic extensions of finite signals and potentially causing wrap-around errors in the restored output. To mitigate this, zero-padding is applied by extending the sequences y[k] and h[m] with zeros to length N \geq M + L - 1 (where L is the effective signal length), ensuring linear convolution equivalence and reducing these artifacts. Alternatively, windowing functions like the Hann or Hamming window can taper the signals to suppress spectral leakage from discontinuities. Adherence to the sampling theorem is essential to preserve continuous-time properties in the discrete adaptation; the sampling frequency f_s must satisfy the Nyquist criterion, f_s \geq 2 f_{\max}, where f_{\max} is the highest frequency in the signal and noise spectra, preventing aliasing that could distort the spectral estimates and the filter performance. Below this rate, high-frequency components fold into lower frequencies, degrading the deconvolution accuracy.
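A minimal sketch of this discrete, zero-padded formulation is shown below in Python (NumPy only). Treating the signal PSD as a single flat value S_x and the noise as white with variance sigma_n^2 are simplifying assumptions of the sketch; in practice per-bin PSD estimates (see the next subsection) would be passed instead, as arrays matching the padded length.

```python
import numpy as np

def wiener_deconvolve(y, h, S_x, sigma_n):
    """Restore x[k] from y[k] = sum_m h[m] x[k-m] + n[k] with the discrete Wiener filter."""
    N = len(y) + len(h) - 1                 # zero-pad so circular convolution matches linear
    Y = np.fft.fft(y, n=N)
    H = np.fft.fft(h, n=N)
    S_n = sigma_n ** 2                      # white-noise PSD, constant across bins
    G = np.conj(H) * S_x / (np.abs(H) ** 2 * S_x + S_n)
    x_hat = np.real(np.fft.ifft(G * Y))
    return x_hat[:len(y)]                   # trim the zero-padded tail
```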

Numerical Computation Methods

The numerical computation of the Wiener deconvolution filter is typically performed in the frequency domain using the fast Fourier transform (FFT) for efficiency, adapting the discrete filter equation to sampled signals. The process begins by computing the FFT of the observed signal y[k] to obtain Y(\omega), and the FFT of the known impulse response h[k] to get H(\omega). The power spectral densities (PSDs) S_x(\omega) for the signal and S_n(\omega) for the noise are then estimated from available data. The Wiener filter is formed as G(\omega) = \frac{H^*(\omega) S_x(\omega)}{|H(\omega)|^2 S_x(\omega) + S_n(\omega)}, the estimated spectrum is \hat{X}(\omega) = G(\omega) Y(\omega), and the time-domain estimate \hat{x}[k] is recovered via the inverse FFT. PSD estimation is crucial when true spectra are unknown, as the filter's performance depends on accurate separation of signal and noise components. A basic approach is the periodogram method, which estimates the PSD as \hat{S}(\omega) = \frac{1}{N} \left| \sum_{k=0}^{N-1} z[k] e^{-j \omega k} \right|^2, where z[k] is the signal or noise segment and N is its length; this is computed efficiently via FFT but suffers from high variance due to lack of averaging. To mitigate this, Welch's method divides the data into overlapping segments (typically 50% overlap), applies a window (e.g., Hamming) to each, computes the periodogram of each segment, and averages the results, reducing variance at the cost of slightly increased bias while maintaining consistency. This averaged estimate is particularly useful for noisy signals in Wiener deconvolution applications, where S_x(\omega) might be derived from clean signal segments and S_n(\omega) from noise-only periods. When PSDs cannot be directly estimated from data (e.g., limited samples), parametric models are employed, assuming the signal follows an autoregressive (AR) process of order p, with PSD S_x(\omega) = \frac{\sigma_x^2}{|1 - \sum_{m=1}^p a_m e^{-j m \omega}|^2}, where the coefficients a_m and variance \sigma_x^2 are found via methods like the Yule-Walker equations or the Levinson-Durbin recursion; noise is often modeled as white with constant PSD S_n(\omega) = \sigma_n^2. Adaptive estimation can iteratively refine these parameters using techniques like least mean squares on successive data blocks. (Kay, 1988, for AR estimation in spectral analysis). For numerical stability, especially at frequencies where |H(\omega)|^2 S_x(\omega) + S_n(\omega) \approx 0, regularization is applied by adding a small \epsilon > 0 (e.g., 10^{-6} scaled to the signal power) to the noise PSD S_n(\omega), modifying the denominator to |H(\omega)|^2 S_x(\omega) + S_n(\omega) + \epsilon to suppress noise amplification without overly biasing the estimate; this is a practical variant of the ideal Wiener form. The overall computational complexity is O(N \log N) per filter application, dominated by the FFT operations for signals of length N, making it scalable for large datasets in digital signal processing.
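A sketch of this estimation-plus-regularization workflow is given below in Python, assuming SciPy is available and that separate clean-signal and noise-only segments can be identified (an idealization). The segment length, 50% overlap, Hamming window, and the eps floor are illustrative parameters.

```python
import numpy as np
from scipy.signal import welch

def estimate_psds(clean_segment, noise_segment, nfft, nperseg=256):
    """Two-sided Welch PSD estimates on an nfft-point grid (nfft >= nperseg assumed)."""
    _, S_x = welch(clean_segment, window="hamming", nperseg=nperseg,
                   noverlap=nperseg // 2, nfft=nfft, return_onesided=False)
    _, S_n = welch(noise_segment, window="hamming", nperseg=nperseg,
                   noverlap=nperseg // 2, nfft=nfft, return_onesided=False)
    return S_x, S_n

def regularized_wiener_spectrum(Y, H, S_x, S_n, eps=1e-6):
    """Apply G(w) to Y(w); eps (scaled to the signal power in practice) guards
    against near-zero denominators, as discussed above."""
    G = np.conj(H) * S_x / (np.abs(H) ** 2 * S_x + S_n + eps)
    return G * Y
```

Using the two-sided option keeps the PSD arrays on the same frequency grid as an nfft-point FFT of the data, so they can be combined bin by bin with Y(\omega) and H(\omega).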

Applications

Image Restoration

Wiener deconvolution is widely applied to restore degraded two-dimensional images by extending the one-dimensional model to the spatial domain, where the observed image y(u,v) is given by the convolution of the original image x(u,v) with the point spread function (PSF) h(u,v), plus additive noise n(u,v):
y(u,v) = h(u,v) \ast x(u,v) + n(u,v).
The restoration is efficiently computed in the frequency domain via the two-dimensional fast Fourier transform (2D FFT), yielding the estimate
\hat{X}(f_u, f_v) = \frac{H^*(f_u, f_v) Y(f_u, f_v)}{|H(f_u, f_v)|^2 + \frac{S_n(f_u, f_v)}{S_x(f_u, f_v)}},
where H(f_u, f_v), Y(f_u, f_v), and S_x, S_n denote the transforms and power spectral densities (PSDs) of the PSF, observed image, signal, and noise, respectively.
Common image degradations addressed by this approach include motion blur, for which the PSF h(u,v) is modeled as a rect function representing a line segment along the motion direction; defocus blur, approximated by a Gaussian PSF h(u,v) = \frac{1}{2\pi\sigma^2} \exp\left( -\frac{u^2 + v^2}{2\sigma^2} \right); and atmospheric turbulence, which produces a seeing-limited PSF often fitted to a Gaussian or Moffat profile in astronomical contexts. For effective implementation, the signal PSD S_x(f_u, f_v) is typically estimated using a power-law model S_x(f_u, f_v) \propto 1/|(f_u, f_v)|^\beta with \beta \approx 2 to capture the scale-invariant statistics of natural images, while the noise PSD S_n(f_u, f_v) is modeled as constant for white Gaussian noise or following a Poisson model in low-light scenarios like photon-counting detectors. In astronomical imaging, Wiener deconvolution removes seeing blur from ground-based observations, revealing fine structures obscured by atmospheric effects. In microscopy, it corrects optical aberrations to enhance resolution in acquired images, outperforming inverse filtering by reducing noise amplification through built-in noise regularization. Quantitative results often show significant improvements in metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) compared to degraded inputs, establishing its practical utility in these domains.
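The two-dimensional restoration can be sketched in a few lines of Python (NumPy only). The full-size centred PSF, the 1/|f|^\beta power-law signal PSD with \beta \approx 2, and the flat white-noise PSD below follow the models mentioned above; they are assumptions of this sketch rather than requirements of the method.

```python
import numpy as np

def wiener_restore_2d(y, psf, noise_var, beta=2.0):
    """Frequency-domain Wiener restoration of a blurred, noisy image y.

    psf: same shape as y, centred on the array centre; noise_var: white-noise variance.
    """
    M, N = y.shape
    # ifftshift places the PSF peak at the array origin so the output is not shifted.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    Y = np.fft.fft2(y)

    # Power-law signal PSD S_x ~ 1 / |f|^beta on the 2D frequency grid.
    fu = np.fft.fftfreq(M)[:, None]
    fv = np.fft.fftfreq(N)[None, :]
    radial = np.hypot(fu, fv)
    radial[0, 0] = radial[0, 1]              # avoid division by zero at the DC bin
    S_x = 1.0 / radial ** beta
    S_n = noise_var                           # white noise: flat PSD

    X_hat = np.conj(H) * Y / (np.abs(H) ** 2 + S_n / S_x)
    return np.real(np.fft.ifft2(X_hat))
```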

Signal and Audio Processing

In communications systems, Wiener deconvolution serves as an effective method for channel equalization, where the channel impulse response h represents multipath distortion and additive white Gaussian noise (AWGN) corrupts the received signal, enabling the recovery of transmitted symbols by inverting the channel effects while minimizing noise amplification. This approach is particularly valuable in multipath environments, as demonstrated by Wiener deconvolution-based equalizers that reduce intersymbol interference in high-data-rate transmissions over dispersive channels. In audio processing, Wiener deconvolution facilitates dereverberation by estimating and inverting the room impulse response (RIR) to recover clean speech from reverberant recordings, preserving perceptual quality in enclosed spaces. For speech enhancement, it separates the desired voice signal from background noise by applying a frequency-domain Wiener filter that suppresses additive noise, improving intelligibility in single-channel scenarios. These techniques are often integrated into hearing aid systems, where multichannel Wiener filters further mitigate reverberation while maintaining spatial cues. Seismic signal processing employs Wiener deconvolution to remove the source wavelet from recorded traces, sharpening subsurface reflections and enhancing temporal resolution in exploration data. This deconvolution compresses the embedded wavelet into a spike-like response, attenuating multiples and reverberations to better delineate geological layers, as seen in multichannel applications for vertical seismic profiles. Variants like sparsity-enhanced Wiener methods further improve wavelet estimation under noisy conditions, yielding higher-fidelity reflectivity series. For real-time applications in streaming audio, Wiener deconvolution is adapted via block-processing techniques, where input segments are filtered in the frequency domain and reconstructed using overlap-add methods to minimize boundary artifacts and ensure seamless playback. This enables low-latency dereverberation and enhancement in devices like hearing aids, leveraging efficient discrete implementations for continuous signal flows. Case studies highlight SNR improvements from Wiener-based processing in noisy recordings; for instance, multichannel Wiener filters in hearing aids achieve significant gains in adverse environments, enhancing speech clarity for users with hearing impairments. In forensic audio analysis, similar Wiener enhancements recover intelligible speech from degraded evidence such as low-quality surveillance tapes to aid identification tasks.
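The overlap-add block processing mentioned above can be sketched as follows in Python (NumPy only). The block length, hop size, Hann window, and a precomputed per-bin filter G (for example, a Wiener deconvolution filter derived from an estimated room impulse response and PSDs) are illustrative assumptions of the sketch.

```python
import numpy as np

def overlap_add_wiener(stream, G, block=1024, hop=512):
    """Block-wise filtering with a fixed one-sided frequency-domain filter G.

    G must have length nfft // 2 + 1 with nfft = 2 * block, matching the
    zero-padded rfft grid used for each block.
    """
    nfft = 2 * block                          # padding leaves room for the filter tail
    window = np.hanning(block)                # 50%-overlapping Hann windows sum to a constant
    out = np.zeros(len(stream) + nfft)
    for start in range(0, len(stream) - block + 1, hop):
        seg = window * stream[start:start + block]
        filtered = np.fft.irfft(G * np.fft.rfft(seg, n=nfft), n=nfft)
        out[start:start + nfft] += filtered   # overlap-add reconstruction
    return out[:len(stream)]
```

Keeping the block length short bounds the latency, which is why this structure suits hearing-aid and other streaming uses described above.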

Limitations and Extensions

Practical Challenges

One significant practical challenge in Wiener deconvolution arises from its sensitivity to estimates of the power spectral densities (PSDs) of the signal S_x(f) and noise S_n(f). Inaccurate PSD estimates can lead to over-smoothing of the restored signal when the noise PSD is overestimated, or to excessive noise amplification when the signal PSD is underestimated, thereby degrading the overall restoration quality. For short-duration signals, PSD estimates are particularly noisy due to limited data, often necessitating additional smoothing techniques to stabilize the filter, though this introduces further approximation errors. The problem is inherently ill-posed when the degradation transfer function H(f) contains zeros or approaches zero at certain frequencies: direct inversion in the Wiener filter W(f) = \frac{H^*(f)}{|H(f)|^2 + \frac{S_n(f)}{S_x(f)}} becomes unreliable in those regions, where the regularization term prevents unbounded gain but the corresponding signal components are strongly attenuated or dominated by noise. This sensitivity persists despite the filter's built-in regularization, making it vulnerable to small perturbations in H(f) estimates from real-world measurements. Computational challenges emerge in practice, particularly with high variance in PSD estimates for finite-length signals, which can cause the filter to produce erratic results without prior smoothing or windowing of the spectra. Additionally, violations of the underlying assumptions (such as signal and noise non-stationarity, colored noise correlated with the signal, or a time-varying impulse response h(t)) compromise the filter's performance, as the method relies on wide-sense stationarity and uncorrelated signal-noise processes to derive the optimal form. In image processing, these issues manifest as artifacts like Gibbs ringing near sharp edges, where high-frequency oscillations are introduced due to the filter's imperfect handling of discontinuities. Similarly, in audio processing, pre-echo artifacts can occur, producing audible precursors to transients as a result of the filter's non-causal response and assumption violations in reverberant environments.

Alternative and Advanced Methods

Blind deconvolution addresses scenarios where the point spread function (PSF) is unknown, extending beyond the standard Wiener approach by estimating both the original signal and the blur kernel simultaneously. One prominent variant employs higher-order statistics to exploit non-Gaussian properties of the signal, enabling recovery without prior knowledge of the PSF, as demonstrated in applications to impacting signals where third-order cumulants help separate the source from noise. Iterative methods like the Richardson-Lucy algorithm have been adapted for blind settings, iteratively updating both the image estimate and the PSF to maximize likelihood under noise models, particularly effective in astronomical imaging where blur and noise are intertwined. Nonlinear extensions of Wiener deconvolution incorporate regularization to mitigate artifacts like ringing while preserving structural details. Total variation (TV) regularization promotes piecewise smoothness by penalizing variations in the restored signal, improving edge preservation in deblurred outputs compared to linear Wiener filtering, and is solved efficiently via algorithms like split Bregman. Sparse priors, drawn from compressed sensing frameworks, assume the signal is sparse in a transform domain (e.g., wavelets), enabling robust reconstruction under subsampling and blur; these priors can outperform Wiener filtering in scenarios with structured sparsity by reducing overfitting to noise. Frequency-domain alternatives adapt to specific noise models where Wiener's Gaussian assumption falls short. The Lucy-Richardson algorithm, an iterative maximum-likelihood estimator for Poisson-distributed noise common in photon-limited imaging, iteratively refines the estimate by back-projecting residuals, yielding sharper restorations than Wiener in low-count regimes like fluorescence microscopy without amplifying Gaussian-like artifacts. Tikhonov regularization serves as a deterministic counterpart to the Wiener filter by adding an L2 penalty to the least-squares objective, effectively damping high-frequency noise amplification; it coincides with the Wiener solution when the penalty weight is set to a known (constant) noise-to-signal power ratio, offering computational simplicity for large-scale problems. Machine learning integrations have advanced Wiener deconvolution for complex, non-stationary cases post-2010. Neural networks trained to approximate the deconvolution mapping, such as the Deep Wiener Deconvolution Network (DWDN), embed classical frequency-domain Wiener operations within convolutional layers, achieving largely artifact-free deblurring on standard benchmark datasets by learning adaptive regularization from data. Deep priors, including unsupervised networks like Deep Image Prior guided by Wiener losses, handle non-stationary blur (e.g., motion-varying PSFs) by leveraging network architectures as implicit priors, outperforming traditional methods in blind settings with varying noise levels. More recent advancements, such as the INFWIDE network (2023) that integrates Wiener deconvolution in both image and feature spaces, and eigenCWD (2025) for handling spatially varying blur, further enhance performance in complex scenarios. Comparisons highlight trade-offs between Wiener and least-squares deconvolution. Least-squares methods, essentially inverse filtering in the frequency domain, are computationally faster and provide unbiased estimates under perfect conditions but amplify noise severely in ill-posed scenarios, leading to poorer signal-to-noise ratios than Wiener's regularized approach.
Wiener is preferred when noise statistics are known or estimable, offering optimal mean-square error minimization, whereas least-squares suits low-noise, high-fidelity applications like seismic processing despite its sensitivity to noise.
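For contrast with the frequency-domain Wiener filter, the Richardson-Lucy iteration mentioned above can be sketched as follows in Python (NumPy only), for the non-blind case with a known PSF and nonnegative, Poisson-like data. The iteration count and the flat initial estimate are common but arbitrary choices assumed here.

```python
import numpy as np

def richardson_lucy(y, h, n_iter=30, eps=1e-12):
    """Iterate x_{k+1} = x_k * (h_flipped (*) (y / (h (*) x_k))), convolutions circular.

    y: observed nonnegative data; h: known PSF (1D here for simplicity).
    """
    H = np.fft.fft(h, n=y.size)
    x = np.full(y.size, float(y.mean()))            # flat, positive initial estimate
    for _ in range(n_iter):
        blurred = np.real(np.fft.ifft(H * np.fft.fft(x)))
        ratio = y / np.maximum(blurred, eps)        # guard against division by zero
        # Correlation with h (flipped-kernel convolution) via the conjugate spectrum.
        correction = np.real(np.fft.ifft(np.conj(H) * np.fft.fft(ratio)))
        x *= correction
    return x
```

Unlike the one-shot Wiener filter, each iteration enforces nonnegativity implicitly through the multiplicative update, which is one reason the method behaves well in low-count, photon-limited regimes.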