
Nonlinear filter

A nonlinear filter is a signal processing device or algorithm whose output is not a linear function of its input, thereby violating the superposition and homogeneity principles that define linear filters. This nonlinearity enables such filters to achieve performance unattainable by linear methods, such as robust removal of impulse noise, preservation of sharp edges in images, and handling of non-Gaussian signal distributions. In digital signal processing, nonlinear filters are often characterized using frameworks like the Volterra series, which expands the output as a sum of multidimensional convolutions involving higher-order input terms, allowing for frequency-dependent distortions and intermodulation effects. Common examples in image and signal processing include order-statistic filters, such as the median filter, which replaces each data point with the median value from a local neighborhood to suppress outliers while maintaining structural features like edges. Other notable types encompass morphological filters for shape-based operations, weighted median filters for adaptive smoothing, and α-trimmed mean filters that exclude extreme values before averaging. These filters are particularly effective in applications like biomedical signal processing for denoising electrocardiograms, satellite image enhancement, and acoustic echo cancellation, where linear filters may blur details or fail against impulsive noise. In the domain of state estimation and control, nonlinear filters address dynamic systems with nonlinear evolution or observation models, extending the linear Kalman filtering paradigm. Key methods include the Extended Kalman Filter (EKF), which linearizes nonlinear functions via first-order Taylor approximations around the current estimate, and the Unscented Kalman Filter (UKF), which uses sigma-point sampling to propagate the mean and covariance more accurately without explicit linearization.
Particle filters, based on sequential Monte Carlo sampling, offer a nonparametric Bayesian approach for highly nonlinear or non-Gaussian scenarios, though they face challenges like particle degeneracy. These estimation techniques underpin real-time applications in navigation, target tracking, robotics, and neuroscience signal analysis. The development of nonlinear filters traces back to the mid-20th century, building on Rudolf E. Kálmán's 1960 linear filtering work, with nonlinear extensions emerging in the 1960s through contributions like the continuous-time formulations by Harold J. Kushner and the Zakai equation. Ongoing challenges include computational complexity, the "curse of dimensionality" in high-dimensional spaces, and ensuring stability in recursive designs, driving research into hybrid linear-nonlinear architectures and optimization under constraints.

Fundamentals

Linear Filters

A linear filter is a system in which the output results from a linear combination of the input signals, satisfying the principles of superposition—where the response to a sum of inputs equals the sum of the individual responses—and homogeneity, where scaling an input by a constant scales the output by the same constant. This linearity ensures that the filter does not introduce new frequency components or distortions beyond those inherent to the input. In the context of linear time-invariant (LTI) systems, which form the core of most practical linear filters, several key properties hold: time-invariance, meaning a time shift in the input produces an identical shift in the output; causality, where the output at any instant depends solely on current and past inputs; and stability, ensuring that bounded inputs yield bounded outputs, preventing amplification of noise or numerical errors. These properties enable efficient analysis and implementation, particularly in real-time applications. A fundamental representation of a discrete-time finite impulse response (FIR) linear filter is given by the convolution sum: y[n] = \sum_{k=0}^{M-1} h[k] \, x[n - k], where y[n] is the output at time n, x[n] is the input, h[k] is the filter's impulse response, and M is the filter length. In the frequency domain, the convolution theorem transforms this operation into a multiplication: the Fourier transform of the output Y(\omega) equals the product of the input's Fourier transform X(\omega) and the filter's transfer function H(\omega), allowing designers to shape frequency responses directly. The conceptual and mathematical foundations of linear filters emerged in early 20th-century signal processing, building on 19th-century Fourier analysis while advancing through works on optimal estimation, such as those by Kolmogorov in the 1930s and Wiener in the 1940s, which formalized least-squares filtering for noisy signals.
Common examples include low-pass filters, which attenuate high frequencies to smooth signals and suppress noise, as in averaging adjacent samples; high-pass filters, which attenuate low frequencies to emphasize rapid changes, useful for differentiation or highlighting abrupt transitions; and band-pass filters, which isolate a specific frequency band for applications like audio equalization or seismic analysis.
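The convolution sum above can be implemented directly in a few lines. The sketch below uses an illustrative 3-tap moving-average impulse response (an assumption of this example, not drawn from the text) and numerically verifies the superposition property that defines linear filters:

```python
def fir_filter(x, h):
    """Direct-form FIR filter: y[n] = sum_{k=0}^{M-1} h[k] * x[n-k]."""
    M = len(h)
    y = []
    for n in range(len(x)):
        # Samples before the start of the signal are treated as zero.
        y.append(sum(h[k] * x[n - k] for k in range(M) if n - k >= 0))
    return y

# Superposition check: filtering a*x1 + b*x2 equals a*y1 + b*y2.
h = [1/3, 1/3, 1/3]                      # 3-tap moving-average (low-pass) filter
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [4.0, 3.0, 2.0, 1.0]
a, b = 2.0, -1.0
combined = fir_filter([a*u + b*v for u, v in zip(x1, x2)], h)
separate = [a*u + b*v for u, v in zip(fir_filter(x1, h), fir_filter(x2, h))]
print(max(abs(p - q) for p, q in zip(combined, separate)))  # ~0: superposition holds
```

Direct evaluation costs O(NM) for N samples; practical implementations of long filters typically use FFT-based convolution instead, exploiting the frequency-domain multiplication described above.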

Definition of Nonlinear Filters

A nonlinear filter in signal processing is a system whose output is not a linear function of its input, thereby violating the principles of superposition and homogeneity that define linear filters. Specifically, if two input signals x_1(t) and x_2(t) produce outputs y_1(t) and y_2(t) respectively, a linear filter presented with the combined input a x_1(t) + b x_2(t) (where a and b are constants) would yield a y_1(t) + b y_2(t), but a nonlinear filter does not satisfy this property. In general, the output can be expressed as y(t) = f(x(t)), where f denotes a nonlinear mapping that may involve operations such as thresholding, rank ordering, or polynomial transformations, without adhering to additive or scalar-multiplicative rules. Nonlinearity in filters often emerges from inherent physical effects, such as saturation or clipping in analog components, where the response flattens beyond a certain input level, or from intentional design choices to achieve targeted effects like preserving sharp transitions in data. For instance, saturation limits the output to a fixed range, introducing compressive behavior that distorts high-amplitude components, while clipping abruptly truncates signals exceeding predefined thresholds. These nonlinear behaviors contrast with linear filters, which assume ideal proportionality and can blur such features indiscriminately. Compared to linear filters, nonlinear filters offer advantages in managing non-Gaussian noise environments, including impulsive noise and outliers, by avoiding the amplification of extremes that linear methods might exacerbate, and they excel at processing non-stationary signals where statistical properties vary over time. However, they present challenges, including higher computational cost due to the absence of efficient convolution-based implementations, lack of a closed-form transfer function for analysis, and risks of instability in adaptive scenarios where feedback loops amplify errors. The recognition of nonlinear filters as a distinct class gained prominence in the mid-20th century, notably with John W. Tukey's introduction of nonsuperposable smoothing methods, including the running median, in 1974, marking a shift toward robust techniques for real-world data irregularities.
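The failure of superposition is easy to demonstrate numerically. The sketch below applies a minimal 3-point moving median (a hypothetical implementation that passes the endpoints through unchanged) to two signals separately and to their sum:

```python
from statistics import median

def median3(x):
    """3-point moving median; the two endpoints are copied unchanged."""
    return [x[0]] + [median(x[i-1:i+2]) for i in range(1, len(x)-1)] + [x[-1]]

x1 = [0.0, 0.0, 10.0, 0.0, 0.0]   # an isolated impulse
x2 = [1.0, 2.0, 3.0, 4.0, 5.0]    # a smooth ramp
summed = median3([u + v for u, v in zip(x1, x2)])
separate = [u + v for u, v in zip(median3(x1), median3(x2))]
print(summed == separate)  # False: superposition fails
```

Filtering the sum does not equal the sum of the filtered signals, so no impulse response or transfer function can describe the median operation; this is precisely the nonsuperposable behavior Tukey highlighted.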

Types of Nonlinear Filters

Order Statistic Filters

Order statistic filters constitute a fundamental class of deterministic nonlinear filters in signal and image processing, where the output at each position is determined by selecting the r-th order statistic from a ranked set of input samples within a local sliding window of size M = 2k + 1. This ranking operation introduces nonlinearity by prioritizing the position in the sorted sequence rather than a weighted sum of values, making these filters robust to outliers without relying on arithmetic means. Prominent examples include the minimum filter, defined as y[n] = \min \{ x[n-k], \dots, x[n+k] \}, which suppresses positive impulses and enhances dark features; the maximum filter, y[n] = \max \{ x[n-k], \dots, x[n+k] \}, which removes negative impulses and brightens regions; and the median filter, where y[n] is the median of the window (corresponding to r = (M+1)/2 for odd M), widely used for balanced smoothing. These filters excel in preserving sharp edges and textures compared to linear alternatives, as the selected order statistic tends to align with the dominant local signal level rather than being distorted by extremes. Additionally, the median filter converges to root signals, for which repeated applications produce no further change, ensuring stable behavior in iterative processing. A key advantage of order statistic filters is their efficacy in removing impulsive noise, such as salt-and-pepper artifacts, by isolating and discarding outlier values through ranking, while maintaining structural details like edges that linear filters might blur. The performance hinges on window size selection: smaller windows (e.g., 3x3) introduce minimal distortion and preserve fine details but may inadequately suppress dense noise, whereas larger windows (e.g., 7x7 or more) improve noise suppression at the cost of potential blurring of subtle features and increased computational demand. Implementation of order statistic filters generally requires sorting the M samples in each window, yielding a time complexity of O(M log M) per output sample using standard algorithms like quicksort.
For the median specifically, optimizations such as quickselect—an average-case O(M) selection algorithm based on partitioning similar to quicksort—reduce complexity by avoiding full sorting, making it suitable for real-time applications. The median filter, a cornerstone of this family, was popularized in the 1970s for image processing following early statistical foundations by John Tukey in 1972 and initial applications by Pratt in 1975.
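A minimal sketch of a quickselect-based sliding-window median filter follows. The truncated windows at the signal boundaries are one common edge-handling choice, assumed here rather than mandated by the definition:

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (0-indexed) of a, average O(len(a))."""
    a = list(a)                      # work on a copy; partitioning is in place
    lo, hi = 0, len(a) - 1
    while True:
        if lo == hi:
            return a[lo]
        pivot = a[random.randint(lo, hi)]
        i, j = lo, hi
        while i <= j:                # Hoare-style partition around the pivot
            while a[i] < pivot: i += 1
            while a[j] > pivot: j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1; j -= 1
        if k <= j:   hi = j          # answer lies in the left part
        elif k >= i: lo = i          # answer lies in the right part
        else:        return a[k]     # k fell in the middle: a[k] == pivot

def median_filter(x, k=1):
    """Sliding-window median with half-width k; edge windows are truncated."""
    out = []
    for n in range(len(x)):
        w = x[max(0, n - k): n + k + 1]
        out.append(quickselect(w, len(w) // 2))
    return out

noisy = [1, 1, 9, 1, 1, 1, 0, 1, 1]   # impulses at positions 2 and 6
print(median_filter(noisy))            # both impulses are removed
```

Selecting only the k-th element avoids the O(M log M) full sort; for streaming implementations, histogram-based or two-heap running medians reduce the per-sample cost further.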

Volterra Filters

Volterra filters represent a class of nonlinear filters derived from the Volterra series, which extends the concept of convolution to higher-order terms to model nonlinear systems with memory. The series originates from Vito Volterra's work on functional expansions in 1887, providing a polynomial approximation for nonlinear operators, and was adapted by Norbert Wiener in 1942 for practical applications in nonlinear circuit analysis and prediction. In discrete time, the output y[n] of a Volterra filter is expressed as a sum of terms up to a desired order P: y[n] = \sum_{p=1}^{P} y_p[n], where the p-th order component is y_p[n] = \sum_{k_1=0}^{M-1} \cdots \sum_{k_p=0}^{M-1} h_p[k_1, \dots, k_p] \prod_{i=1}^{p} x[n - k_i], with h_p[\cdot] denoting the p-th order kernel and M the memory length. This formulation generalizes linear finite impulse response (FIR) filters, which correspond to the first-order term (p=1), to capture interactions among multiple input samples. The first-order kernel h_1 yields the standard linear convolution y_1[n] = \sum_{k=0}^{M-1} h_1[k] x[n-k], equivalent to a linear filter. Higher-order terms, particularly the second-order kernel h_2[k_1, k_2], introduce nonlinearities that model phenomena such as frequency mixing and intermodulation distortion, where the output depends on products of past inputs. For instance, in the second-order case, y_2[n] = \sum_{k_1=0}^{M-1} \sum_{k_2=0}^{M-1} h_2[k_1, k_2] x[n-k_1] x[n-k_2], enabling the filter to approximate systems exhibiting cross-term effects not possible with linear models. These filters became prominent in nonlinear system identification following Wiener's orthogonalization techniques, which facilitated kernel estimation for Gaussian inputs. Identification of Volterra filter kernels involves estimating the multidimensional coefficients h_p, often using adaptive algorithms to handle unknown or time-varying systems.
A common approach is the least mean squares (LMS) method, which updates kernels iteratively to minimize the error between desired and filter outputs, with variants like the Volterra filtered-X LMS addressing specific applications such as nonlinear active noise control. Block-based LMS adaptations reduce complexity by processing inputs in batches, making them suitable for real-time implementation while exploiting symmetries in kernels to lower parameter count. These adaptive techniques, rooted in stochastic gradient descent, converge under mild conditions on input statistics, enabling practical deployment in dynamic environments. In practice, Volterra filters find application in audio distortion modeling, where they approximate nonlinear behaviors in amplifiers and loudspeakers by capturing harmonic and intermodulation products. For nonlinear equalization, they compensate for distortions in communication channels or acoustic systems, such as pre-distortion in power amplifiers or post-equalization in room acoustics, improving signal fidelity. These uses leverage the series' ability to represent fading-memory systems, though the series is typically truncated to low orders (e.g., second or third) for computational feasibility. A primary limitation of Volterra filters is the curse of dimensionality, where the number of coefficients grows rapidly with the filter order P and memory M, scaling as O(M^P / P!) for the P-th order term when kernel symmetry is exploited. This rapid increase hampers identification and implementation for high-order or long-memory systems, often necessitating truncation, sparsity exploitation, or low-rank approximations to mitigate complexity. Despite these challenges, the Volterra series remains foundational for analyzing and approximating a broad class of nonlinear time-invariant systems.
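A direct (non-adaptive) evaluation of a truncated second-order Volterra filter can be sketched as follows. The kernels are illustrative assumptions, chosen so the filter reduces to a memoryless linear-plus-squarer system whose output is easy to check by hand:

```python
def volterra2(x, h1, h2):
    """Second-order Volterra filter:
    y[n] = sum_k h1[k] x[n-k] + sum_{k1,k2} h2[k1][k2] x[n-k1] x[n-k2]."""
    M = len(h1)
    def xs(n, k):
        # Samples before the start of the signal are treated as zero.
        return x[n - k] if n - k >= 0 else 0.0
    y = []
    for n in range(len(x)):
        lin = sum(h1[k] * xs(n, k) for k in range(M))
        quad = sum(h2[k1][k2] * xs(n, k1) * xs(n, k2)
                   for k1 in range(M) for k2 in range(M))
        y.append(lin + quad)
    return y

# Illustrative kernels: identity first-order term plus 0.5*x[n]^2 squarer.
h1 = [1.0, 0.0]
h2 = [[0.5, 0.0], [0.0, 0.0]]
print(volterra2([1.0, 2.0, 3.0], h1, h2))  # [1.5, 4.0, 7.5]
```

The double loop makes the O(M^2) cost of the second-order term explicit; the curse of dimensionality discussed below is simply this loop nesting deepening with each added order.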

Energy Transfer Filters

Energy transfer filters (ETFs) constitute a specialized class of nonlinear filters that leverage harmonic generation and intermodulation effects to redistribute signal energy across different frequency bands, enabling applications such as frequency focusing or selective suppression. These filters exploit the inherent property of nonlinear systems to produce output frequencies that are sums and differences of input frequencies, thereby shifting energy from undesired bands to targeted ones without relying on traditional linear filtering techniques. The mathematical foundation of ETFs is rooted in nonlinear system models, often represented in discrete time using nonlinear autoregressive with exogenous inputs (NARX) structures. The output y(k) is expressed as y(k) = \sum_{n=1}^{N} y_n(k), where linear terms correspond to n=1 and higher-order nonlinear terms for n \geq 2 involve products of delayed inputs, such as y_n(k) = \sum_{l_1,\dots,l_n=1}^{K_{nu}} c_{0n}(l_1,\dots,l_n) \prod_{i=1}^{n} u(k-l_i). In the frequency domain, the output spectrum Y(j\omega) incorporates nonlinear frequency response functions (NFRFs) that facilitate energy transfer through integrals over multiple frequencies; products of sinusoids such as \cos(\omega t) \cdot \cos(2\omega t) generate sum and difference frequencies like 3\omega and \omega. This mechanism ensures that input energy in one band, say around \omega_1, contributes to output components at \omega_1 + \omega_2 or |\omega_1 - \omega_2|, modeled by nonlinear differential equations in continuous-time equivalents. Design methodologies for ETFs typically involve optimizing the order and coefficients of the nonlinear terms to achieve the desired energy transfer. An initial approach uses a three-step procedure: determining the minimum nonlinearity order N_0 and maximum order N, estimating parameters via least squares, and synthesizing the linear filter component.
More advanced techniques employ orthogonal least squares (OLS) algorithms to simultaneously select model structure and parameters, addressing limitations in lag lengths and improving accuracy for complex transfers. These methods focus on minimizing the error in the targeted frequency bands while optimizing nonlinear coefficients for efficient energy redirection. ETFs exhibit unique properties, including non-reciprocal energy flow, where transfer is directional and not symmetric between input and output, making them suitable for adaptive filtering in dynamic systems. This non-reciprocity arises from the asymmetric nature of nonlinear interactions, allowing energy to be concentrated in passbands or dispersed to stopbands for enhanced rejection. Introduced in the early 2000s as part of nonlinear frequency-domain analysis in signal processing, ETFs were first proposed by Billings and Lang to extend frequency-domain design techniques beyond linear paradigms. Subsequent developments refined design procedures for broader applicability. A representative example involves transferring energy from a low-frequency band, such as (2.351, 7.054) rad/s, to a higher band (20.4, 30.2) rad/s, effectively suppressing interference in the original band through nonlinear energy transfer.
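The underlying frequency-generation mechanism can be verified with a toy experiment: passing a pure tone through a static squarer moves its spectral energy from f0 to DC and 2*f0. This demonstrates only the raw nonlinear energy shift, not a designed ETF; the sample rate and tone frequency below are arbitrary choices:

```python
import cmath, math

def dft_mag(x, freq, fs):
    """Normalized magnitude of the DFT of x at one frequency (Hz), sample rate fs."""
    N = len(x)
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * freq * n / fs)
                   for n in range(N))) / N

fs, f0, N = 1000, 50, 1000                 # one second of a 50 Hz tone
x = [math.cos(2 * math.pi * f0 * n / fs) for n in range(N)]
y = [v * v for v in x]                     # static quadratic nonlinearity

# Input energy sits at f0; the squarer moves it to DC and 2*f0.
print(round(dft_mag(x, f0, fs), 3), round(dft_mag(x, 2 * f0, fs), 3))  # 0.5 0.0
print(round(dft_mag(y, f0, fs), 3), round(dft_mag(y, 2 * f0, fs), 3))  # 0.0 0.25
```

A designed ETF shapes the nonlinear coefficients so that such transfers land energy in a prescribed target band rather than at fixed harmonics of the input.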

Stochastic Nonlinear Filtering

Kushner–Stratonovich Equations

The Kushner–Stratonovich equations provide the foundational framework for solving nonlinear filtering problems, describing the evolution of the conditional probability density function of a hidden state process given noisy observations. These equations address the challenge of estimating the state x_t of a nonlinear dynamic system driven by random noise, where the system evolves according to the stochastic differential equation (SDE) dx_t = f(x_t, t) \, dt + g(x_t, t) \, dW_t, with W_t a Wiener process, and partial observations y_t satisfy dy_t = h(x_t, t) \, dt + dV_t, where V_t is an independent Wiener process representing measurement noise, often with covariance \sigma^2 dt in scalar cases or a matrix R \, dt more generally. The goal is to compute the posterior density p(x_t \mid y_{0:t}), which encodes all statistical information about the state given the observation history up to time t. The core of the framework is the Kushner equation, a stochastic partial differential equation (SPDE) governing the time evolution of the conditional density p(x, t): dp(x, t) = L^*[p(x, t)] \, dt + \left[ h(x, t) - \hat{h}(t) \right] p(x, t) \left( dy_t - \hat{h}(t) \, dt \right) / \sigma^2, where L^* is the adjoint generator (Fokker-Planck operator) of the state process, L^* p = -\nabla \cdot (f p) + \frac{1}{2} \nabla \cdot \nabla \cdot (g g^\top p), \hat{h}(t) = \int h(x, t) p(x, t) \, dx is the conditional expectation of the observation function, and the term involving dy_t - \hat{h}(t) \, dt represents the innovation process. In vector notation for multidimensional cases, the correction term generalizes to p(x, t) \left( h(x, t) - \hat{h}(t) \right)^\top R^{-1} \left( dy_t - \hat{h}(t) \, dt \right). This equation combines a prediction step, which propagates the density via the system's Fokker-Planck dynamics, with an update step that incorporates new observations to refine the posterior. An equivalent Stratonovich interpretation recasts the Kushner equation as an SPDE driven by the Fisk-Stratonovich integral, which offers advantages in numerical discretization for simulations by avoiding certain Itô-specific correction terms.
This form is particularly useful when interpreting the filtering equation in terms of pathwise stochastic integrals. The derivation of the Kushner–Stratonovich equations stems from Bayesian updating principles combined with the Fokker-Planck equation: the prediction phase evolves the prior density forward using the forward Kolmogorov equation for the Markov state process, while the filtering (correction) phase applies a likelihood-based Bayes' rule to adjust for observations, leading to the SPDE form through Itô calculus or martingale representations. These equations were developed independently in the contexts of stochastic process theory and optimal filtering: Ruslan Stratonovich introduced foundational ideas on conditional Markov processes in the late 1950s and early 1960s, with a key contribution in 1960, while Harold J. Kushner provided a rigorous derivation using Itô stochastic calculus in 1964. Despite their exactness, the Kushner–Stratonovich equations are infinite-dimensional and computationally intractable for high-dimensional state spaces, as solving the SPDE requires propagating the full density function, which grows exponentially complex without discretization or approximation techniques.
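For intuition, in the one tractable special case, linear dynamics and observations with Gaussian noise, the conditional density stays Gaussian and the infinite-dimensional Kushner equation collapses to finitely many moment equations, the Kalman-Bucy filter (a standard result, sketched here in the notation of the equations above):

```latex
% Linear special case: dx_t = A x_t\,dt + g\,dW_t, \quad dy_t = H x_t\,dt + dV_t.
% Then p(x,t) = \mathcal{N}(\hat{x}_t, P_t), and the Kushner equation reduces to
d\hat{x}_t = A \hat{x}_t \, dt + P_t H^\top R^{-1} \left( dy_t - H \hat{x}_t \, dt \right),
\qquad
\frac{dP_t}{dt} = A P_t + P_t A^\top + g g^\top - P_t H^\top R^{-1} H P_t .
```

The innovation term dy_t - H \hat{x}_t \, dt mirrors the dy_t - \hat{h}(t) \, dt correction in the general SPDE; nonlinear f or h breaks the Gaussian closure, which is why the approximate methods of the next section are needed.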

Approximate Methods

Approximate methods for nonlinear filtering provide computationally tractable alternatives to the intractable exact equations, such as the Kushner–Stratonovich equations, by introducing assumptions or sampling techniques to handle nonlinearity and non-Gaussianity. These approaches are essential in practical applications where real-time estimation is required, balancing accuracy with efficiency. The primary methods include extensions of the linear Kalman filter, sigma-point transformations, and Monte Carlo-based techniques. The Extended Kalman Filter (EKF) is a foundational approximation that linearizes the nonlinear state transition function f and measurement function h using first-order Taylor expansions around the current estimate. This allows propagation of the state mean and covariance in a manner analogous to the linear Kalman filter: the predicted state is \hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1}), and the predicted covariance is P_{k|k-1} = F_{k-1} P_{k-1|k-1} F_{k-1}^T + Q_{k-1}, where F_{k-1} is the Jacobian of f with respect to the state at \hat{x}_{k-1|k-1}, and Q_{k-1} is the process noise covariance. Similarly, the measurement update uses the Jacobian H_k of h. Developed in the 1960s, the EKF assumes local linearity and Gaussian noise, making it suitable for mildly nonlinear systems but prone to divergence in highly nonlinear or non-Gaussian cases due to neglected higher-order terms. The Unscented Kalman Filter (UKF), introduced in the late 1990s, addresses EKF limitations by avoiding explicit linearization through the unscented transformation. It propagates a set of carefully chosen sigma points—deterministic samples representing the mean and covariance of the state distribution—through the nonlinear functions f and h directly. These points are generated using scaling parameters that capture up to third-order statistics for Gaussian distributions, and the transformed points are used to compute the predicted mean and covariance without Jacobians.
This approach provides better handling of nonlinearity compared to the EKF, particularly for systems where Jacobian computation is difficult or inaccurate, while maintaining computational cost similar to the EKF. Particle Filters, also known as Sequential Monte Carlo (SMC) methods, emerged in the 1990s as a nonparametric approach for non-Gaussian and highly nonlinear filtering problems. They represent the posterior p(x_t \mid y_{0:t}) using a set of weighted particles \{x_t^{(i)}, w_t^{(i)}\}_{i=1}^N, where each particle is a state sample drawn from a proposal distribution via importance sampling, and weights are updated based on the likelihood of observations. Resampling steps, such as the systematic or multinomial methods, are employed to mitigate particle degeneracy and maintain diversity. Unlike analytic approximations, particle filters can capture multimodal posteriors but require a large number of particles for accuracy, leading to higher computational cost. Comparisons among these methods highlight their trade-offs: the EKF excels under Gaussian noise and mild nonlinearity due to its simplicity and low computational overhead, assuming local linearity holds; the UKF improves upon this by better approximating nonlinear transformations without derivatives, offering superior performance in moderately nonlinear Gaussian settings; particle filters provide the most flexibility for non-Gaussian multimodal problems but at the expense of increased variance and computation, often requiring thousands of particles for reliable estimates. Convergence and stability of these approximate methods depend on specific conditions in nonlinear settings. For the EKF, stability requires the linearization error to be bounded and observability of the linearized system, with asymptotic convergence under detectability assumptions on the Jacobians, though divergence can occur if nonlinearity causes large biases.
The UKF converges under similar conditions on f and h, with mean-squared error guarantees for finite sigma-point sets, provided the unscented transformation accurately captures the distribution moments. Particle filters achieve convergence to the true posterior as the number of particles N \to \infty, with stability ensured by proper weighting and resampling to avoid weight collapse, though practical implementations may suffer from sample impoverishment in high dimensions. These properties underscore the need for method selection based on system characteristics and validation through simulation.
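A minimal sketch of a bootstrap (sampling-importance-resampling) particle filter follows. The scalar dynamics, quadratic measurement, noise levels, and particle count are assumptions of this example, in the style of a common univariate benchmark model:

```python
import math, random

def bootstrap_pf(ys, n_particles=500, q=1.0, r=1.0, seed=0):
    """Bootstrap particle filter for the illustrative model
    x_t = 0.5*x + 25*x/(1+x^2) + v,  y_t = x^2/20 + w,  v~N(0,q), w~N(0,r)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in ys:
        # Prediction: propagate each particle through the nonlinear dynamics.
        particles = [0.5*x + 25.0*x/(1.0 + x*x) + rng.gauss(0.0, math.sqrt(q))
                     for x in particles]
        # Correction: weight by the Gaussian measurement likelihood.
        weights = [math.exp(-(y - x*x/20.0)**2 / (2.0*r)) for x in particles]
        total = sum(weights)
        if total == 0.0:                 # all likelihoods underflowed; reset flat
            weights = [1.0 / n_particles] * n_particles
        else:
            weights = [w / total for w in weights]
        estimates.append(sum(w*x for w, x in zip(weights, particles)))
        # Multinomial resampling to combat particle degeneracy.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# Simulate the same model and run the filter on its noisy measurements.
sim = random.Random(1)
x_true, ys = 0.1, []
for _ in range(30):
    x_true = 0.5*x_true + 25.0*x_true/(1.0 + x_true*x_true) + sim.gauss(0.0, 1.0)
    ys.append(x_true*x_true/20.0 + sim.gauss(0.0, 1.0))
est = bootstrap_pf(ys)
print(len(est))  # 30
```

Because the quadratic measurement is blind to the sign of the state, the posterior here is bimodal; the particle cloud can represent both modes simultaneously, which is exactly the situation where single-Gaussian approximations like the EKF and UKF break down.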

Applications

Noise Removal in Signals and Images

Nonlinear filters play a crucial role in denoising signals and images by effectively suppressing noise while preserving important structural features such as edges and impulses. In one-dimensional signal processing, these filters address impulsive noise, which manifests as sudden spikes in the data. The median filter, a prominent example, excels at removing such noise by replacing each sample with the median value of neighboring samples, thereby mitigating outliers without distorting the underlying waveform. For instance, in electrocardiogram (ECG) signals, the median filter suppresses impulsive artifacts from muscle activity or motion, maintaining the fidelity of QRS complexes essential for clinical diagnosis. In image processing, nonlinear filters target salt-and-pepper noise, characterized by random white and black pixels, which linear filters like Gaussian smoothing exacerbate by blurring edges and fine details. Order statistic filters, such as the median filter, counteract this by sorting pixel values in a window and selecting the median, effectively isolating and removing impulses while retaining sharp boundaries. This edge-preserving property stems from the filter's ability to adapt to local statistics, unlike linear methods that apply uniform averaging and inadvertently smooth discontinuities. Advanced nonlinear techniques build on these foundations by incorporating adaptability. The adaptive median filter dynamically adjusts window sizes based on local noise density to optimize impulse detection and restoration, reducing over-smoothing in homogeneous regions. Similarly, the bilateral filter combines spatial proximity with intensity similarity in its weighting scheme, enabling edge-preserving smoothing that attenuates noise without halo artifacts. These methods combine Gaussian weighting principles with intensity-based nonlinearities for robust performance across varying noise levels. Quantitative evaluations highlight their efficacy: nonlinear filters like the median and bilateral variants typically yield significant peak signal-to-noise ratio (PSNR) improvements over linear counterparts in impulse-corrupted images.
Edge retention is assessed post-filtering using edge detectors such as the Sobel operator, which computes gradient magnitudes; nonlinear approaches generally preserve edges better than linear filters, as measured by edge preservation indices in spatial restorations. Case studies demonstrate practical impact in specialized domains. In astronomical imaging, nonlinear diffusion filters enhanced faint point-source detection by suppressing noise, improving signal-to-noise ratios for extragalactic surveys without compromising resolution. For audio signals, median-based nonlinear filters remove spikes from speech recordings corrupted by clicks or pops, preserving phonetic transients in one-dimensional waveforms akin to ECG processing. The superiority of nonlinear filters arises because linear filters, constrained to frequency-domain shaping, spread impulsive energy across neighboring samples during low-pass smoothing, leaving residual artifacts. In contrast, nonlinear filters employ order statistics—such as medians from ordered samples—to selectively suppress outliers, avoiding indiscriminate averaging and better handling the non-Gaussian distributions prevalent in real-world signals and images.
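The qualitative argument can be checked numerically. In the toy comparison below (with made-up data), a constant signal is corrupted by sparse impulses: a 3-point moving average smears each impulse into its neighbors, while a 3-point moving median restores the signal exactly, as the PSNR metric discussed above reflects:

```python
import math
from statistics import mean, median

def psnr(clean, noisy, peak=1.0):
    """Peak signal-to-noise ratio in dB; infinite for a perfect restoration."""
    mse = mean((c - n) ** 2 for c, n in zip(clean, noisy))
    return float('inf') if mse == 0 else 10 * math.log10(peak**2 / mse)

def filt(x, stat, k=1):
    """Sliding-window filter applying `stat` (mean or median) with half-width k."""
    return [stat(x[max(0, i - k): i + k + 1]) for i in range(len(x))]

clean = [0.5] * 50
noisy = list(clean)
for i in (5, 17, 29, 41):                  # sparse, isolated impulses
    noisy[i] = 1.0

print(psnr(clean, filt(noisy, mean)))      # finite: the mean smears each impulse
print(psnr(clean, filt(noisy, median)))    # inf: the median removes them exactly
```

The result is extreme because the impulses are isolated; with denser noise or textured signals, the median's advantage shrinks but remains substantial compared to averaging.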

State Estimation and Tracking

Stochastic nonlinear filters play a crucial role in state estimation and tracking by providing robust methods to infer the hidden states of dynamic systems from noisy, nonlinear measurements in real-time applications such as navigation and target tracking. These filters, including extensions of the Kalman filter and particle-based approaches, address the limitations of linear methods when dealing with nonlinear dynamics, such as those encountered in maneuvering vehicles or multi-target scenarios. By approximating the posterior distribution of states, they enable accurate prediction and correction, essential for systems where precise localization and trajectory estimation directly impact safety and performance. In navigation, the Extended Kalman Filter (EKF) and Unscented Kalman Filter (UKF) are widely used for fusing Global Positioning System (GPS) and inertial navigation system (INS) data in nonlinear vehicle models, particularly to handle maneuvers like sharp turns or sudden speed changes. The EKF linearizes the nonlinear models around the current estimate to propagate states and update with measurements, improving positioning accuracy during dynamic conditions. Similarly, the UKF employs sigma-point sampling to better capture the mean and covariance of nonlinear transformations without explicit Jacobians, offering superior performance in GPS/INS integration for land vehicles. These methods enable robust state estimation in environments with high maneuverability, such as automotive navigation. For tracking applications, particle filters excel in multi-target radar scenarios involving occlusions or non-Gaussian clutter, where traditional Kalman variants struggle with multimodal distributions. By representing the state posterior with a weighted set of particles, these filters sequentially sample and resample to track multiple interacting targets, accommodating nonlinear motion models and irregular measurement noise from radar returns. Seminal work has demonstrated their effectiveness in handling data association challenges in cluttered environments, such as urban radar tracking.
In control systems, nonlinear state estimates provide feedback for tasks like robotic path planning and stabilization, where accurate knowledge of position, velocity, and orientation is required to generate corrective actions. For instance, in robotics, these estimates integrate with motion planners to produce collision-free paths under uncertain dynamics, while in aerospace, they support stability augmentation by compensating for nonlinear effects during flight maneuvers. Approximate stochastic methods, such as those based on sequential Monte Carlo sampling, serve as enablers for these feedback loops. Practical examples include autonomous driving systems from the 2010s onward, where nonlinear filters like the UKF and EKF fuse sensor data for vehicle state estimation in complex urban environments, and localization in GPS-denied settings, often using particle filters with terrain-aided measurements. These applications highlight the filters' ability to maintain localization amid sensor limitations and environmental variability. Recent advances as of 2025 include their integration in AI-enhanced navigation systems and diffusion model-based ensemble filtering. Nonlinear filters offer significant benefits over linear Kalman filters in these regimes, with simulations showing substantial reductions in estimation errors—such as up to 70% in some tasks involving strong nonlinearities like maneuvers or multi-target interactions. However, challenges persist in computational load, particularly for particle filters, which require extensive sampling; solutions like parallelization on multi-core architectures mitigate this by distributing particle updates, enabling deployment in time-critical systems.
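To make the predict/update cycle behind these applications concrete, the sketch below implements one EKF iteration for a hypothetical scalar system; the dynamics, measurement model, and noise levels are invented for illustration and are not drawn from any of the cited applications:

```python
import math, random

def ekf_step(x, P, y, q=0.01, r=0.1):
    """One EKF predict/update cycle for the invented scalar model
    x_{k+1} = x_k + 0.1*sin(x_k) + v,   y_k = x_k**2 / 2 + w."""
    # Predict: push the estimate through f and the covariance through its Jacobian.
    x_pred = x + 0.1 * math.sin(x)
    F = 1.0 + 0.1 * math.cos(x)          # df/dx at the previous estimate
    P_pred = F * P * F + q
    # Update: linearize h(x) = x**2/2 around the prediction, apply the Kalman gain.
    H = x_pred                            # dh/dx at the predicted state
    S = H * P_pred * H + r                # innovation covariance
    K = P_pred * H / S
    x_new = x_pred + K * (y - x_pred**2 / 2.0)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Track a simulated trajectory; the filter starts away from the true state.
rng = random.Random(0)
truth, x, P = 1.0, 0.5, 1.0
for _ in range(50):
    truth = truth + 0.1 * math.sin(truth) + rng.gauss(0.0, 0.1)
    y = truth**2 / 2.0 + rng.gauss(0.0, math.sqrt(0.1))
    x, P = ekf_step(x, P, y)
print(round(abs(x - truth), 2))
```

In a navigation system the state would be a vector (position, velocity, attitude), F and H would be Jacobian matrices, and the same two-phase structure would run at sensor rate inside the feedback loop.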

    In this paper we carefully study the analysis involved with Volterra series. We address system-theoretic issues ranging from bounds on the gain and ...
  22. [22]
    The Volterra and Wiener theories of nonlinear systems
    Oct 5, 2022 · The Volterra and Wiener theories of nonlinear systems ; Publication date: 1980 ; Topics: System analysis, Linear operators, Nonlinear theories.
  23. [23]
    Adaptive least mean squares block Volterra filters - Haweel - 2001
    Jul 4, 2001 · Adaptive filtering has found many applications in situations where the underlying signals are changing or unknown. While linear filters are ...
  24. [24]
    [PDF] on the use of volterra series for real-time simulations
    In this paper, we show that the Volterra series formalism can be used to represent weakly nonlinear analog audio devices as input- output systems, from which ...
  25. [25]
    [PDF] Volterra Filter Equalization: A Fixed Point Approach
    Abstract—One important application of Volterra filters is the equalization of nonlinear systems. Under certain conditions, this.
  26. [26]
    Kernel-based methods for Volterra series identification - ScienceDirect
    Their identification is challenging due to the curse of dimensionality: the number of model parameters grows exponentially with the complexity of the input– ...Missing: filters | Show results with:filters
  27. [27]
    [PDF] Sparsity-Aware Estimation of Nonlinear Volterra Kernels
    Simulated tests demonstrate that the novel batch and recursive estimators can cope with the curse of dimensionality present when identifying. Volterra kernels, ...
  28. [28]
  29. [29]
  30. [30]
  31. [31]
    A suppression of an impulsive noise in ECG signal processing
    Second objective of this paper is an application of a family of M-filters to suppression an impulsive noise in biomedical signals (ECG signals). The reference ...Missing: removal seminal
  32. [32]
    Review of noise removal techniques in ECG signals - IET Journals
    Dec 1, 2020 · ECG signal denoising is a major pre-processing step which attenuates the noises and accentuates the typical waves in ECG signals.
  33. [33]
    An Adaptive Median Filter for Image Denoising - IEEE Xplore
    In our method, a threshold and the standard median is used to detect noise and change the original pixel value to a newer that is closer to or the same as the ...
  34. [34]
    Nonlinear filtering based on 3D wavelet transform for MRI denoising
    Feb 21, 2012 · Using the peak signal-to-noise ratio (PSNR) to quantify the amount of noise of the MR images, we have achieved an average PSNR enhancement of ...
  35. [35]
    (PDF) An Edge Preservation Index for Evaluating Nonlinear Spatial ...
    Aug 7, 2025 · The proposed index is robust to noise level and useful for optimizing the performance of non-linear spatial filters.
  36. [36]
    Astronomical point-source detection based on nonlinear image ...
    A new nonlinear diffusion filtering scheme based on a nonlinear diffusion equation with a variable scale parameter is developed to preserve faint point ...
  37. [37]
    Comparison of the performance of linear and nonlinear filters in the ...
    The nonlinear filters always perform better than linear filters when the power spectra of particular noise realizations differ significantly from the combined ...Missing: removal | Show results with:removal
  38. [38]
    A Review of Nonlinear Filtering Algorithms in Integrated Navigation ...
    Oct 19, 2025 · In high-dimensional systems, the computational complexity of CKF significantly increases, making it difficult to ensure real-time performance ...
  39. [39]
    IMM-EKF based Road Vehicle Navigation with Low Cost GPS/INS
    The IMM-EKF solution presented in this paper allows the exploitation of highly dynamic models just when required, avoiding the impoverishment of the solution.Missing: nonlinear | Show results with:nonlinear
  40. [40]
    [PDF] Error State Extended Kalman Filter Multi-Sensor Fusion for ... - arXiv
    Sep 10, 2021 · The proposed solution is to use an error state extended Kalman filter (ES -EKF) in the context of multi-sensor fusion. Its implementation is ...
  41. [41]
    Performance of GPS and IMU sensor fusion using unscented ...
    Using the Unscented Kalman filter (UKF), sensor fusion was carried out based on the state equation defined by the dynamic and kinematic mathematical model of ...
  42. [42]
    Constrained unscented Kalman filter based fusion of GPS/INS/digital ...
    In this paper, a constrained unscented Kalman filter (CUKF) algorithm is proposed to fuse differential global position system (DGPS), inertial navigation system ...
  43. [43]
    A Survey of Recent Advances in Particle Filters and Remaining ...
    Abstract. We review some advances of the particle filtering (PF) algorithm that have been achieved in the last decade in the context of target tracking, ...Missing: seminal | Show results with:seminal
  44. [44]
    (PDF) A particle filter to track multiple objects - ResearchGate
    PDF | We address the problem of tracking multiple objects encountered in many situations in signal or image processing. We consider stochastic dynamic.
  45. [45]
    [PDF] Tracking Multiple Objects with Particle Filtering - l'IRISA
    We address the problem of multitarget tracking (MTT) encountered in many situations in signal or image processing. We consider stochastic dynamic systems ...
  46. [46]
    [PDF] Modeling, Control, State Estimation and Path Planning Methods for ...
    A comprehensive de- scription of a set of methods that enable automated flight control, state estimation in GPS–denied environments, as well as path planning ...
  47. [47]
    Model Predictive Control in Aerospace Systems: Current State and ...
    The MPC paradigm is now made more tangible in a stepwise manner within the setting of state feedback stabilization of constrained nonlinear, discrete-time, and ...
  48. [48]
    Vehicle State Estimation Based on Sparse Identification of Nonlinear ...
    To address this issue, we propose a novel vehicle state estimation approach that integrates Sparse Identification of. Nonlinear Dynamics (SINDy) with an ...
  49. [49]
    Vehicle State Estimation and Prediction for Autonomous Driving in a ...
    This paper presents methods for vehicle state estimation and prediction for autonomous driving. A round intersection is chosen for application of the methods.
  50. [50]
    Sensor Modeling for Underwater Localization Using a Particle Filter
    This paper presents a framework for processing, modeling, and fusing underwater sensor signals to provide a reliable perception for underwater localization ...
  51. [51]
  52. [52]
    An Improved Unscented Kalman Filter Applied to Positioning and ...
    The RHAUKF is an improved adaptive Unscented Kalman Filter for AUV navigation, reducing errors and improving stability compared to traditional UKF.
  53. [53]
    Parallelisation of the particle filtering technique and application to ...
    As a counterpart, this technique suffers from an heavy computation cost and cannot always satisfy the real time constraints of applications. A data parallel ...
  54. [54]
    On the performance of parallelisation schemes for particle filtering
    May 25, 2018 · The advantage of parallel computation is the drastic reduction of the time needed to run the PF. Let the running time for a PF with K particles ...