
Matching pursuit

Matching pursuit is an iterative greedy algorithm for signal processing, introduced by Stéphane Mallat and Zhifeng Zhang in 1993, that represents a signal as a sparse linear combination of atoms selected from a redundant dictionary of waveforms, such as time-frequency atoms, by repeatedly choosing the atom most correlated with the current residual and subtracting its contribution. The algorithm begins with the original signal as the initial residual and proceeds in greedy steps, at each iteration computing inner products between the residual and all atoms to identify the one maximizing the absolute correlation, then updating the residual by removing that atom's contribution, continuing until a stopping criterion like a maximum number of atoms or energy threshold is met. In its basic form, matching pursuit produces a non-orthogonal decomposition, which can lead to slower convergence, but it excels at providing adaptive, localized representations for non-stationary signals by leveraging overcomplete dictionaries that capture diverse signal structures like transients or oscillations.

A key extension, orthogonal matching pursuit (OMP), developed concurrently by Y. C. Pati, Ramin Rezaiifar, and P. S. Krishnaprasad in 1993, enforces orthogonality among selected atoms through least-squares projection, ensuring faster convergence and exact recovery under certain conditions like restricted isometry properties in sparse settings. The algorithm's properties include greedy sub-optimal selection, which guarantees approximation error bounds related to the dictionary's coherence, and computational efficiency for large dictionaries when implemented with fast transforms such as Gabor or wavelet frames. Matching pursuit and its variants promote sparsity, making them robust to noise and suitable for overcomplete representations where traditional orthogonal bases fall short.

Applications span signal processing, including time-frequency analysis of audio and seismic data, sparse coding in machine learning, and compressed sensing for efficient data acquisition and recovery. In neuroscience, it decomposes brain signals into time-frequency atoms to compute power spectra with high temporal resolution. Further uses include image denoising, radar signal processing, and biomedical applications such as electrocardiogram analysis, where OMP variants recover sparse signals from incomplete measurements.

Fundamentals

Definition and Motivation

Matching pursuit (MP) is an iterative greedy technique designed to decompose a signal f into a sparse linear combination of atoms selected from an overcomplete dictionary D. In this framework, the signal f(t) is represented as f = \sum_{n=1}^{m} a_n g_{\gamma_n} + R^m f, where g_{\gamma}(t) are the dictionary atoms parameterized by \gamma, a_n are the expansion coefficients, and R^m f is the residual after m iterations, the aim being to minimize the number of non-zero coefficients for sparsity. The dictionary D = \{ g_{\gamma}(t) \}_{\gamma \in \Gamma} consists of a redundant family of unit-norm waveforms in a Hilbert space, often time-frequency atoms like Gabor functions generated by scaling, translation, and modulation of a base function.

The primary motivation for matching pursuit arises from the challenges of sparse signal approximation using overcomplete dictionaries, which provide more flexible representations than orthogonal bases such as Fourier or wavelet transforms by better capturing localized structures in diverse signals. Traditional orthogonal bases often fail to efficiently approximate signals with varying time-frequency content, leading to dense expansions that obscure underlying patterns. Overcomplete dictionaries address this by offering a larger set of potential atoms, but selecting the optimal sparse subset becomes computationally intractable due to the combinatorial explosion in searching over all possible combinations. Specifically, matching pursuit targets the NP-hard problem of minimizing \|f - D x\|_2^2 subject to \|x\|_0 \leq N, where x is the coefficient vector and \| \cdot \|_0 counts non-zero entries; exact solutions require evaluating an exponential number of subsets, rendering exhaustive search infeasible for large dictionaries. By greedily selecting the atom most correlated with the current residual at each step, matching pursuit provides an efficient suboptimal approximation that achieves sparsity while adapting to signal structures, making it practical for applications requiring adaptive representations.

Historical Development

The origins of matching pursuit trace back to earlier techniques in statistical modeling and adaptive signal representation during the 1970s and 1980s. Projection pursuit methods were first introduced by Friedman and Tukey in 1974 as a tool for exploratory analysis of multivariate data, seeking low-dimensional projections that reveal interesting structures in high-dimensional spaces. This concept evolved into projection pursuit regression, formalized by Friedman and Stuetzle in 1981, which extended additive models by iteratively selecting one-dimensional projections to approximate nonlinear relationships while minimizing residual error. These statistical approaches emphasized greedy selection of projections to capture data variability, laying conceptual groundwork for later algorithms, though they were primarily applied to regression tasks rather than signal decomposition. In signal processing, pre-1993 developments were more limited but included incremental greedy strategies for parameter estimation. A notable precursor was the Incremental Multi-Parameter Algorithm, proposed by John Mather in 1990 at the Asilomar Conference on Signals, Systems, and Computers, which iteratively refines multi-parameter models by adding parameters that best reduce prediction error in a sequential manner. Despite these innovations, prior works lacked a unified framework for overcomplete dictionaries in time-varying signals, highlighting a gap that matching pursuit would address by adapting projection pursuit ideas to redundant waveform bases.

Matching pursuit was formally introduced in 1993 by Stéphane Mallat and Zhifeng Zhang in their seminal paper "Matching Pursuits with Time-Frequency Dictionaries," published in IEEE Transactions on Signal Processing. The algorithm provided a greedy iterative procedure to decompose signals into expansions of dictionary atoms, selected via inner product maximization, specifically tailored for time-frequency analysis using dictionaries like Gabor functions and wavelets. Following its introduction, matching pursuit gained early adoption in the mid-1990s for time-frequency analysis and adaptive representations with redundant dictionaries, enabling sparse decompositions of non-stationary signals in applications such as seismic processing and neural signal analysis. This period saw extensions to overcomplete wavelet frames, building directly on the original framework to improve localization in joint time-frequency domains.

A key milestone in the 2000s was the integration of matching pursuit variants into compressed sensing, where the technique supported sparse recovery from underdetermined measurements. Notably, Tropp and Gilbert's 2007 paper demonstrated that orthogonal matching pursuit could reliably reconstruct k-sparse signals from m = O(k log n) random measurements, bridging the algorithm to high-impact recovery guarantees in compressive sampling. Overall, matching pursuit's evolution reflects a synthesis of statistical projection pursuit with signal processing needs, evolving from limited precursors to a foundational tool in sparse modeling.

The Algorithm

Step-by-Step Procedure

The matching pursuit algorithm is an iterative, greedy procedure that decomposes a signal f into a linear combination of dictionary elements g_\gamma selected from a redundant dictionary D, where each g_\gamma has unit norm (\|g_\gamma\|_2 = 1). The process begins with initialization and proceeds through successive iterations that maximize the correlation between the current residual and dictionary atoms, followed by updates to the approximation and residual without orthogonalization against prior selections, ensuring a non-orthogonal expansion.
Initialization
Set the initial residual R_0 = f and the initial approximation \hat{a}_0 = 0. This establishes the starting point where the full signal serves as the unexplained component to be decomposed.
Iteration
For each n = 1 to N:
  • Select the index \gamma_n = \arg\max_\gamma |\langle R_{n-1}, g_\gamma \rangle|, which identifies the atom most correlated with the current residual via the absolute value of the inner product (equivalent to the normalized inner product since atoms have unit norm).
  • Compute the coefficient a_n = \langle R_{n-1}, g_{\gamma_n} \rangle.
  • Update the approximation \hat{a}_n = \hat{a}_{n-1} + a_n g_{\gamma_n}.
  • Update the residual R_n = R_{n-1} - a_n g_{\gamma_n}.
This selection prioritizes the dictionary element that best matches the residual at each step, building the approximation additively while subtracting the matched component from the residual. The following pseudocode outlines the core algorithm, highlighting its iterative and non-orthogonal nature:
Input: Signal f, dictionary D (with unit norm atoms), max iterations N, tolerance ε
R ← f  // Initial residual
â ← 0  // Initial approximation
n ← 0
While n < N and ||R||_2 ≥ ε:
    γ_n ← argmax_γ |⟨R, g_γ⟩|  // Greedy selection
    a_n ← ⟨R, g_{γ_n}⟩  // Coefficient
    â ← â + a_n g_{γ_n}  // Update approximation
    R ← R - a_n g_{γ_n}  // Update residual
    n ← n + 1
Output: Approximation â, coefficients {a_n}, indices {γ_n}
Stopping Criteria
The algorithm terminates when the residual norm \|R_n\|_2 < \epsilon (a predefined small threshold indicating sufficient decomposition) or when the iteration count reaches the maximum N.
Computational Considerations
Each iteration requires an exhaustive search over the entire dictionary to find the maximizing atom, leading to a computational complexity of O(N M L), where N is the number of iterations, M is the dictionary size, and L is the signal length; this high cost arises from the repeated inner product evaluations and motivates approximations or structured dictionaries in practice.
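Because each step reduces to computing all inner products with the residual at once, the exhaustive search in the pseudocode above can be expressed as a single matrix-vector product. The following is a minimal NumPy sketch rather than a reference implementation; the function name matching_pursuit and the parameters max_iter and tol are illustrative choices:

```python
import numpy as np

def matching_pursuit(f, D, max_iter=100, tol=1e-6):
    """Greedy matching pursuit over a dictionary D whose columns are unit-norm atoms."""
    residual = f.astype(float).copy()        # R <- f
    approx = np.zeros_like(residual)         # a-hat <- 0
    coeffs, indices = [], []
    for _ in range(max_iter):
        if np.linalg.norm(residual) < tol:   # stopping criterion ||R||_2 < eps
            break
        correlations = D.T @ residual        # all inner products <R, g_gamma> at once
        gamma = int(np.argmax(np.abs(correlations)))  # greedy selection
        a = correlations[gamma]              # coefficient a_n = <R, g_gamma_n>
        approx += a * D[:, gamma]            # update approximation
        residual -= a * D[:, gamma]          # update residual
        coeffs.append(a)
        indices.append(gamma)
    return approx, np.array(coeffs), np.array(indices)

# Usage sketch: a 2-sparse signal over a random overcomplete dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)               # normalize atoms to unit norm
f = 3.0 * D[:, 10] - 2.0 * D[:, 200]
approx, coeffs, indices = matching_pursuit(f, D, max_iter=20)
print(indices[:5], np.linalg.norm(f - approx))
```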

Mathematical Formulation

Matching pursuit (MP) provides a greedy framework for sparsely approximating a signal f from a redundant dictionary of atoms. The core goal is to decompose f \in \mathcal{H}, where \mathcal{H} is a Hilbert space (e.g., L^2(\mathbb{R}) for continuous-time signals), as an N-term approximation \hat{f}_N = \sum_{n=1}^N a_n g_{\gamma_n}, with coefficients a_n \in \mathbb{R} and atoms g_{\gamma_n} selected from the dictionary \mathcal{D} = \{g_\gamma : \gamma \in \Lambda\}. This expansion aims to capture the signal's structure efficiently using a minimal number of dictionary elements, leveraging the overcompleteness of \mathcal{D}, which contains more atoms than the dimension of \mathcal{H} (or the signal length in discrete cases), allowing for redundancy such as in Gabor or wavelet frames.

Formally, MP approximates the solution to the sparse optimization problem \min_{\mathbf{x}} \|\mathbf{f} - D \mathbf{x}\|_2^2 subject to \|\mathbf{x}\|_0 \leq N, where \mathbf{f} is the vectorized signal, D is the dictionary matrix with columns as atom vectors, and \mathbf{x} has at most N non-zero entries corresponding to the selected atoms and coefficients. In the continuous setting, the inner product is defined as \langle f, g \rangle = \int_{-\infty}^{\infty} f(t) g(t) \, dt; for discrete signals of length M, it becomes \langle \mathbf{f}, \mathbf{g} \rangle = \sum_{m=1}^M f_m g_m. Atoms are assumed normalized, i.e., \|g_\gamma\| = 1 for all \gamma, ensuring the inner product magnitude directly measures correlation strength.

The algorithm proceeds iteratively, starting with the initial residual R_0 = f. At each step n = 1, \dots, N, the index \gamma_n is selected to maximize the absolute inner product with the current residual: \gamma_n = \arg\max_{\gamma \in \Lambda} |\langle R_{n-1}, g_\gamma \rangle|. The coefficient is then computed as a_n = \langle R_{n-1}, g_{\gamma_n} \rangle, and the residual is updated via orthogonal projection onto the selected atom: R_n = R_{n-1} - a_n g_{\gamma_n}. This process yields the approximation \hat{f}_N = f - R_N = \sum_{n=1}^N a_n g_{\gamma_n}.
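To make the geometry of the update explicit, note from the definitions above that the new residual is orthogonal to the atom just selected: \langle R_n, g_{\gamma_n} \rangle = \langle R_{n-1}, g_{\gamma_n} \rangle - a_n \|g_{\gamma_n}\|^2 = 0, since a_n = \langle R_{n-1}, g_{\gamma_n} \rangle and \|g_{\gamma_n}\| = 1. Writing R_{n-1} = R_n + a_n g_{\gamma_n} as a sum of two orthogonal terms then gives the Pythagorean relation \|R_{n-1}\|^2 = \|R_n\|^2 + |a_n|^2, which underlies the convergence and energy-conservation results discussed below.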

Theoretical Properties

Convergence and Error Analysis

In matching pursuit, the approximation error decreases monotonically with each iteration. The \ell_2-norm of the residual R_n satisfies \|R_n\|_2 \leq \|R_{n-1}\|_2 for all n, with equality holding only if the coefficient a_n = 0. This property arises because the update subtracts the energy of the projection onto the selected dictionary atom g_{\gamma_n}: for unit-norm atoms, \|R_n\|_2^2 = \|R_{n-1}\|_2^2 - |\langle R_{n-1}, g_{\gamma_n} \rangle|^2. The algorithm converges to the exact signal representation \hat{f} = f as the number of iterations N \to \infty if f lies in the closed linear span of the dictionary D; otherwise, it converges to the best \ell_2-approximation P_D f in that span. This convergence is ensured when D is complete (dense) in the underlying Hilbert space, as the residual norms form a decreasing sequence bounded below by zero.

For finite N, error bounds can be derived using the dictionary coherence \mu = \max_{j \neq k} |\langle g_j, g_k \rangle|, assuming unit-norm atoms. The derivation relies on inductively bounding the deviation from the optimal projection using the maximum correlation \mu, which controls the interference between selected atoms. Despite these guarantees, matching pursuit is suboptimal for highly coherent dictionaries (\mu close to 1), where the greedy selection may repeatedly choose correlated atoms, leading to slow progress and poor intermediate approximations. In the worst case, the residual decays very slowly, and near-optimal recovery can demand a number of iterations N on the order of the dictionary size, unlike more robust methods. Improved bounds emerged in the 2000s from the compressed sensing literature, leveraging coherence for exact recovery conditions; these results emphasize the role of the cumulative coherence \mu_1(s) \leq s \mu in ensuring stability.

Key Invariants

In matching pursuit, a key invariant analogous to Parseval's identity preserves the total energy of the signal across iterations, expressed as \|f\|_2^2 = \|R_N\|_2^2 + \sum_{n=1}^N |a_n|^2, where f is the original signal, R_N is the residual after N iterations, and a_n are the coefficients of the selected atoms. This relation holds for any N because each residual R_n is orthogonal to the atom g_{\gamma_n} selected at step n, ensuring that the subtracted projection does not alter the energy accounting of prior steps. The proof proceeds by induction: for the base case N=1, the orthogonality of R_1 to g_{\gamma_1} directly yields \|f\|_2^2 = |a_1|^2 + \|R_1\|_2^2; assuming the identity holds for N=k, the orthogonality at step k+1 extends the sum without overlap.

Unlike orthogonal bases, the selected atoms g_{\gamma_1}, \dots, g_{\gamma_N} in matching pursuit are generally not mutually orthogonal, which introduces potential redundancy in the approximation, as inner products between distinct atoms may be nonzero. This non-orthogonality arises from the greedy selection process, which prioritizes correlation with the current residual over global orthogonality, allowing the algorithm to adapt to signal structures but at the cost of possible linear dependencies among the atoms. The energy invariant implies that the signal's total energy is strictly partitioned between the squared magnitudes of the coefficients (capturing the approximation) and the residual's energy, providing a foundation for sparsity assessment in the decomposition. Practically, this partitioning supports stopping criteria, such as halting iterations when the residual energy falls below a threshold, ensuring a sparse representation that concentrates most energy in few coefficients without over-decomposing the signal.

For the invariant to hold in this direct form, the dictionary must consist of normalized atoms satisfying \|g_\gamma\|_2 = 1 for all \gamma, as unnormalized atoms would scale the coefficients and disrupt the energy equality. This normalization requirement underscores the algorithm's reliance on unit-norm elements to maintain the clean energy split, facilitating convergence guarantees tied to the residual's decay.
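The identity is easy to verify numerically. A minimal sketch, assuming a random Gaussian dictionary with columns normalized to unit norm (all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((32, 128))
D /= np.linalg.norm(D, axis=0)     # unit-norm atoms: required for the identity
f = rng.standard_normal(32)

residual, coeffs = f.copy(), []
for _ in range(10):                # run N = 10 matching pursuit steps
    c = D.T @ residual
    g = int(np.argmax(np.abs(c)))
    coeffs.append(c[g])
    residual -= c[g] * D[:, g]

lhs = np.linalg.norm(f) ** 2
rhs = np.linalg.norm(residual) ** 2 + np.sum(np.abs(coeffs) ** 2)
print(np.isclose(lhs, rhs))        # True: ||f||^2 = ||R_N||^2 + sum_n |a_n|^2
```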

Applications

Signal and Image Processing

Matching pursuit has been widely applied in audio and speech coding, particularly using Gabor dictionaries to extract time-frequency atoms that capture non-stationary signal structures. In this approach, the algorithm iteratively selects atoms that best match local signal features, enabling sparse representations suitable for compression at low bit rates. In the 1990s, this method outperformed traditional transform-based coding by providing better perceptual quality and energy compaction for harmonic and transient components in speech signals.

In image compression, matching pursuit facilitates adaptive atom selection from overcomplete dictionaries, such as wavelet or Gabor frames, to generate sparse approximations that reduce the blocking artifacts common in fixed-basis schemes like the discrete cosine transform (DCT). By iteratively pursuing the dictionary element with the highest correlation to the residual image, the technique achieves efficient encoding of textured regions while preserving edges and details. Seminal work demonstrated that this adaptive strategy yields bit-rate savings and improved peak signal-to-noise ratio (PSNR) compared to DCT-based coding, particularly for natural images at moderate compression levels.

For video and shape analysis, matching pursuit supports tracking non-stationary features across frames by decomposing motion and geometric structures into dictionary atoms, aiding 3D object coding. The iterative pursuit allows for selective updates to model evolving shapes, enhancing compression efficiency in dynamic scenes. Applications include video coding, where MP-based decompositions reduce redundancy in temporal sequences and outperform block-based methods at low bit rates.

In structural health monitoring, matching pursuit detects anomalies in vibration signals using wavelet dictionaries to isolate transient events like impacts or cracks. The algorithm decomposes sensor data into sparse wavelet atoms, highlighting deviations from baseline structures for fault diagnosis. This has proven effective for non-stationary vibration analysis, enabling early detection with minimal computational overhead.

Emerging Fields

In machine learning, matching pursuit has found application in feature extraction through sparse coding, particularly in dictionary learning for neural networks such as autoencoders. Post-2010 developments have integrated matching pursuit into unsupervised learning pipelines to identify sparse, interpretable representations from high-dimensional data, enhancing model efficiency and generalization. For instance, in sparse autoencoders, matching pursuit algorithms facilitate the construction of overcomplete dictionaries by iteratively selecting basis elements that best approximate input features, leading to improved performance in tasks like image recognition where sparsity reduces overfitting. This approach has been re-contextualized to extract latent features in large-scale models, demonstrating superior reconstruction accuracy compared to traditional thresholding methods in dictionary learning scenarios.

In neuroscience, matching pursuit enables modeling of neural spikes using overcomplete bases, aiding analysis of population coding in cortical networks. By decomposing spike trains into sparse combinations of basis functions, such as Gabor-like wavelets, the algorithm reveals temporal and spatial patterns in neural activity, supporting hypotheses on efficient coding strategies in the visual cortex. Empirical studies have applied matching pursuit to gamma rhythm oscillations (30–80 Hz), showing longer durations in spiking events that correlate with attentional processing, thus providing insights into how populations of neurons encode sensory information with minimal redundancy. This method outperforms density-based estimators in resolving fine-grained spike timings from noisy electrophysiological data.

Recent advancements in quantum dynamics leverage matching pursuit for simulating time-dependent Schrödinger equations via adaptive basis pursuit on coherent-state dictionaries. In the 2020s, this has enabled efficient propagation of multidimensional wave functions by greedily selecting non-orthogonal basis elements that capture evolving quantum states, reducing computational complexity for systems like molecular vibrations. For example, matching-pursuit methods integrated with tensor-train formats have achieved high-fidelity simulations of nonadiabatic processes, with error bounds scaling favorably for up to 10-dimensional problems, outperforming fixed-basis approaches in accuracy and speed. These techniques address challenges in quantum chemistry by adaptively refining bases to match dynamical correlations.

Matching pursuit serves as a decoder in compressed sensing recovery for undersampled signals, with notable empirical success in MRI reconstruction. By iteratively identifying sparse coefficients from incomplete k-space measurements, it enables high-quality image recovery at sampling rates as low as 20–30% of the Nyquist rate, minimizing artifacts like aliasing in clinical scans. This application highlights matching pursuit's robustness in biomedical contexts where data acquisition is constrained.

To address scalability gaps, post-2021 implementations have introduced linear-time approximations of matching pursuit for large-scale data processing. Parallel variants distribute the dictionary search across multiple processors, achieving near-linear runtime scaling for datasets with millions of atoms, as demonstrated in simulations reducing iteration times by factors of 10–50 on GPU clusters.
These advances enable real-time sparse recovery in streaming applications, such as big data analytics, without sacrificing approximation guarantees. Recent GPU-accelerated variants, such as those using continuous dictionaries, further improve efficiency for high-dimensional tasks like medical imaging reconstruction as of 2024.

Extensions and Variants

Orthogonal Matching Pursuit

Orthogonal matching pursuit (OMP) is a greedy algorithm that extends standard matching pursuit by incorporating an orthogonalization step at each iteration, ensuring that the residual is orthogonal to all previously selected dictionary atoms. This modification leads to a more accurate sparse approximation by projecting the signal onto the subspace spanned by the selected atoms via least-squares optimization. Introduced in the seminal work by Pati, Rezaiifar, and Krishnaprasad, OMP iteratively builds a support set while minimizing the reconstruction error over the accumulated atoms.

The algorithm begins with the initialization: set the initial residual R_0 = y, where y is the input signal, and the support set \Lambda_0 = \emptyset. At iteration n = 1, 2, \dots, K, where K is the desired sparsity level, the next atom index is selected as \gamma_n = \arg\max_{\gamma \notin \Lambda_{n-1}} \left| \langle R_{n-1}, g_\gamma \rangle \right|, with g_\gamma denoting the dictionary atoms and \Lambda_{n-1} = \{ \gamma_1, \dots, \gamma_{n-1} \}. The support is then updated to \Lambda_n = \Lambda_{n-1} \cup \{ \gamma_n \}, and the matrix G_n = [g_{\gamma_1}, \dots, g_{\gamma_n}] spans the selected subspace. The coefficients a are computed by solving the least-squares problem a = \arg\min_{b} \| y - G_n b \|_2^2, which yields the orthogonal projection of y onto the span of G_n. The residual is updated as R_n = y - G_n a, guaranteeing orthogonality: \langle R_n, g_{\gamma_i} \rangle = 0 for all i = 1, \dots, n. The process terminates after K iterations, producing the sparse approximation \hat{x} with support \Lambda_K and coefficients from the final least-squares solve.

Compared to standard matching pursuit, OMP achieves a smaller approximation error for the same number of selected atoms, as the orthogonalization prevents error accumulation from non-orthogonal projections and ensures the residual lies in the orthogonal complement of the selected subspace. This results in a decreasing error norm at each step, with \| R_n \|_2 \leq \| R_{n-1} \|_2, providing better convergence properties for sparse signal representation. In the context of compressed sensing, Tropp and Gilbert established that OMP enables exact recovery of k-sparse signals from m = O(k \log n) random measurements with high probability for sub-Gaussian measurement matrices, with the number of measurements scaling as m \geq C k \log n for some constant C \approx 4, and success probability exceeding 1 - e^{-m/2}. Later analyses using the restricted isometry property (RIP) provide conditions for uniform recovery, such as \delta_{k+1} < 1/(\sqrt{k} + 1).

A primary drawback of OMP is its higher computational complexity relative to matching pursuit: while matching pursuit requires O(M N) operations per iteration for inner products with M dictionary atoms and signal dimension N, OMP demands an additional O(k^2 N) for the least-squares solve over the growing support of size k, leading to a total complexity of O(K^2 N + K M N) assuming naive implementations without QR updates. This increased cost makes OMP less suitable for very large-scale problems without optimized solvers, though it remains efficient for moderate sparsity levels.
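The least-squares re-fit over the accumulated support is the only structural change relative to standard matching pursuit, as the following NumPy sketch illustrates. The function name omp and the explicit reselection guard are illustrative assumptions; a production implementation would update a QR factorization incrementally instead of calling lstsq from scratch each iteration:

```python
import numpy as np

def omp(y, D, K):
    """Orthogonal matching pursuit: K greedy selections, each followed by a
    least-squares re-fit over all atoms selected so far.
    y : (L,) signal; D : (L, M) dictionary with unit-norm columns."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(K):
        correlations = D.T @ residual
        correlations[support] = 0.0        # guard: never reselect an atom
        support.append(int(np.argmax(np.abs(correlations))))
        G = D[:, support]                  # atoms selected so far
        coef, *_ = np.linalg.lstsq(G, y, rcond=None)  # orthogonal projection
        residual = y - G @ coef            # residual now orthogonal to span(G)
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x, support

# Sanity check: exact recovery of a 3-sparse signal.
rng = np.random.default_rng(2)
D = rng.standard_normal((50, 200))
D /= np.linalg.norm(D, axis=0)
y = D[:, [3, 77, 150]] @ np.array([1.5, -2.0, 0.7])
x, support = omp(y, D, K=3)
print(sorted(support), np.linalg.norm(y - D @ x))
```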

Other Advanced Methods

Multichannel matching pursuit extends the standard algorithm to handle vector-valued signals, such as those from multichannel recordings in electroencephalography (EEG). Unlike the scalar case, atom selection here relies on joint inner products computed across all channels simultaneously, enabling the capture of correlations between channels during decomposition. This variant facilitates applications like EEG inverse problems, where a computationally efficient, suboptimal implementation decomposes signals to estimate underlying neural sources.

In compressed sensing, several advanced variants of matching pursuit enhance recovery performance for sparse signals from underdetermined measurements. Compressive sampling matching pursuit (CoSaMP), introduced by Needell and Tropp in 2008, operates through iterative steps of identifying potential support atoms via maximum correlations, merging with the previous support set, performing least-squares estimation on the combined support, and pruning to the desired sparsity level; these steps ensure faster convergence and exact recovery for signals satisfying the restricted isometry property. Generalized orthogonal matching pursuit (gOMP), developed in 2012, modifies orthogonal matching pursuit by selecting the multiple (N > 1) atoms with the highest absolute correlations at each iteration, rather than just one, which accelerates the algorithm while preserving strong recovery guarantees under incoherence conditions. Regularized orthogonal matching pursuit (ROMP) employs a threshold-based selection after identifying candidate atoms: it retains only those whose inner products are within a constant factor of the maximum, mitigating sensitivity to high coherence and improving robustness in noisy or correlated settings.

Recent advances in the 2020s have addressed the scalability of matching pursuit for massive dictionaries by introducing linear-time approximations that leverage fast search structures, such as hierarchical indexing or approximate nearest neighbor algorithms, to expedite the maximization of inner products without exhaustive computation. For example, acceleration methods tailored to multi-Gabor dictionaries reduce the cost of atom selection, enabling sparse approximations in audio processing tasks. More recent developments include the Randomized Orthogonal Matching Pursuit with Adaptive Partial Selection (AROMP) in 2024, which mitigates computational overhead by adaptively selecting partial atoms in randomized iterations for high-dimensional sparse recovery, and the Group Forward–Backward Orthogonal Matching Pursuit (Group-FoBa-OMP) in 2024, designed for sparse estimation in general linear models by combining forward and backward greedy steps on grouped atoms.

The following table compares key variants against standard matching pursuit (MP) and orthogonal matching pursuit (OMP), highlighting trade-offs in performance and complexity (where N is the dictionary size and K the sparsity level):
| Variant | Pros | Cons | Complexity example (for recovery) |
| --- | --- | --- | --- |
| Standard MP | Simple implementation; low per-iteration cost | Accumulates approximation error due to non-orthogonality; slower overall convergence | O(N K) |
| OMP | Orthogonal updates yield better accuracy per step | Higher cost from repeated least-squares projections | O(N K^2) |
| CoSaMP | Rapid convergence; uniform recovery guarantees even with noise | More intricate implementation logic | O(N \log^2 N) for exact recovery under RIP |
| gOMP | Balances speed and accuracy via multi-atom selection; fewer iterations than OMP | Risk of selecting correlated atoms if the per-iteration selection count is too large | O(N K \log K) |
| ROMP | Thresholding enhances robustness against coherent dictionaries | Additional thresholding overhead | O(N K \log K) |
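For concreteness, the only structural change gOMP makes to the OMP loop shown in the previous subsection is selecting several atoms per iteration before the least-squares re-fit. Below is a minimal sketch under the same assumptions as the OMP example; the parameter name n_select and the simplified stopping rule (a target support size plus a residual tolerance) are illustrative choices, not part of the published algorithm:

```python
import numpy as np

def gomp(y, D, K, n_select=2, tol=1e-8):
    """Generalized OMP sketch: pick the n_select most-correlated new atoms per
    iteration (n_select = 1 recovers standard OMP), then re-fit by least squares."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    while len(support) < K and np.linalg.norm(residual) > tol:
        c = np.abs(D.T @ residual)
        c[support] = 0.0                       # exclude already-selected atoms
        new = np.argsort(c)[-n_select:]        # indices of the top correlations
        support.extend(int(i) for i in new)    # support may slightly exceed K
        G = D[:, support]
        coef, *_ = np.linalg.lstsq(G, y, rcond=None)  # orthogonal projection
        residual = y - G @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x, support
```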

References

  1. Matching pursuits with time-frequency dictionaries - IEEE Xplore. "The authors introduce an algorithm, called matching pursuit, that decomposes any signal into a ..."
  2. Matching Pursuit Algorithms - MATLAB & Simulink, MathWorks. "Matching pursuit is a greedy algorithm that computes the best nonlinear approximation to a signal in a complete, redundant dictionary."
  3. Orthogonal matching pursuit: recursive function approximation with ... "We describe a recursive algorithm to compute representations of functions with respect to nonorthogonal and possibly overcomplete dictionaries of elementary ..."
  4. Comparison of Matching Pursuit Algorithm with Other Signal ... (Mar 23, 2016). "In this review, we describe a multiscale decomposition technique based on an over-complete dictionary called matching pursuit (MP), and show ..."
  5. [PDF] Matching pursuits with time-frequency dictionaries. "We introduce an algorithm called matching pursuit, that decomposes any signal into a linear expansion of waveforms that belong to a redundant dictionary of ..."
  6. Sparse Approximate Solutions to Linear Systems - SIAM Journal on ... "It is shown that the problem is NP-hard, but that the well-known greedy heuristic is good ..."
  7. The Incremental Multi-Parameter Algorithm - J. Mather. Published in Conference Record Twenty…, 5 November 1990.
  8. [PDF] Matching pursuits with time-frequency dictionaries - S. Mallat, Zhifeng Zhang. Published in IEEE Transactions on Signal…, 1 August 1993.
  9. Signal Recovery From Random Measurements Via Orthogonal ... (Dec 31, 2007). "This paper demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m ..."
  10. Revisiting matching pursuit: Beyond approximate submodularity. "We study the problem of selecting a subset of vectors from a large set to obtain the best signal representation over a family of functions."
  11. [PDF] A Mathematical Introduction to Compressive Sensing (May 7, 2012). "matching pursuit was then introduced [294] (although it had appeared ... However, the lower bound on the coherence in Theorem 5.7 limits the ..."
  12. [PDF] On the exponential convergence of Matching Pursuits in quasi-incoherent dictionaries (Dec 11, 2010). "The main contribution is a detailed analysis of the approximation and stability properties of MP with quasi-incoherent dictionaries, and a bound ..."
  13. [PDF] Matching pursuit of images. "We introduce a fast algorithm to compute the Matching Pursuit decomposition for images with a complexity of O(N log2 N) per iteration for an image of N^2 pixels."
  14. [PDF] The Importance of Encoding Versus Training with Sparse Coding ... "One alternative to VQ that has served in this role is sparse coding, which has consistently yielded better results on benchmark recognition tasks (Yang et ..."
  15. Duration analysis using matching pursuit algorithm reveals longer ... "The gamma rhythm (30–80 Hz), often associated with high-level cortical functions, is believed to provide a temporal reference frame for spiking activity, ..."
  16. Constrained Backtracking Matching Pursuit Algorithm for Image ... (Feb 5, 2021). "Image reconstruction based on sparse constraints is an important research topic in compressed sensing. Sparsity adaptive matching pursuit ..."
  17. Parallel matching pursuit algorithm and analysis - ScienceDirect (Jun 15, 2023). "In this paper, we propose a new sparse recovery algorithm, called Parallel Matching Pursuit (PMP), which searches the signal support set in both depth and ..."
  18. [PDF] Orthogonal Matching Pursuit - Engineering Information Technology. "In this paper we describe a recursive algorithm to compute representations of functions with respect to nonorthogonal and possibly overcomplete dictionaries."
  19. [PDF] Signal Recovery from Random Measurements via Orthogonal ... "This article demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a ..."
  20. [PDF] Comparison of Orthogonal Matching Pursuit Implementations. "OMP is simple and straightforward to implement in a naive manner, and its computational complexity can be reduced for a region of problem dimensions using ..."
  21. Multichannel matching pursuit and EEG inverse solutions. "As the first step, EEG recordings are decomposed by multichannel matching pursuit algorithm - in this study we introduce a computationally efficient, suboptimal ..."
  22. Iterative signal recovery from incomplete and inaccurate samples (Mar 17, 2008). "This paper describes a new iterative recovery algorithm called CoSaMP that delivers the same guarantees as the best optimization-based approaches."
  23. arXiv:1111.6664, Generalized Orthogonal Matching Pursuit (Nov 29, 2011). "In this paper, we introduce an extension of the OMP for pursuing efficiency in reconstructing sparse signals. Our approach, henceforth referred ..."
  24. arXiv:2202.12380, Fast Matching Pursuit with Multi-Gabor Dictionaries (Feb 16, 2022). "In this work, we present an acceleration technique and an implementation of the matching pursuit algorithm acting on a multi-Gabor dictionary."