
Signal separation

Signal separation is a core technique in signal processing that involves recovering individual source signals from observed mixtures, typically without prior knowledge of the mixing process or the source characteristics, by exploiting properties such as statistical independence or time-frequency differences. This process is essential for disentangling complex signals in scenarios where multiple sources overlap, such as in sensor arrays capturing concurrent emissions. Blind source separation (BSS), a prominent subset, models the mixture as a linear combination \mathbf{x}(t) = \mathbf{A} \mathbf{s}(t), where \mathbf{s}(t) represents the independent sources and \mathbf{A} is the unknown mixing matrix, aiming to estimate a demixing matrix to recover \mathbf{s}(t).

The field originated from early work in neural network modeling and adaptive signal processing, with foundational contributions including the Hérault-Jutten algorithm in 1985 for adaptive separation and Pierre Comon's 1994 formalization using higher-order statistics. Key methods include independent component analysis (ICA), which maximizes statistical independence through measures like kurtosis or negentropy, and nonnegative matrix factorization (NMF), particularly effective for audio separation by decomposing spectrograms into additive components. Other approaches, such as time-frequency masking and sparse component analysis, address challenges in underdetermined systems where the number of sources exceeds the number of observations. These techniques often incorporate preprocessing steps like whitening to simplify the separation problem by decorrelating the mixture.

Applications of signal separation span diverse domains, including biomedical signal processing for artifact removal in electroencephalography (EEG) or isolating fetal electrocardiograms (ECG) from maternal signals to detect cardiac conditions early. In audio and speech processing, it enables the separation of individual voices in noisy environments, emulating the human ability to resolve the cocktail party problem, while in communications it mitigates interference between users. Emerging methods, such as W-Net architectures combining autoencoders, enhance performance in complex mixtures like synthetic ECG datasets, though they face limitations in high-noise conditions. Overall, signal separation continues to evolve, integrating deep learning to handle nonlinear and convolutive mixtures with greater accuracy.

Fundamentals

Definition and Principles

Signal separation refers to the computational process of recovering individual source signals from observed mixtures of those signals, typically without prior knowledge of either the sources or the mixing mechanism that produced the mixtures. This task arises in scenarios where multiple signals overlap or interfere, and the goal is to disentangle them to reveal the underlying components. Central principles underpinning signal separation include the assumption of linearity in the mixing model, where observed signals are formed as linear combinations of the source signals. Statistical independence among the source signals is another key assumption, enabling separation by exploiting differences in their statistical properties rather than mere decorrelation. Additionally, the mixtures are captured through multiple sensors or channels, which provide the multidimensional observations essential for estimating the sources. Signal separation methods are broadly categorized as supervised or unsupervised (blind). Supervised approaches leverage datasets that include both mixtures and their corresponding clean signals to learn separation models. In contrast, blind separation operates without such auxiliary data, relying instead on inherent properties of the signals and mixtures to perform the separation. Representative examples illustrate these concepts: recovering two overlapping audio tracks from a combined recording demonstrates disentangling mixed temporal signals, while isolating distinct features in a composite image highlights spatial separation of embedded components. These principles form the foundation for more advanced techniques, often building on linear models as a starting point.
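The linear mixing model can be made concrete with a minimal synthetic sketch in Python with NumPy; the two sources, the mixing matrix, and the signal shapes below are arbitrary illustrative choices, and the mixing matrix is known here only because the data are synthetic:

```python
import numpy as np

t = np.linspace(0, 1, 1000)

# Two hypothetical sources: a sinusoid and a sawtooth-like signal.
s1 = np.sin(2 * np.pi * 5 * t)
s2 = 2 * (t * 3 % 1) - 1
S = np.vstack([s1, s2])                # shape (n_sources, n_samples)

A = np.array([[0.8, 0.3],              # "unknown" mixing matrix (known here
              [0.4, 0.9]])             # only because the data are synthetic)
X = A @ S                              # observed mixtures, one row per sensor

# With A known, separation is simple inversion; blind methods must instead
# estimate an equivalent demixing matrix from X alone.
S_hat = np.linalg.inv(A) @ X
print(np.allclose(S_hat, S))           # True
```

A blind method sees only X and must exploit statistical properties of the rows of S, which is what the techniques in the following sections provide.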

Historical Development

The origins of signal separation can be traced to the late 1970s and 1980s, when researchers began exploring blind deconvolution techniques in communications and geophysics to recover signals from unknown convolutive mixtures without prior knowledge of the sources or channels. These early efforts focused on sparsity-promoting methods, such as minimum entropy deconvolution, which were applied to seismic data processing and channel equalization in communication systems. Concurrently, the cocktail party problem emerged as a motivating example for separating mixed auditory signals in reverberant environments, highlighting the need for robust blind techniques.

In the mid-1980s, foundational work on neural network-based blind source separation was pioneered by Jeanny Hérault and Christian Jutten, who proposed adaptive architectures inspired by biological sensory processing to unmix independent signals. Their 1986 contribution, presented at a neural networks conference, introduced iterative algorithms for adaptive separation, marking a shift toward statistical independence assumptions. Building on this, the 1990s brought a major breakthrough with the formalization of independent component analysis (ICA) by Pierre Comon in 1994, who defined it as a linear transformation minimizing statistical dependence among components, and Jean-François Cardoso, who developed contrast functions for practical estimation. Aapo Hyvärinen further advanced ICA in the late 1990s and early 2000s through efficient algorithms like FastICA, emphasizing non-Gaussianity for convergence. These developments were influenced by array signal processing techniques, including the MUSIC algorithm introduced by Ralph Otto Schmidt in the early 1980s, which was adapted in the 1990s for source separation tasks beyond direction-of-arrival estimation.

The late 1990s and 2000s expanded signal separation to nonnegative constraints with the introduction of non-negative matrix factorization (NMF) by Daniel D. Lee and H. Sebastian Seung in 1999, enabling part-based decomposition of signals like audio spectrograms. This method gained traction for its interpretability in underdetermined mixtures. The 2010s and 2020s shifted toward deep learning paradigms, with deep clustering proposed in 2015 by John R. Hershey and colleagues to embed and cluster time-frequency units, resolving the permutation problem in multi-speaker audio mixtures. In 2018, Yi Luo and Nima Mesgarani introduced Conv-TasNet, a fully convolutional time-domain network that outperformed time-frequency masking baselines for speech separation. Post-2020 advancements integrated transformer models, as in the Dual-Path Transformer Network by Jingjing Chen et al. in 2020, which captured long-range dependencies for improved end-to-end separation. Key contributors like Comon, Hyvärinen, and Luo have driven these evolutions, bridging statistical and neural approaches.

Mathematical Foundations

Signal Models and Mixtures

Signal separation techniques rely on mathematical models that describe how unobserved source signals combine to form observed mixtures. The most fundamental model is the instantaneous linear mixture, where the observed signals \mathbf{x}(t) \in \mathbb{R}^m at time t are expressed as a linear transformation of the source signals \mathbf{s}(t) \in \mathbb{R}^n, given by \mathbf{x}(t) = \mathbf{A} \mathbf{s}(t), with \mathbf{A} \in \mathbb{R}^{m \times n} as the unknown mixing matrix. This model assumes that the mixing occurs without delays or filtering, making it suitable for scenarios where sources are captured synchronously by sensors.

In real-world applications, particularly in acoustics, signals often propagate through media with delays and reverberations, leading to the convolutive mixture model. Here, the observed signals are \mathbf{x}(t) = \sum_{\tau=0}^{L-1} \mathbf{A}(\tau) \mathbf{s}(t - \tau), where L is the length of the mixing filters, \mathbf{A}(\tau) \in \mathbb{R}^{m \times n} represents the mixing coefficients at lag \tau, and the summation captures temporal dependencies. This formulation is prevalent in audio processing due to reverberation in enclosed environments.

The dimensionality of the mixing problem varies depending on the number of sources n and mixtures m. In underdetermined scenarios (m < n), fewer observations are available than sources, complicating separation, though it remains possible under sparsity assumptions using overcomplete representations. Determined (complete) mixtures occur when m = n, allowing square invertible mixing matrices in the instantaneous case. Overdetermined cases (m > n) provide redundant observations, enhancing robustness but requiring methods to handle the excess dimensions.

Practical mixtures often include additive noise, extending the instantaneous model to \mathbf{x}(t) = \mathbf{A} \mathbf{s}(t) + \mathbf{n}(t), where \mathbf{n}(t) \in \mathbb{R}^m denotes the noise vector, typically assumed Gaussian and independent of the sources. Similar noise terms can be incorporated into convolutive models for noisy environments. For identifiability in these models, sources are commonly assumed to be zero-mean, ensuring the mixing matrix is defined without bias shifts, and wide-sense stationary, with constant statistical properties over time. Crucially, non-Gaussianity of the sources is required, as Gaussian sources lead to ambiguity in linear mixtures due to the rotational invariance of the multivariate Gaussian distribution.
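A short sketch, assuming Laplacian (hence non-Gaussian) sources and randomly drawn filter taps purely for illustration, shows how the convolutive and noisy models generalize the instantaneous one:

```python
import numpy as np

rng = np.random.default_rng(1)
n_src, n_mix, T, L = 2, 2, 500, 8

S = rng.laplace(size=(n_src, T))             # non-Gaussian sources
A_taps = rng.normal(size=(L, n_mix, n_src))  # mixing filters A(tau)

# Convolutive model: x(t) = sum_tau A(tau) s(t - tau) + n(t)
X = np.zeros((n_mix, T))
for tau in range(L):
    X[:, tau:] += A_taps[tau] @ S[:, :T - tau]
X += 0.01 * rng.normal(size=X.shape)         # additive Gaussian noise

# The instantaneous model is recovered as the special case L = 1.
```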

Problem Formulation and Assumptions

The signal separation problem, particularly in the context of blind source separation (BSS), aims to recover unknown source signals \mathbf{s}(t) = [s_1(t), \dots, s_n(t)]^T from observed mixtures \mathbf{x}(t) = [x_1(t), \dots, x_m(t)]^T, where typically m \geq n, though underdetermined cases with m < n are also addressed using additional constraints such as sparsity. The primary objective is to estimate a separation matrix \mathbf{W} \in \mathbb{R}^{n \times m} such that the estimated sources \mathbf{y}(t) = \mathbf{W} \mathbf{x}(t) \approx \mathbf{s}(t), often by minimizing measures of statistical dependence like mutual information between the components of \mathbf{y}(t) or higher-order cross-cumulants.

A key condition for identifiability in linear BSS is Comon's theorem, which states that the sources are separable up to permutation and scaling if they are statistically independent and at most one of them is Gaussian. This theorem ensures that the mixing process can be uniquely inverted under these constraints, though extensions exist for more general cases. Central assumptions enabling solutions include the statistical independence of the sources, linear instantaneous mixing as modeled by \mathbf{x}(t) = \mathbf{A} \mathbf{s}(t) (where \mathbf{A} is the unknown mixing matrix), and often stationarity of the sources to allow consistent estimation over time. Challenges arise with Gaussian sources, where second-order decorrelation alone does not suffice for identifiability due to the rotational invariance of the Gaussian distribution, or with nonlinear mixing, which violates the linear model and requires alternative approaches.

Solutions inherently suffer from permutation ambiguity, where the order of recovered sources can be arbitrary, and scaling ambiguity, where each source can be multiplied by a scalar while the corresponding column of the mixing matrix \mathbf{A} is scaled inversely without altering the fit. These indeterminacies are typically resolved post-separation through additional criteria, such as ordering by variance or application-specific constraints. Performance is evaluated using metrics that decompose the error in estimated sources, including the Signal-to-Distortion Ratio (SDR), which measures overall fidelity; the Signal-to-Interference Ratio (SIR), which quantifies residual interference from other sources; and the Signal-to-Artifacts Ratio (SAR), which assesses artifacts introduced by the separation process.
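The permutation and scaling indeterminacies can be verified numerically. In the following sketch the demixing matrix is built from the known mixing matrix, which a blind method would not have; the point is only to show that a permuted and rescaled solution explains the mixture equally well:

```python
import numpy as np

rng = np.random.default_rng(2)
S = rng.laplace(size=(2, 1000))
A = rng.normal(size=(2, 2))
X = A @ S

W = np.linalg.inv(A)                      # an ideal demixing matrix
P = np.array([[0.0, 1.0], [1.0, 0.0]])    # permutation
D = np.diag([2.0, -0.5])                  # scaling (sign flips included)

# Both W and D @ P @ W are valid separators: y = (D P W) x recovers the
# sources reordered and rescaled, with no way to prefer one over the other
# from the mixture alone.
Y1 = W @ X
Y2 = D @ P @ W @ X
print(np.allclose(Y1, S))                         # exact recovery
print(np.allclose(np.linalg.inv(D @ P) @ Y2, S))  # same fit up to P and D
```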

Techniques

Blind Source Separation

Blind source separation (BSS) is a signal processing technique that aims to recover unobserved source signals from a set of observed mixtures without prior knowledge of the mixing process or the source signals themselves, relying primarily on the statistical independence of the sources. The core assumption in BSS is that the source signals are statistically independent and non-Gaussian, allowing the separation to exploit higher-order statistics beyond mere correlation. Classical methods in BSS often begin with principal component analysis (PCA), which utilizes second-order statistics to decorrelate the observed mixtures and perform prewhitening, reducing the problem dimensionality and simplifying subsequent steps.

A prominent approach is the Joint Approximate Diagonalization of Eigenmatrices (JADE) algorithm, introduced by Cardoso and Souloumiac in 1993, which achieves separation by approximately jointly diagonalizing multiple eigenmatrices derived from fourth-order cumulants of the whitened data, thereby exploiting non-Gaussianity to identify independent components. The FastICA algorithm, developed in 1999, provides an efficient fixed-point iteration method for BSS by maximizing the negentropy of the estimated sources, a measure of non-Gaussianity approximated using contrast functions. It iteratively updates the separation vectors using a nonlinearity derived from the contrast function, such as \mathbf{w}^+ = E\left\{\mathbf{x}\, g(\mathbf{w}^T \mathbf{x})\right\} - E\left\{g'(\mathbf{w}^T \mathbf{x})\right\} \mathbf{w}, followed by normalization, where g(u) = \frac{d}{du} G(u) and a common choice for G(u) is \log \cosh u due to its robustness and computational simplicity.

Despite their effectiveness, classical BSS methods like JADE and FastICA exhibit limitations, including sensitivity to outliers, which can distort cumulant estimates and lead to poor separation performance, as well as a strict reliance on the assumption of source independence, which may not hold in all real-world scenarios. Equivariant adaptive source separation via independence (EASI) and its variants address some of these issues by incorporating a multiplicative group structure in the parameter space for improved equivariance and stability in online adaptive settings. Independent component analysis (ICA) represents a key subset of BSS, focusing specifically on linear mixtures under independence assumptions.
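The fixed-point update above translates directly into code. The following is a minimal one-unit FastICA sketch on whitened data, using g(u) = tanh(u) as the derivative of G(u) = log cosh u; the iteration count, tolerance, and synthetic Laplacian sources are illustrative assumptions:

```python
import numpy as np

def fastica_one_unit(Z, n_iter=100, tol=1e-8, seed=0):
    """One-unit FastICA on whitened data Z (dims x samples)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        wx = w @ Z
        # w+ = E{z g(w^T z)} - E{g'(w^T z)} w, with g = tanh, g' = 1 - tanh^2
        w_new = (Z * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx) ** 2).mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1) < tol:   # converged (up to sign)
            return w_new
        w = w_new
    return w

# Center and whiten synthetic mixtures, then extract one independent direction.
rng = np.random.default_rng(3)
S = rng.laplace(size=(2, 5000))
X = np.array([[0.7, 0.5], [0.2, 0.9]]) @ S
X -= X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X       # whitened mixtures
w = fastica_one_unit(Z)                      # one row of the demixing matrix
```

Extracting all components repeats this with a Gram-Schmidt-style deflation against previously found vectors, which the full algorithm performs.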

Independent Component Analysis

Independent Component Analysis (ICA) is a prominent technique within blind source separation that seeks to recover unobserved independent source signals from their linear mixtures by exploiting the statistical independence of the sources. Unlike methods that focus on decorrelation, such as PCA, ICA assumes that the sources are non-Gaussian and mutually independent, allowing for the identification of the mixing process up to permutation and scaling ambiguities. The core objective of ICA is to find an unmixing matrix \mathbf{W} such that the estimated sources \hat{\mathbf{s}} = \mathbf{W} \mathbf{x} maximize the statistical independence among the components \hat{s}_i. This is formally achieved by minimizing the mutual information I(\hat{s}_1, \dots, \hat{s}_n) between the estimated sources, where mutual information quantifies the dependence as I(\mathbf{Y}) = \sum_i H(y_i) - H(\mathbf{Y}), with H denoting differential entropy; the minimization \hat{\mathbf{s}} = \arg\min I(\hat{s}_1, \dots, \hat{s}_n) yields components that are as independent as possible under the linear model \mathbf{x} = \mathbf{A} \mathbf{s}.

A common approach to solving this is through maximum likelihood estimation, assuming the source densities are known or approximated, often as super-Gaussian distributions to model the sparse signals typical in signal separation tasks. The log-likelihood objective is given by L(\mathbf{W}) = \sum_i \log p(y_i) + \log |\det \mathbf{W}|, where y_i = \mathbf{w}_i^T \mathbf{x} are the projections and the determinant term accounts for the volume change of the linear transformation in the density; maximization of this likelihood under independence assumptions leads to the ICA solution via gradient-based or fixed-point algorithms. Practical implementations, such as the FastICA algorithm, approximate this by using negentropy as a non-Gaussianity measure in a fixed-point iteration scheme, enabling efficient computation without explicit density estimation. Preprocessing steps are essential for reliable estimation: data centering removes the mean to ensure zero-mean sources, while whitening (sphering) transforms the data to unit variance and uncorrelated components via \mathbf{z} = \mathbf{V}^{-1/2} (\mathbf{x} - \mathbb{E}[\mathbf{x}]), where \mathbf{V} is the covariance matrix of \mathbf{x}, reducing the ICA problem to orthogonal unmixing and simplifying the optimization.

Several variants extend the basic ICA framework to address specific challenges. Infomax ICA reformulates independence maximization as an information-theoretic objective using a neural network architecture, where the entropy of the nonlinearly transformed outputs is maximized, providing a gradient-based learning rule suitable for adaptive processing. Kernel ICA handles nonlinear dependencies by mapping data to a high-dimensional feature space via kernel functions, estimating independence through kernelized canonical correlation analysis while preserving computational efficiency for small datasets. Online ICA adapts the algorithm for streaming data by employing recursive updates, such as natural gradient steps on mini-batches, allowing real-time separation without storing the entire dataset. Extensions like complex ICA accommodate frequency-domain signals by extending the real-valued model to circularly symmetric complex sources, using complex-valued nonlinearities in the fixed-point algorithm to separate modulated signals effectively.
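As an illustration of the maximum-likelihood view, the sketch below implements natural-gradient ascent on the ICA log-likelihood, using the update W <- W + eta (I - E[phi(y) y^T]) W with phi(y) = tanh(y), which corresponds to a super-Gaussian source prior proportional to 1/cosh(y); the step size, epoch count, and synthetic data are arbitrary assumptions:

```python
import numpy as np

def ml_ica_natural_gradient(X, lr=0.02, n_epochs=200, seed=0):
    """Batch natural-gradient ascent on L(W) = sum_i E[log p(y_i)] + log|det W|,
    with -d/dy log p(y) = tanh(y) for a super-Gaussian prior."""
    rng = np.random.default_rng(seed)
    n, T = X.shape
    W = np.eye(n) + 0.1 * rng.normal(size=(n, n))
    for _ in range(n_epochs):
        Y = W @ X
        # Natural gradient direction: (I - E[tanh(y) y^T]) W
        G = np.eye(n) - (np.tanh(Y) @ Y.T) / T
        W += lr * G @ W
    return W

rng = np.random.default_rng(4)
S = rng.laplace(size=(2, 5000))
X = np.array([[0.6, 0.4], [0.3, 0.8]]) @ S
X -= X.mean(axis=1, keepdims=True)    # centering, as described above
W = ml_ica_natural_gradient(X)
Y = W @ X                             # estimates, up to permutation and scaling
```

The natural gradient avoids an explicit matrix inversion of W at each step, which is why it is preferred over the plain gradient in adaptive and online settings.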

Sparsity-Based and Deep Learning Methods

Sparsity-based methods exploit the assumption that signals can be represented using a small number of basis elements from an overcomplete dictionary, enabling efficient separation even from underdetermined mixtures. A foundational approach is nonnegative matrix factorization (NMF), which decomposes a non-negative matrix \mathbf{X} approximating the observed signal into a basis matrix \mathbf{W} and an activation matrix \mathbf{H} such that \mathbf{X} \approx \mathbf{W} \mathbf{H}, with constraints ensuring non-negativity to reflect physical signal properties like spectral magnitude. The Lee-Seung algorithm optimizes this factorization through iterative multiplicative updates, converging to a local minimum while preserving interpretability. In audio processing, NMF is particularly applied to magnitude spectrograms, where \mathbf{X} holds the time-frequency magnitudes, allowing separation of harmonic components like vocals from accompaniment by learning spectral templates in \mathbf{W}.

Dictionary learning extends sparsity by adaptively constructing the dictionary \mathbf{D} to minimize reconstruction error under sparsity constraints, formulated as sparse coding where each signal \mathbf{x} is approximated as \mathbf{x} = \mathbf{D} \boldsymbol{\alpha} with \boldsymbol{\alpha} having few non-zero entries. Optimization often employs basis pursuit, solving \min \|\boldsymbol{\alpha}\|_1 subject to \mathbf{x} = \mathbf{D} \boldsymbol{\alpha}, which promotes sparsity via the \ell_1-norm and enables separation by matching mixture components to learned atoms. Seminal algorithms like K-SVD iteratively update dictionary atoms and sparse codes. These methods outperform traditional techniques in handling real-world signals with structured sparsity, though they assume linear mixing and fixed dictionaries.

Deep learning approaches have advanced signal separation by learning hierarchical representations directly from data, surpassing sparsity methods in capturing complex patterns. U-Net architectures, originally developed for image segmentation, enable pixel-wise separation in image-like representations such as spectrograms, using encoder-decoder paths with skip connections to preserve spatial details during separation of overlapping sources like vocals from music. In audio, TasNet employs learnable encoders and decoders directly in the time domain, avoiding spectrogram artifacts, and its convolutional successor Conv-TasNet achieves state-of-the-art signal-to-distortion ratios exceeding 15 dB on benchmark datasets like WSJ0-2mix through convolutional blocks that model temporal dependencies. Post-2020, transformer-based models leverage self-attention mechanisms to capture long-range dependencies, enhancing audiovisual separation by jointly processing audio and video cues for tasks like speaker diarization in dynamic scenes. Recent advances as of 2025 include diffusion models and multimodal integrations for improved robustness in speech and biomedical applications.

Hybrid methods integrate sparsity with deep learning to combine interpretability and performance, such as NMF-Net variants that unfold NMF iterations into neural layers for end-to-end training on audio mixtures. These approaches embed non-negative constraints within convolutional or recurrent networks, improving separation of time-varying sources by leveraging NMF's spectral modeling alongside neural feature extraction. Compared to ICA, which assumes a linear mixture of statistically independent sources, sparsity-based and deep methods better address nonlinear mixing through flexible representations. Key advantages include robustness to nonlinear distortions and superior generalization from large training sets, enabling applications in complex real-world scenarios. However, challenges persist in data requirements, as deep models demand extensive labeled mixtures for supervised training, and high computational costs from attention mechanisms or iterative optimizations limit deployment on resource-constrained devices.
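A minimal NMF sketch using the Lee-Seung multiplicative updates for the Frobenius-norm objective; the toy "spectrogram", the rank, and the iteration count are illustrative assumptions:

```python
import numpy as np

def nmf_multiplicative(X, rank, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||X - WH||_F^2.
    X must be non-negative, e.g. a magnitude spectrogram."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update activations
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update spectral templates
    return W, H

# Toy "spectrogram": two spectral templates with different activations.
rng = np.random.default_rng(5)
true_W = np.abs(rng.normal(size=(64, 2)))
true_H = np.abs(rng.normal(size=(2, 100)))
X = true_W @ true_H
W, H = nmf_multiplicative(X, rank=2)
# Each rank-1 term W[:, k:k+1] @ H[k:k+1, :] is one separated component.
```

The multiplicative form guarantees the factors stay non-negative without any projection step, which is the main appeal of the Lee-Seung scheme.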

Applications

Audio and Speech Processing

In audio and speech processing, signal separation addresses the challenge of isolating individual sound sources from mixed recordings, a task central to applications like hearing aids and voice assistants. The cocktail party problem, which describes the human ability to focus on a single speaker amid background noise, has motivated much of this field by highlighting the need for robust separation in reverberant, multi-source environments. This scenario often involves convolutive mixtures due to echoes and delays, leading to the development of convolutive blind source separation (BSS) techniques tailored for acoustic signals.

Speech enhancement techniques focus on isolating target voices from noise or competing speakers, particularly in single-channel scenarios where only one microphone captures the mixture. Nonnegative matrix factorization (NMF) decomposes spectrograms into basis elements representing speech components, enabling separation by reconstructing the desired source while suppressing interference. Deep clustering, introduced as a discriminative approach, learns low-dimensional representations of time-frequency units and clusters them to assign segments to specific speakers, achieving effective single-channel separation. These methods adapt general principles to the temporal and spectral structure of speech, improving intelligibility in noisy settings like teleconferencing.

Music source separation targets the extraction of individual instruments or vocals from polyphonic recordings, often using factorization to model harmonic and rhythmic patterns. NMF-based approaches factorize spectrograms into non-negative activations and templates for sources like drums, bass, or vocals, allowing iterative refinement to disentangle overlapping frequencies. Benchmarks on the MUSDB18 dataset, released in 2017 for the Signal Separation Evaluation Campaign (SiSEC), have standardized evaluation, with top methods achieving scale-invariant signal-to-distortion ratios (SI-SDR) exceeding 10 dB for vocals on this corpus of multitrack music.

Real-time applications leverage microphone arrays to capture spatial information, combining beamforming for directional enhancement with ICA to resolve non-stationary sources. Beamforming suppresses off-axis interference by weighting array signals, while ICA unmixes the focused outputs, enabling low-latency separation in scenarios like smart speakers or robotic audition. Such hybrid systems process convolutive mixtures with delays under 100 ms, supporting interactive environments.

Performance in audio separation is commonly assessed using the scale-invariant signal-to-distortion ratio (SI-SDR), which measures similarity to the reference source while normalizing for scale differences, providing a perceptually relevant metric insensitive to scaling artifacts. SI-SDR values above 5 dB typically indicate audible improvements in source quality. Key challenges include reverberation, which smears signals across time through room reflections, complicating localization and increasing permutation ambiguities in frequency-domain methods, and overlapping harmonics, where simultaneous notes from instruments or voices share spectral bins, leading to artifacts in separation. These issues persist in real-world acoustics, demanding adaptive models that incorporate spatial cues or prior knowledge of source statistics.
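The SI-SDR metric described above reduces to a few lines of code. This sketch follows the scale-invariant definition (zero-mean signals, projection of the estimate onto the reference); the test tone and noise level are chosen arbitrarily:

```python
import numpy as np

def si_sdr(estimate, reference):
    """Scale-invariant SDR in dB: compare the scaled target
    (projection of the estimate onto the reference) with the residual."""
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    error = estimate - target
    return 10 * np.log10(np.sum(target ** 2) / np.sum(error ** 2))

t = np.linspace(0, 1, 8000)
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.1 * np.random.default_rng(6).normal(size=t.size)
print(f"SI-SDR: {si_sdr(noisy, clean):.1f} dB")
```

Because alpha rescales the reference before comparison, multiplying the estimate by any non-zero constant leaves the score unchanged, which is exactly the scaling invariance the metric is named for.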

Biomedical Imaging and Signals

Signal separation techniques play a crucial role in biomedical imaging and physiological signal analysis, enabling the isolation of clinically relevant information from noisy or mixed data sources. In magnetic resonance imaging (MRI), independent component analysis (ICA) is widely applied to remove artifacts caused by cardiac motion, distinguishing physiological noise from tissue signals. For instance, in pediatric cardiac MRI, ICA-based denoising has been shown to enhance image quality by separating motion-induced artifacts, improving diagnostic accuracy without additional scanning time. Similarly, in functional MRI (fMRI), ICA facilitates source separation for mapping brain activity, identifying spatially independent patterns of neural activation during tasks such as color-naming, which helps delineate task-related signals from physiological fluctuations.

In electroencephalography (EEG) and magnetoencephalography (MEG), multi-channel recordings are prone to artifacts from eye blinks, muscle activity, and cardiac sources, which ICA effectively mitigates by decomposing signals into independent components representing neural versus non-neural origins. Extended ICA algorithms, such as Infomax, have demonstrated robust removal of these artifacts across diverse EEG datasets, preserving underlying brain signals for improved analysis in clinical and cognitive research. For electrocardiography (ECG), blind source separation (BSS) methods enable the extraction of fetal ECG from maternal abdominal signals, a challenge addressed since the mid-1990s through subspace separation techniques that exploit statistical independence to isolate the weaker fetal component. Early adaptive BSS approaches, including PCA- and ICA-based variants, achieved reliable fetal detection in mixed recordings.

Ultrasound imaging benefits from sparsity-based signal separation to suppress clutter in Doppler flows, where low-rank and sparse models distinguish tissue motion echoes from blood signals, enhancing vascular visualization. These methods, evaluated across multiple sparsity-promoting algorithms, outperform traditional filters in clutter rejection while preserving blood flow signals. In recent advancements, deep learning-driven separation in positron emission tomography (PET) scans isolates tumor-specific signals from multi-tracer mixtures, as seen in dual-tracer protocols where convolutional networks reconstruct and segregate uptake patterns for precise detection. Such AI approaches, applied to simulations and clinical data, reduce acquisition time and improve tumor localization without sequential scanning.
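A hedged sketch of the ICA artifact-removal workflow on synthetic multichannel data, using scikit-learn's FastICA implementation; selecting the artifact component by kurtosis is one common heuristic (blink-like artifacts are spiky), and all signal shapes here are illustrative rather than physiological:

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

# Synthetic multichannel recording: two neural-like signals plus a large,
# blink-like artifact mixed into every channel (illustrative only).
rng = np.random.default_rng(7)
t = np.linspace(0, 10, 2000)
neural = np.vstack([np.sin(2 * np.pi * 10 * t), rng.laplace(size=t.size)])
blink = (np.abs(t % 2 - 1) < 0.05).astype(float) * 5.0
S_true = np.vstack([neural, blink])
X = (rng.normal(size=(4, 3)) @ S_true).T          # (samples, channels)

ica = FastICA(n_components=3, random_state=0)
S_est = ica.fit_transform(X)                      # (samples, components)

# Flag the spikiest component as the artifact, zero it, project back.
artifact = np.argmax(kurtosis(S_est, axis=0))
S_est[:, artifact] = 0.0
X_clean = S_est @ ica.mixing_.T + ica.mean_       # cleaned channel data
```

In practice, artifact components are usually confirmed by inspecting their topographies and time courses rather than by a single statistic.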

Image and Video Analysis

Signal separation in image and video analysis involves decomposing visual data into constituent components, such as separating mixed spectral signatures in hyperspectral images or distinguishing foreground motion from static backgrounds in videos, to enable tasks like material identification and object tracking.

In image processing, hyperspectral unmixing addresses the challenge of identifying materials within pixels that contain multiple substances, adapting the linear spectral mixture model (LSMM) to account for spectral variability and non-linear effects. The LSMM posits that observed pixel spectra are convex combinations of endmember spectra weighted by abundance fractions, with adaptations like the augmented linear mixing model incorporating perturbations to handle endmember variability, improving unmixing accuracy on real hyperspectral datasets. Nonnegative matrix factorization (NMF) has also been applied to hyperspectral unmixing by factorizing the data matrix into non-negative factors representing endmembers and abundances. Another key application is shadow removal, where sparsity-based methods exploit the low-rank structure of shadow-free regions and sparse shadow perturbations to reconstruct illuminated images, using local dictionaries learned from image patches to achieve robust separation even under varying illumination. Blind image separation techniques further enable the recovery of overlaid textures without prior knowledge of mixing processes, leveraging dictionary learning to sparsely represent sources over adaptive bases. In this approach, an iterative framework jointly optimizes source separation and dictionary adaptation, allowing recovery of underlying textures from superimposed images by minimizing reconstruction errors under sparsity constraints, as demonstrated on synthetic and natural image mixtures.

For video analysis, robust principal component analysis (RPCA) separates moving objects from static backgrounds by decomposing video frames into a low-rank component (the background) and a sparse component (foreground motion), with the seminal 2011 algorithm of Candès et al. solving the problem via principal component pursuit, demonstrated on surveillance footage. Subspace clustering extends this to crowd analysis, grouping trajectories or features into coherent subspaces to separate individual or group motions from cluttered scenes, using spectral methods to handle non-linear manifolds in high-dimensional video data.

Deep learning has advanced these methods, particularly for challenging environments. Generative adversarial networks (GANs) facilitate separation in underwater images by learning to disentangle scattering and absorption effects, with a 2019 framework using cycle-consistent GANs to restore clear scenes from degraded inputs, improving visibility metrics like underwater image quality measures on real datasets. In videos, motion disentanglement employs deep networks to separate rigid and non-rigid motion components, enabling efficient optical flow estimation by factorizing flows into independent subspaces, as shown in models that reduce computational cost while maintaining accuracy on benchmarks like Sintel. The Berkeley Segmentation Dataset (BSDS500) serves as a benchmark for evaluating separation algorithms, providing ground-truth segmentations to assess boundary detection and region decomposition in natural images. Key challenges include handling illumination variations, which introduce non-stationary mixtures, and occlusions, which obscure signal components, necessitating robust priors like sparsity or low-rank assumptions to maintain separation fidelity.
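The low-rank-plus-sparse decomposition behind RPCA can be sketched with a simplified alternating-shrinkage scheme; this is not the full principal component pursuit ADMM solver of the 2011 work, and the thresholds below are heuristic choices for the toy data:

```python
import numpy as np

def rpca_simple(X, lam=None, mu=None, n_iter=100):
    """Simplified alternating-shrinkage sketch: decompose X into a
    low-rank part L (background) and a sparse part S (foreground)."""
    m, n = X.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * np.abs(X).mean()
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(n_iter):
        # Singular value thresholding for the low-rank part
        U, sig, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = U @ np.diag(np.maximum(sig - mu, 0)) @ Vt
        # Elementwise soft thresholding for the sparse part
        R = X - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam * mu, 0)
    return L, S

# Frames as columns: a static rank-1 background plus a moving sparse blob.
rng = np.random.default_rng(8)
bg = np.outer(rng.random(400), np.ones(50))        # rank-1 background
fg = np.zeros((400, 50))
for j in range(50):
    fg[(8 * j) % 400:(8 * j) % 400 + 10, j] = 1.0  # "moving object"
L, S = rpca_simple(bg + fg)
```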

Emerging Domains

In wireless communications, blind source separation (BSS) techniques have gained prominence for multi-user detection in 5G and emerging 6G multiple-input multiple-output (MIMO) systems, particularly in integrated sensing and communication (ISAC) scenarios. Post-2020 standards emphasize in-band full-duplex (IBFD) operations, where BSS enables simultaneous self-interference cancellation and channel estimation without dedicated training waveforms, improving spectral efficiency in massive MIMO setups. For instance, FastICA-based frameworks separate self-interference signals from communication streams in IBFD nodes, achieving convergence in under 18 iterations at 10 dB SNR and reducing estimation errors with frame sizes exceeding 350 symbols. This approach supports 6G's joint communication and sensing requirements by exploiting reflected interference for environmental perception alongside data transmission.

Environmental monitoring leverages independent component analysis (ICA) and related methods to separate pollution sources in sensor networks, addressing the challenge of disentangling overlapping emissions from urban or industrial data streams. In wireless sensor networks deployed for air quality assessment, ICA identifies independent sources of variability in pollutant concentrations, such as PM2.5 and PM10, by modeling mixtures influenced by traffic, industry, and meteorology without prior knowledge of source profiles. A study on indoor air pollutants applied BSS to time-series data from electrochemical sensors, successfully isolating distinct pollutant sources, with separation accuracy enhanced by non-negative constraints to reflect physical realism. These techniques enable source apportionment in distributed networks, reducing calibration needs and supporting real-time monitoring in high-density urban areas.

In autonomous driving, sensor fusion techniques draw on signal separation principles to isolate ego-motion from dynamic elements in lidar, radar, and camera data, facilitating robust navigation in cluttered environments. Ego-motion estimation methods, such as those using generalized iterative closest point (GICP) on point clouds, separate static backgrounds from moving objects by accumulating frames corrected for vehicle motion, improving 3D detection metrics like mean average precision (mAP) from 32.0 to 38.7. Learning-based variants, including methods operating on scene flow (PCAc), adapt to radar-lidar fusion by thresholding radial velocities to classify dynamic points, akin to BSS in isolating independent motion signals. This separation enhances autonomous systems' perception, particularly in urban datasets like View-of-Delft, where noisy sensor inputs demand disentanglement for reliable mapping.

Quantum signal processing represents an emerging frontier, where entanglement-based blind quantum source separation (BQSS) addresses challenges in quantum communications by disentangling mixed qubit states without classical priors. In multi-qubit systems, BQSS exploits superposition and entanglement to reverse unknown mixing operations, such as undesired spin couplings, using separation criteria adapted to the probabilistic nature of quantum measurements. Protocols for blind qubit disentanglement employ structures with quantum processing units (QPUs) to recover pure states from entangled mixtures, achieving fidelity improvements in scenarios like transmission over lossy channels. Research from the 2020s highlights BQSS's potential in entanglement distribution networks, where it separates communication signals from noise in time-bin or frequency-encoded photons, paving the way for scalable quantum repeaters.

Future trends in signal separation emphasize integration with edge computing for real-time processing in Internet of Things (IoT) ecosystems, enabling low-latency source extraction at distributed nodes. Memristor-based accelerators facilitate in-situ separation algorithms, such as ICA for acoustic or vibrational signals, reducing power consumption in edge devices compared to cloud offloading. In IoT energy management, blind disaggregation treats appliance signals as mixed sources, applying sparse coding variants to separate consumption patterns from aggregate meter data with high signal-to-distortion ratios. These advancements support 6G-enabled IoT by combining BSS with mobile edge computing, allowing anomaly detection in sensor streams without centralization, as demonstrated in acoustic monitoring for smart factories.
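As a final illustration, the sparse-coding view of energy disaggregation can be sketched with iterative soft thresholding (ISTA); the appliance dictionary, sparsity weight, and signal sizes are hypothetical stand-ins for learned appliance signatures:

```python
import numpy as np

def ista(x, D, lam=0.05, n_iter=200):
    """Iterative soft thresholding for min_a ||x - D a||^2 + lam ||a||_1,
    a minimal stand-in for sparse-coding-based disaggregation."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / (largest singular value)^2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a + step * D.T @ (x - D @ a)     # gradient step on the fit term
        a = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0)  # shrinkage
    return a

# Hypothetical appliance "signatures" as dictionary atoms; the aggregate
# meter reading is a sparse combination of a few of them.
rng = np.random.default_rng(9)
D = np.abs(rng.normal(size=(96, 8)))         # 8 appliance profiles, 96 samples
a_true = np.zeros(8)
a_true[[1, 5]] = [1.0, 0.7]
x = D @ a_true + 0.01 * rng.normal(size=96)
a_hat = ista(x, D)                           # recovers mostly indices 1 and 5
```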
