
Robust principal component analysis

Robust principal component analysis (RPCA) is a computational method that decomposes an observed data matrix into the sum of a low-rank matrix, capturing the underlying principal components or global structure, and a sparse matrix, representing gross corruptions, outliers, or sparse noise that traditional principal component analysis (PCA) cannot handle effectively. Introduced in a seminal paper by Candès, Li, Ma, and Wright, RPCA addresses the limitations of classical PCA, which assumes small, densely distributed noise and fails when data contains arbitrarily large but sparse errors, such as occlusions in images or sensor failures.

The core formulation of RPCA, known as principal component pursuit, minimizes the nuclear norm of the low-rank component (to promote low rank) plus a scaled \ell_1-norm of the sparse component (to enforce sparsity), subject to their sum equaling the observed matrix: \min_{L,S} \|L\|_* + \lambda \|S\|_1 s.t. L + S = M. Here, \lambda = 1/\sqrt{\max(m,n)} is a regularization parameter that balances the two terms, with m and n denoting the matrix dimensions. Theoretical guarantees establish that this approach exactly recovers the true low-rank and sparse components with high probability, provided the low-rank matrix has sufficiently small rank (bounded by \rho_r n / (\mu (\log n)^2)) and the sparse component has bounded support (at most \rho_s n^2 nonzero entries), where \rho_r, \rho_s < 1 are constants and \mu measures the low-rank matrix's incoherence. Practical algorithms for solving RPCA include the augmented Lagrangian multiplier (ALM) method, which uses alternating minimization and singular value thresholding for efficient convergence, often achieving state-of-the-art performance in large-scale settings. Extensions to handle incomplete observations incorporate missing data into the constraints.

RPCA has found wide applications in fields like computer vision, including background subtraction in surveillance videos, where the low-rank component models static scenes and the sparse component detects moving foreground objects, and face recognition, separating facial structure from occlusions or lighting variations. It also aids in latent variable modeling for recommender systems, shadow removal in images, and preprocessing high-dimensional data in neuroscience and finance to mitigate outlier effects. Ongoing research explores nonconvex variants and stochastic approximations for scalability to massive datasets, with recent advances (as of 2025) including cellwise robust and tensor-based methods.

Introduction

Overview and Motivation

Robust principal component analysis (RPCA) is a statistical technique that extends classical principal component analysis by decomposing an observed data matrix M into a low-rank component L, which captures the principal underlying structure of the data, and a sparse component S, which accounts for outliers or corruptions, such that M = L + S. This decomposition allows RPCA to separate systematic patterns from anomalous entries in high-dimensional datasets. The primary motivation for RPCA arises from the vulnerability of classical principal component analysis to gross errors, such as heavy-tailed noise, missing values, or adversarial corruptions, which can severely distort the estimated principal components. By assuming that the errors form a sparse matrix, meaning only a small fraction of entries are corrupted, RPCA provides a robust alternative that isolates these anomalies without compromising the recovery of the low-rank structure. RPCA offers significant benefits in real-world applications, including enhanced dimensionality reduction for noisy datasets, effective separation of static backgrounds from moving foregrounds in video surveillance, and reliable anomaly detection in sensor data or financial time series. These advantages stem from its ability to recover both components exactly under mild conditions, leading to more accurate modeling in contaminated environments. The roots of RPCA trace back to robust statistics in the late 20th century, but its modern formulation as a low-rank plus sparse decomposition emerged in the late 2000s amid advances in computer vision and signal processing, gaining widespread adoption following the theoretical guarantees established by Candès et al. in 2011.

Historical Development

The development of robust principal component analysis (RPCA) emerged from efforts in robust statistics to address the sensitivity of classical principal component analysis (PCA) to outliers, which can distort the estimation of principal components derived from the empirical covariance matrix. In the 1990s, initial variants focused on projection pursuit techniques to detect and mitigate outliers in high-dimensional data. A key early contribution was ROBPCA, introduced by Hubert, Rousseeuw, and Vanden Branden in 2005, which combines projection pursuit for initial dimension reduction with robust covariance estimation using the minimum covariance determinant to handle contaminated datasets effectively. The field gained a foundational breakthrough with the seminal work of Candès, Li, Ma, and Wright between 2009 and 2011, who formulated RPCA as the convex optimization problem of decomposing a matrix into a low-rank component and a sparse outlier matrix, using nuclear norm minimization for the low-rank part and the \ell_1-norm for the sparse part. They proved that under conditions of low-rank incoherence and sparsity, the method achieves exact recovery of the underlying components with high probability. This approach shifted the field from heuristic robustification to a theoretically grounded framework, enabling applications in signal processing and computer vision. Following this, post-2011 research expanded on computational efficiency and flexibility, introducing non-convex methods to accelerate convergence while maintaining robustness. For instance, Qiu et al. in 2014 proposed recursive algorithms for robust low-rank and sparse recovery, demonstrating faster computation for large-scale problems by iteratively refining estimates without relying solely on convex relaxations. Concurrently, integration with deep learning emerged, such as unfolding optimization iterations into neural network architectures for end-to-end learning of parameters. As of 2025, recent trends emphasize handling complex noise distributions and connections to machine learning paradigms. Roy, Basu, and Ghosh in 2024 developed a density power divergence-based RPCA estimator, offering superior robustness to heavy-tailed noise through minimum divergence optimization while preserving scalability. Additionally, RPCA has been reframed through factor-model lenses in machine learning contexts, facilitating interpretable latent structure discovery in high-dimensional settings with outliers.

Background and Prerequisites

Classical Principal Component Analysis

Principal component analysis (PCA) is a statistical technique for dimensionality reduction that identifies orthogonal directions, called principal components, along which a dataset exhibits maximum variance. Introduced by Karl Pearson in 1901 as a method for finding lines and planes of closest fit to points in space, PCA transforms the original variables into a new set of uncorrelated variables ordered by the amount of variance they explain. To compute PCA, the data matrix \mathbf{X} \in \mathbb{R}^{m \times n}, where m is the number of features and n is the number of samples, is first centered by subtracting the mean of each feature to obtain \tilde{\mathbf{X}}. The sample covariance matrix is then calculated as \boldsymbol{\Sigma} = \frac{1}{n} \tilde{\mathbf{X}} \tilde{\mathbf{X}}^T. The eigendecomposition of \boldsymbol{\Sigma} yields eigenvectors \mathbf{V} and eigenvalues \boldsymbol{\Lambda} such that \boldsymbol{\Sigma} = \mathbf{V} \boldsymbol{\Lambda} \mathbf{V}^T, where the columns of \mathbf{V} are the principal components ordered by decreasing eigenvalue, representing the directions of maximum variance. The data is projected onto the top-k principal components by \mathbf{Y} = \mathbf{V}_k^T \tilde{\mathbf{X}}, retaining the subspace that explains the most variance. Equivalently, PCA can be performed via the singular value decomposition (SVD) \tilde{\mathbf{X}} = \mathbf{U} \mathbf{D} \mathbf{V}^T, where the left singular vectors in \mathbf{U} provide the principal components, enabling a low-rank approximation \tilde{\mathbf{X}} \approx \mathbf{U}_k \mathbf{D}_k \mathbf{V}_k^T by truncating to the top k singular values. PCA assumes that the data follows a linear structure with Gaussian-like noise and no significant outliers, relying on second-order statistics such as variance to identify meaningful patterns. Under these conditions, it effectively reduces dimensionality while preserving the primary sources of variation. However, PCA is sensitive to gross outliers and sparse corruptions, such as salt-and-pepper noise, which can bias the covariance matrix and lead to distorted principal components that misrepresent the underlying structure. Robust PCA addresses these limitations by decomposing the data into low-rank and sparse components to handle outliers more effectively.
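
The computation described above condenses into a short NumPy sketch (an illustrative example; the data matrix and component count are placeholders, not drawn from any particular source):

```python
import numpy as np

def classical_pca(X, k):
    """PCA of an (m features x n samples) matrix via the SVD route above."""
    Xc = X - X.mean(axis=1, keepdims=True)        # center each feature
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    U_k = U[:, :k]                                # top-k principal directions
    Y = U_k.T @ Xc                                # projected coordinates (k x n)
    explained = s[:k] ** 2 / np.sum(s ** 2)       # fraction of variance per PC
    return U_k, Y, explained

# Example: 50-dimensional samples lying near a 3-dimensional subspace.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 200))
_, _, explained = classical_pca(X, k=3)
print(explained)  # the three components capture essentially all variance
```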

Challenges with Outliers in Data

In real-world datasets, outliers manifest in various forms that compromise the assumptions underlying classical principal component analysis (PCA). These include gross errors arising from sensor failures or measurement inaccuracies, impulsive noise characterized by sudden, high-amplitude deviations, missing entries due to data acquisition issues, and adversarial corruptions introduced deliberately or through system tampering in high-dimensional settings. Such outliers deviate substantially from the bulk of the data, often appearing as isolated or sparsely distributed anomalies rather than systematic variations. The presence of these outliers severely undermines PCA's performance by inflating the estimated variance and skewing the directions of principal components toward the anomalous points. Since PCA relies on minimizing squared Euclidean distances to identify the subspace of maximum variance, even a small fraction of outliers can dominate the computation, leading to distorted low-dimensional representations that misalign with the true data structure. This distortion propagates errors into downstream tasks, such as clustering, where misaligned components group unrelated samples together, or forecasting, where inflated variance amplifies prediction uncertainty. Illustrative examples highlight these issues across domains. In video data, sudden lighting changes or shadows introduce sparse corruptions that act as outliers, pulling principal components away from the underlying motion patterns and degrading surveillance applications. Similarly, in genomics, rare mutations represent outliers in high-dimensional expression profiles, skewing PCA-based population stratification and obscuring genetic signals in ancestry or disease association studies. From a statistical viewpoint, classical PCA exhibits a breakdown point near zero, meaning it becomes unreliable with even minimal contamination (e.g., beyond 1% outliers), as a single gross error can arbitrarily alter the subspace. This contrasts sharply with robust estimators like the median, which maintain a high breakdown point approaching 50% by downweighting extremes, underscoring PCA's vulnerability in contaminated environments.
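
A tiny NumPy experiment with synthetic data (illustrative only) makes the near-zero breakdown point concrete: a single grossly corrupted sample rotates the leading principal direction by roughly 90 degrees.

```python
import numpy as np

rng = np.random.default_rng(1)
t = rng.standard_normal(200)
direction = np.array([1.0, 1.0]) / np.sqrt(2)
X = np.outer(direction, t)                 # clean 2 x 200 data on a line
X_corrupt = X.copy()
X_corrupt[:, 0] = [100.0, -100.0]          # a single gross outlier

def leading_direction(X):
    """First principal direction of the column samples in X."""
    Xc = X - X.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, 0]

print(leading_direction(X))          # ~ +/-[0.71, 0.71]: the true direction
print(leading_direction(X_corrupt))  # ~ +/-[0.71, -0.71]: rotated ~90 degrees
```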

Problem Formulation

Mathematical Decomposition Model

Robust principal component analysis (RPCA) seeks to decompose an observed matrix M \in \mathbb{R}^{m \times n} into a low-rank component L and a sparse component S such that M = L + S. Here, L captures the principal low-dimensional structure underlying the data, analogous to the output of classical PCA but robustified against corruptions, while S accounts for sparse outliers or gross errors. The low-rank assumption imposes \operatorname{rank}(L) \leq r, where r is a small integer relative to \min(m, n), ensuring L lies in a low-dimensional subspace. The sparse component S is entry-wise sparse, meaning it has at most \|S\|_0 \leq \alpha m n nonzero entries, with \alpha typically less than 0.05 to reflect that outliers affect only a small fraction of the data. To ensure recoverability, additional structural assumptions are placed on L and S: L must be incoherent, meaning its energy is not concentrated in few rows or columns, while the support of S is often assumed to be uniformly random, or adversarially placed but sufficiently spread out. Coherence measures quantify the incoherence of L. Writing the compact SVD as L = U \Sigma V^T with U \in \mathbb{R}^{m \times r} and V \in \mathbb{R}^{n \times r}, the joint (entrywise) coherence condition requires \max_{i,j} |(U V^T)_{ij}| \leq \sqrt{\mu_0 r / (m n)}, where \mu_0 is a small constant. Row coherence and column coherence are defined as \mu_r(L) = \frac{m}{r} \max_i \| U^T e_i \|_2^2 \leq \mu_0, \quad \mu_c(L) = \frac{n}{r} \max_j \| V^T e_j \|_2^2 \leq \mu_0, where e_i and e_j are standard basis vectors; these conditions prevent L from being row- or column-sparse. Variants of the model distinguish between exact and inexact recovery. In the exact case, the decomposition assumes M = L + S precisely, with L and S satisfying the above rank and sparsity conditions. The inexact variant relaxes this to M = L + S + E, where E is a small dense noise matrix with \|E\|_F \leq \epsilon for some \epsilon > 0, accommodating measurement errors in real-world data.
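
The coherence quantities above are straightforward to compute from a compact SVD; the following hedged NumPy sketch does so for an illustrative random low-rank matrix, which is incoherent with high probability.

```python
import numpy as np

def coherence(L, r):
    """Row, column, and joint coherence of a (numerically) rank-r matrix L."""
    m, n = L.shape
    U, _, Vt = np.linalg.svd(L, full_matrices=False)
    U, Vt = U[:, :r], Vt[:r, :]
    mu_row = (m / r) * np.max(np.sum(U ** 2, axis=1))     # max ||U^T e_i||^2
    mu_col = (n / r) * np.max(np.sum(Vt ** 2, axis=0))    # max ||V^T e_j||^2
    mu_joint = (m * n / r) * np.max(np.abs(U @ Vt)) ** 2  # entrywise condition
    return mu_row, mu_col, mu_joint

# Random low-rank matrices are incoherent with high probability.
rng = np.random.default_rng(2)
r = 5
L = rng.standard_normal((200, r)) @ rng.standard_normal((r, 300))
print(coherence(L, r))   # all three values are modest constants
```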

Optimization Objectives and Constraints

The principal optimization objective in robust principal component analysis (RPCA) seeks to decompose an observed matrix M \in \mathbb{R}^{m \times n} into a low-rank component L and a sparse component S by solving \min_{L,S} \operatorname{rank}(L) + \|S\|_0 subject to M = L + S. This formulation is NP-hard due to the combinatorial nature of the rank function and the \ell_0-norm, which counts the number of non-zero entries in S. To address this intractability, a convex relaxation replaces the rank with the nuclear norm \|L\|_*, defined as the sum of the singular values of L, and the \ell_0-norm with the \ell_1-norm \|S\|_1 = \sum_{i,j} |S_{i,j}|, yielding the principal component pursuit (PCP) problem: \min_{L,S} \|L\|_* + \lambda \|S\|_1 subject to M = L + S. The nuclear norm serves as a convex surrogate for the rank, promoting low-rank solutions, while the \ell_1-norm encourages sparsity in S. The regularization parameter \lambda > 0 balances the low-rank and sparsity penalties and is typically set to \lambda = 1 / \sqrt{\max(m,n)} to ensure compatibility with the matrix dimensions. Variants of this relaxation incorporate weighted \ell_1-norms to adaptively emphasize certain outliers or structured sparsity patterns, such as \|W \odot S\|_1 where W is a weight matrix. Group sparsity extensions replace the entrywise \ell_1-norm with group norms, like the \ell_{2,1}-norm \sum_g \|S_g\|_{2}, to capture correlated outliers across rows or columns. In the inexact case, where observations include additive noise such that M = L + S + N with \|N\|_F \leq \delta, the constraint relaxes to \|M - L - S\|_F \leq \delta, forming \min_{L,S} \|L\|_* + \lambda \|S\|_1 subject to this Frobenius norm bound. Equivalently, the Lagrangian form minimizes \|L\|_* + \lambda \|S\|_1 + \frac{\mu}{2} \|M - L - S\|_F^2 for some penalty parameter \mu > 0, allowing optimization without explicit constraints.
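
For small problems, the PCP relaxation can be posed directly in a modeling language; the sketch below uses CVXPY (assumed installed), whose normNuc atom expresses the nuclear norm. This is a didactic formulation rather than a scalable solver.

```python
import numpy as np
import cvxpy as cp

def pcp(M, lam=None):
    """Principal component pursuit as a convex program (small-scale sketch)."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    L = cp.Variable((m, n))
    S = cp.Variable((m, n))
    objective = cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S)))
    problem = cp.Problem(objective, [L + S == M])
    problem.solve()    # a conic solver such as SCS handles the nuclear norm
    return L.value, S.value
```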

Algorithms

Convex Relaxation Methods

Convex relaxation methods address the robust principal component analysis (RPCA) problem by solving the program \min_{L,S} \|L\|_* + \lambda \|S\|_1 subject to L + S = M, where M is the observed matrix, \| \cdot \|_* denotes the nuclear norm promoting low-rank structure in L, \| \cdot \|_1 is the entrywise \ell_1-norm encouraging sparsity in the outlier matrix S, and \lambda > 0 balances the terms. This formulation relaxes the non-convex rank-minimization and \ell_0-norm objectives into tractable surrogates, enabling polynomial-time solvers with provable recovery guarantees under conditions such as incoherence of the low-rank component and sufficient spread of the sparse support.

A principal algorithm is the Augmented Lagrange Multiplier (ALM) method, which reformulates the constrained problem using the augmented Lagrangian \mathcal{L}(L, S, Y, \mu) = \|L\|_* + \lambda \|S\|_1 + \langle Y, M - L - S \rangle + \frac{\mu}{2} \|M - L - S\|_F^2, where Y is the dual variable and \mu > 0 is the penalty parameter. The inexact ALM (IALM) variant proceeds by initializing L_0 = 0, S_0 = 0, Y_0 = 0, and \mu_0 > 0; then, in each iteration k, it computes the SVD U \Sigma V^T = M - S_k + \mu_k^{-1} Y_k and sets L_{k+1} = U \mathcal{S}_{\mu_k^{-1}}(\Sigma) V^T, where \mathcal{S}_{\tau}(\Sigma) applies soft-thresholding \max(\sigma_i - \tau, 0) to each singular value \sigma_i; updates S_{k+1} = \mathcal{S}_{\lambda / \mu_k}(M - L_{k+1} + \mu_k^{-1} Y_k) via entrywise soft-thresholding; sets Y_{k+1} = Y_k + \mu_k (M - L_{k+1} - S_{k+1}); and increases \mu_{k+1} = \rho \mu_k (\rho > 1) if the residual is insufficiently small, until convergence (e.g., \|M - L - S\|_F / \|M\|_F < 10^{-7}). Each iteration requires one SVD for the low-rank update, with computational complexity O(m n r) using a partial SVD for a rank-r approximation, or lower via randomized SVD techniques for scalability on large m \times n matrices.

The accelerated proximal gradient (APG) method provides an alternative, applying Nesterov-style momentum to a penalized formulation while handling the nonsmooth nuclear-norm and \ell_1 terms through their proximal operators (singular value thresholding and entrywise soft-thresholding, respectively). It initializes S_0 = 0, Y_0 = 0, \mu > 0, and iterates by computing L_{k+1} = \arg\min_L \|L\|_* + \frac{\mu}{2} \|L - (M - S_k + \mu^{-1} Y_k)\|_F^2 via singular value thresholding on the SVD of the residual, followed by S_{k+1} = \mathcal{S}_{\lambda / \mu}(M - L_{k+1} + \mu^{-1} Y_k) and Y_{k+1} = Y_k + \mu (M - L_{k+1} - S_{k+1}), often converging in tens of iterations for well-conditioned problems. The per-iteration cost mirrors ALM, dominated by the SVD, and benefits from the same low-rank approximations for efficiency.

Variants like the exact ALM (EALM) refine IALM by solving each subproblem to higher accuracy before updating the multipliers, ensuring Q-linear convergence to the global optimum of the convex program, which coincides with the true decomposition under the incoherence and sparsity conditions of the original RPCA recovery theorems. These methods are widely adopted for their simplicity, empirical speed (IALM is reported to be up to five times faster than APG on benchmark datasets), and theoretical robustness, making them suitable for applications requiring provably exact recovery.
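
The IALM iteration described above translates almost line-for-line into NumPy; the sketch below is a minimal implementation, with the dual initialization and the mu/rho/tolerance defaults chosen as common heuristics rather than canonical values.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise soft-thresholding operator S_tau."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def inexact_alm_rpca(M, lam=None, rho=1.5, tol=1e-7, max_iter=500):
    """Inexact ALM for principal component pursuit, per the iteration above.

    mu is increased every iteration here for simplicity.
    """
    M = np.asarray(M, dtype=float)
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    norm_M = np.linalg.norm(M, 'fro')
    spec = np.linalg.norm(M, 2)                      # largest singular value
    Y = M / max(spec, np.max(np.abs(M)) / lam)       # dual variable init
    S = np.zeros_like(M)
    mu = 1.25 / spec
    for _ in range(max_iter):
        # Low-rank update: singular value thresholding of M - S + Y/mu.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: entrywise soft-thresholding.
        S = soft_threshold(M - L + Y / mu, lam / mu)
        # Dual ascent and penalty increase.
        R = M - L - S
        Y = Y + mu * R
        mu = rho * mu
        if np.linalg.norm(R, 'fro') / norm_M < tol:
            break
    return L, S
```

In practice, the full SVD in each iteration is replaced by a partial or randomized SVD once the target rank stabilizes.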

Non-Convex Optimization Techniques

Non-convex optimization techniques for robust principal component analysis (RPCA) replace the convex nuclear norm and \ell_1-norm surrogates with non-convex alternatives that more closely approximate the rank and \ell_0-norm objectives, enabling tighter relaxations at the expense of global optimality guarantees. These methods often leverage smooth or piecewise penalties to promote low-rank structure and sparsity while facilitating efficient iterative solvers. For instance, the log-determinant penalty serves as a non-convex surrogate for rank minimization, defined as \log \det (X + \epsilon I) for small \epsilon > 0, which penalizes small singular values more aggressively than the nuclear norm. This approach has been integrated into RPCA formulations to recover low-rank components from corrupted data by solving smoothed optimization problems that avoid the computational overhead of full singular value decompositions (SVDs) in each iteration.

To induce sparsity in the outlier matrix, penalties such as the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD) penalty are employed as non-convex surrogates for the \ell_0-norm. The MCP, introduced by Zhang (2010), is given by \rho_{\gamma,\lambda}(x) = \lambda \int_0^{|x|} \left(1 - \frac{t}{\gamma\lambda}\right)_+ dt, where \gamma > 1 and \lambda > 0, offering unbiasedness, sparsity, and continuity properties that improve on the \ell_1-norm in thresholded recovery. Similarly, SCAD, proposed by Fan and Li (2001), uses \rho_{\lambda,a}(x) = \lambda |x| for |x| \leq \lambda, \frac{2a\lambda|x| - x^2 - \lambda^2}{2(a-1)} for \lambda < |x| \leq a\lambda, and the constant \frac{(a+1)\lambda^2}{2} otherwise (with a > 2), providing an approximation that reduces bias on large coefficients. In RPCA, these penalties are applied element-wise to the sparse component, often combined with non-convex rank surrogates like the \gamma-norm, \|X\|_\gamma = \sum_i \psi_{1,\gamma}(\sigma_i(X)), where \psi_{1,\gamma}(t) grows linearly for small t and saturates for large t, yielding formulations solved via majorization-minimization schemes.

Algorithms for these non-convex RPCA problems typically rely on alternating minimization, which iteratively optimizes one of the low-rank and sparse components while fixing the other. A prominent example is the alternating projections method (AltProj) of Netrapalli et al. (2014), which uses hard thresholding for sparsity and rank-r projections via truncated SVD for the low-rank part, building up the target rank in stages. Such methods are often warm-started from a spectral initialization or a convex relaxation solution and then apply descent-like updates to escape suboptimal points. More generally, inexact alternating minimization schemes, such as those of Hintermüller and Wu (2015), solve subproblems approximately using proximal operators tailored to the non-convex penalties, achieving local linear convergence under incoherence assumptions on the true low-rank matrix. These methods often require good initialization to mitigate trapping in local minima. Compared to convex relaxation methods, non-convex techniques offer efficiency gains, particularly for large-scale or high-rank problems, often converging faster in practice. For example, alternating minimization avoids full SVD computations by factorizing the low-rank matrix, lowering per-iteration cost from O(mn \min(m,n)) to O(r mn), where r is the target rank, making these methods scalable to matrices with dimensions exceeding 10^4 \times 10^4.

However, the primary challenge remains sensitivity to local minima, which can lead to suboptimal decompositions when initialization is poor; this is often addressed by multi-stage refinement or ensembles of starting points from convex baselines. Despite these risks, empirical results on video surveillance datasets demonstrate improved performance over convex methods.
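
The flavor of these alternating schemes can be conveyed in a few lines; the following simplified NumPy sketch alternates hard thresholding with a fixed-rank SVD projection. The decaying threshold schedule is an illustrative heuristic, not the stage schedule of any published method.

```python
import numpy as np

def nonconvex_rpca(M, r, max_iter=50):
    """Alternating hard thresholding and rank-r SVD projection (a sketch)."""
    m, n = M.shape
    beta = 1.0 / np.sqrt(max(m, n))
    L = np.zeros_like(M, dtype=float)
    S = np.zeros_like(L)
    for k in range(max_iter):
        # Sparse step: keep only the large entries of the residual.
        thresh = beta * np.linalg.norm(M - L, 2) * (0.7 ** k + 0.05)
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)
        # Low-rank step: project M - S onto rank-r matrices (truncated SVD).
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :r] * sig[:r]) @ Vt[:r, :]
    return L, S
```

Replacing the truncated SVD with updates to an explicit factorization L = AB^T is what yields the O(rmn) per-iteration cost noted above.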

Learning-Augmented Approaches

Learning-augmented approaches to robust principal component analysis (RPCA) integrate deep neural networks to enhance the adaptability and performance of traditional optimization-based methods, enabling data-driven decomposition of low-rank structures from sparse corruptions. These frameworks replace hand-crafted proximal operators, such as the thresholding or shrinkage steps in augmented Lagrange multiplier (ALM) solvers, with learnable neural modules, allowing the model to adapt to complex, input-specific patterns during inference. For instance, the deep-unfolded spatiotemporal RPCA network (DUST-RPCA) unfolds ALM iterations into convolutional layers with neural proximal operators that enforce spatial and temporal regularization via graph Laplacians and reweighted masks, improving robustness in dynamic scenes like video surveillance. A prominent class of these methods involves unfolded networks, where optimization iterations are unrolled into sequential neural layers for end-to-end training on task-specific datasets. RPCA-Net, introduced in 2020, unfolds an iterative solver for reference-based RPCA into a multi-layer network, using adaptive convolutional proximal operators to handle temporal correlations in video data; it is trained with a supervised reconstruction loss on synthetic sequences like moving MNIST, achieving superior foreground-background separation compared to prior deep methods. Similarly, the learned robust PCA (LRPCA) framework from 2021 parameterizes a non-convex RPCA solver as a feedforward-recurrent network, learning iteration-specific parameters to accelerate convergence and minimize false positives in outlier detection, with applications in high-dimensional tasks such as video denoising. These unfolded architectures bridge model-based priors with data-driven learning, reducing computational overhead while maintaining interpretability through explicit ties to optimization steps. Recent advances from 2023 to 2025 have incorporated transformer architectures to address sequential and multi-dimensional data, enhancing the modeling of long-range dependencies in corruptions. The deep unfolding tensor RPCA network (DU-TRPCA), proposed in 2025, integrates a top-K sparse transformer within its encoder to enforce sparsity priors on tensor decompositions, alternating with thresholded t-SVD for low-rank estimation; this enables effective denoising of hyperspectral image sequences under mixed noise, outperforming traditional RPCA variants with state-of-the-art PSNR gains on datasets like ICVL. Complementing this, self-supervised paradigms have emerged that learn sparsity priors without labeled data, as in a deep unfolded tensor RPCA model from 2023, which optimizes four hyperparameters via an \ell_1-based self-supervision loss on reconstruction errors, achieving over 90% error reduction in video and related tensor recovery tasks by implicitly capturing sparse outlier patterns. These learning-augmented techniques offer key benefits, including robustness to non-i.i.d. corruptions, such as clustered outliers in real-world data, through adaptive thresholding learned from data distributions, and the ability to infer task-specific low-rank structures, like motion-consistent backgrounds in videos, surpassing fixed-optimization baselines in both accuracy and efficiency. By leveraging end-to-end training, they facilitate deployment in data-scarce environments while retaining ties to the theoretical guarantees of RPCA through unfolding.
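
A minimal PyTorch sketch of the unfolding idea replaces the hand-set soft-threshold \lambda/\mu of an ALM-style iteration with a trainable parameter. This is a didactic layer under stated assumptions, not any of the published architectures named above.

```python
import torch
import torch.nn as nn

class UnrolledRPCALayer(nn.Module):
    """One unrolled PCP-style iteration with a learnable sparsity threshold."""

    def __init__(self, svt_tau=1.0):
        super().__init__()
        self.theta = nn.Parameter(torch.tensor(0.1))  # learnable threshold
        self.svt_tau = svt_tau                        # fixed SVT level here

    def forward(self, M, S):
        # Low-rank update: singular value thresholding of M - S.
        U, sig, Vh = torch.linalg.svd(M - S, full_matrices=False)
        sig = torch.clamp(sig - self.svt_tau, min=0.0)
        L = U @ torch.diag_embed(sig) @ Vh
        # Sparse update: soft-thresholding with a learned threshold.
        R = M - L
        S = torch.sign(R) * torch.clamp(R.abs() - self.theta.abs(), min=0.0)
        return L, S
```

A full unrolled network stacks several such layers, each with its own parameters, and trains end-to-end against ground-truth (L, S) pairs or a self-supervised reconstruction loss.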

Theoretical Foundations

Exact Recovery Guarantees

Exact recovery guarantees in robust principal component analysis (RPCA) establish conditions under which the convex optimization problem known as Principal Component Pursuit (PCP) uniquely recovers the true low-rank matrix L_0 and sparse outlier matrix S_0 from their sum M = L_0 + S_0 in the noiseless case. These guarantees rely on structural assumptions that ensure the low-rank and sparse components do not overlap significantly, allowing the nuclear norm and \ell_1 norm penalties to separate them effectively. The seminal result, due to Candès, Li, Ma, and Wright, provides probabilistic recovery with high probability under mild conditions on the rank, incoherence, and sparsity. For a square n \times n matrix, the key theorem states that if L_0 has rank r \leq \rho_r n \mu^{-1} (\log n)^{-2} and S_0 has at most \rho_s n^2 nonzero entries, for sufficiently small positive constants \rho_r and \rho_s, then PCP recovers L_0 and S_0 exactly with high probability (at least 1 - c n^{-10} for some constant c > 0) by solving \min_{L,S} \|L\|_* + \lambda \|S\|_1 subject to L + S = M, with \lambda = 1/\sqrt{n}. The result extends to rectangular m \times n matrices with n replaced by \max(m,n) and \lambda = 1/\sqrt{\max(m,n)}, and it assumes the support of S_0 is uniformly random, ensuring the corruptions are well-distributed. Incoherence conditions play a crucial role in preventing the low-rank component from concentrating energy in few rows or columns, which could mimic sparsity. Specifically, letting L_0 = U \Sigma V^T be the compact singular value decomposition with U \in \mathbb{R}^{m \times r} and V \in \mathbb{R}^{n \times r}, the conditions are:
  • \max_{i=1,\dots,m} \|U^T e_i\|_2^2 \leq \mu r / m,
  • \max_{j=1,\dots,n} \|V^T e_j\|_2^2 \leq \mu r / n,
  • \|U V^T\|_\infty \leq \sqrt{\mu r / (m n)},
where e_i are standard basis vectors and \|\cdot\|_\infty is the entrywise maximum norm. These bounds, with \mu \geq 1, quantify how "spread out" the singular vectors are, avoiding alignment with the sparse support of S_0 and enabling exact separation. The rank condition ensures the effective complexity \mu r is sufficiently small relative to the dimensions. The proof relies on constructing a dual certificate to verify that (L_0, S_0) is the unique optimizer of PCP. This involves finding matrices W_L and W_S such that the subgradient optimality condition holds: U V^T + W_L = \lambda (\operatorname{sgn}(S_0) + W_S), where P_T(W_L) = 0 (W_L is orthogonal to the tangent space T of L_0), \|W_L\|_2 < 1, P_{\Omega}(W_S) = 0 (W_S vanishes on the support \Omega of S_0), and \|W_S\|_\infty < 1. The construction uses a "golfing scheme" to build the low-rank certificate via iterative least-squares corrections on random subsets of the support complement, leveraging concentration bounds for randomly sampled projections to ensure \|P_{\Omega} P_T\| \leq 1/2 with high probability. For W_S, a least-squares fit on \Omega ensures the entrywise bound. This approach guarantees strict dual feasibility and uniqueness. The primary assumption is that the support \Omega of S_0 has cardinality p = |\Omega| and is chosen uniformly at random (or via a Bernoulli sampling model), which facilitates probabilistic concentration. Extensions to deterministic supports are provided via a de-randomization argument: for any fixed \Omega satisfying a mild "non-concentration" condition (no row or column of S_0 exceeds \sqrt{(p / (m n)) \log(m n)} in \ell_1 norm), exact recovery holds by fixing the signs of S_0 and applying a union bound over possible sign patterns, though with a slightly weaker probability. These guarantees underscore RPCA's ability to handle adversarially placed but not overly concentrated outliers.
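
These guarantees are easy to probe numerically; the hedged experiment below builds a random incoherent low-rank matrix plus uniformly supported gross corruptions and checks the relative recovery error, reusing the inexact_alm_rpca sketch from the algorithms section.

```python
import numpy as np

# Synthetic check of exact recovery: random incoherent low-rank L0 plus
# a uniformly random ~5% support of gross corruptions S0.
rng = np.random.default_rng(3)
n, r, frac = 200, 5, 0.05
L0 = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
S0 = np.zeros((n, n))
mask = rng.random((n, n)) < frac
S0[mask] = rng.uniform(-10.0, 10.0, size=mask.sum())
L_hat, S_hat = inexact_alm_rpca(L0 + S0)
err = np.linalg.norm(L_hat - L0, 'fro') / np.linalg.norm(L0, 'fro')
print(err)  # typically very small (e.g., below 1e-4), matching the theory
```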

Stability and Approximation Bounds

In practical scenarios, observed data matrices frequently include additive noise beyond the low-rank and sparse components, motivating the noisy model M = L + S + E, where E denotes a noise matrix satisfying \|E\|_F \leq \epsilon. Stability analysis in robust principal component analysis (RPCA) focuses on quantifying the approximation errors \|\hat{L} - L\|_F and \|\hat{S} - S\|_F for the recovered components \hat{L} and \hat{S}, typically obtained by solving principal component pursuit with the relaxed constraint \|M - L - S\|_F \leq \epsilon. These analyses extend the noise-free exact recovery guarantees to realistic settings, revealing how errors propagate from noise and model misspecification. A foundational stability result establishes that, under the same incoherence and sparsity conditions as the noiseless theory, the solution satisfies \|\hat{L} - L\|_F^2 + \|\hat{S} - S\|_F^2 \leq C\, m n\, \epsilon^2 with high probability, where m and n are the matrix dimensions and C is a universal constant; that is, the recovery error of both components degrades proportionally to the noise level. These results demonstrate uniform recovery up to a noise level proportional to \epsilon, provided the underlying components satisfy the necessary structural conditions. Chandrasekaran et al. (2011) further elucidate phase transitions via a notion of rank-sparsity incoherence, mapping the parameter space of rank and sparsity where stable recovery succeeds with high probability, even in the presence of noise. These transitions highlight sharp boundaries beyond which approximation errors degrade significantly, influenced by the interplay of the incoherence \mu, the rank r, and the sparsity \alpha (the fraction of nonzero entries in S). Specifically, recovery is stable when r \lesssim n / (\mu (\log n)^2) and the corrupted fraction \alpha remains below a constant threshold that shrinks as \mu and r grow, with probabilistic guarantees under random corruption supports outperforming worst-case deterministic bounds that demand stricter parameter separation. The dependence on these factors underscores RPCA's robustness limits: higher incoherence \mu or rank r tightens the allowable sparsity \alpha, while probabilistic analyses relax requirements by averaging over typical data realizations. In practice, these bounds imply failure thresholds where recovery breaks down, such as when corruption exceeds approximately 20% of entries, leading to error amplification beyond the noise floor and rendering the decomposition unreliable for downstream tasks.

Applications

Computer Vision and Surveillance

In video surveillance systems, robust principal component analysis (RPCA) is widely applied to background subtraction by decomposing video frames into a low-rank matrix L representing the static or slowly varying background and a sparse matrix S capturing moving objects such as pedestrians or vehicles. This decomposition facilitates tasks like moving-object detection and crowd monitoring, where the low-rank structure models the inherent redundancy of static scenes, while the sparse component isolates dynamic foreground elements. Seminal work demonstrated that under suitable conditions, such as incoherent low-rank matrices and sufficiently sparse corruptions, RPCA recovers the true components exactly, enabling reliable foreground detection in cluttered environments. For real-time applications, such as crowd monitoring in public spaces, online variants of RPCA process streaming video frames incrementally, avoiding the computational burden of batch methods and achieving processing rates suitable for live feeds. Frame-wise decomposition applies RPCA to individual or small batches of frames, treating each vectorized frame as a column of the data matrix, which supports efficient foreground extraction even at high resolutions (see the sketch below). To maintain temporal consistency across frames (preventing flickering in detected objects), reweighted RPCA techniques iteratively refine the sparsity penalty, promoting smoother trajectories for moving foregrounds in videos. In face recognition under occlusion, RPCA recovers robust subspaces from corrupted face images by separating the low-rank principal components of clean facial structure from sparse corruptions like masks or shadows, outperforming traditional PCA (eigenfaces) on benchmark datasets. Evaluations on the AR dataset, which includes over 4,000 images of 126 subjects with real-world occlusions such as sunglasses and scarves, show RPCA achieving recognition accuracies up to 20% higher than PCA in severely occluded scenarios, highlighting its utility in security surveillance. Performance in these vision tasks is typically assessed using precision and recall for foreground masks, with 2010s benchmarks on datasets like CDnet revealing RPCA-based methods attaining average F1-scores above 0.85 in dynamic scenes, establishing their effectiveness for practical deployment over non-robust alternatives.
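
The frame-wise pipeline reduces to reshaping the video into a pixels-by-frames matrix; the sketch below assumes a grayscale frames array and reuses the inexact_alm_rpca sketch from earlier, with an ad-hoc mask threshold.

```python
import numpy as np

def background_subtraction(frames):
    """Frame-wise RPCA background subtraction (illustrative pipeline).

    `frames` is an assumed (T x H x W) grayscale array; each frame becomes
    one column of the pixels-by-frames matrix M.
    """
    T, H, W = frames.shape
    M = frames.reshape(T, H * W).T.astype(float)     # (H*W) x T matrix
    L, S = inexact_alm_rpca(M)                       # background + foreground
    background = L.T.reshape(T, H, W)
    foreground_mask = np.abs(S.T.reshape(T, H, W)) > 3.0 * S.std()
    return background, foreground_mask
```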

Signal Processing and Face Recognition

In signal processing, robust principal component analysis (RPCA) facilitates source separation by decomposing observed signals into a low-rank component representing the clean or structured signal and a sparse component capturing artifacts or outliers. For instance, in audio processing, RPCA separates singing voices from musical accompaniment in monaural recordings, modeling the repetitive music structure as the low-rank part and the sparser vocal elements as the error term, achieving signal-to-distortion improvements of approximately 1-1.4 dB over prior methods without requiring training data. Similarly, in electrocardiogram (ECG) analysis, RPCA detects and removes motion artifacts by treating the underlying cardiac signal as low-rank and the artifacts, such as those from low-frequency noise (0-20 Hz), as sparse corruptions, enabling 100% detection accuracy across varying signal-to-noise ratios in benchmark datasets like MIT-BIH. This decomposition enhances signal quality for downstream tasks like arrhythmia detection, with error magnitudes scaling predictably with noise bandwidth. In face recognition, RPCA addresses challenges from shadows and occlusions by modeling these corruptions as sparse errors in the observation matrix, allowing recovery of the low-rank facial structure for robust identification. Shadows, arising from illumination variations, and occlusions, such as disguises or blockages, are isolated in the sparse component via nuclear norm minimization of the low-rank part and \ell_1-norm penalization of the error, enabling descriptor-based recognition that combines sparsity (non-zero element count) and smoothness (gradient magnitude) metrics. This approach integrates with sparse coding frameworks, where test images are represented as sparse linear combinations of training samples plus error terms, treating occlusions as an additional "class" via \ell_1-minimization, which theoretically handles up to 33% corruption and empirically yields 98.5% accuracy at 30% contiguous occlusion. On the Extended Yale B dataset, RPCA-based methods extracting sparse errors achieve 95.06% recognition accuracy under severe illumination variations, demonstrating resilience to up to 40% effective corruption through weighted sparsity-smoothness fusion. Specific algorithms like those using augmented Lagrange multipliers for RPCA have been adapted for dynamic face scenarios, such as video sequences with expression changes, by iteratively refining low-rank alignments to handle temporal corruptions. Evaluations on the Extended Yale B dataset under 40% random pixel corruption report approximately 95% accuracy, outperforming standard PCA by isolating outliers without retraining. However, scalability remains a key challenge for high-resolution signals, as the computational cost of singular value thresholding grows cubically with matrix dimensions, prompting dictionary-coupled variants that downsample inputs while preserving recovery guarantees.

Biomedical and Financial Data Analysis

In biomedical data analysis, robust principal component analysis (RPCA) is particularly valuable for handling noisy gene expression datasets, where the low-rank component captures underlying biological signals and the sparse component isolates experimental errors or outliers. For instance, RPCA has been applied to RNA-Seq data to accurately detect outlier samples that could skew downstream analyses, such as differential expression studies, by decomposing the data matrix into clean and corrupted parts. Similarly, in microarray gene expression profiles, RPCA treats differentially expressed genes as sparse perturbations recoverable from the low-rank background, enabling reliable identification of biologically relevant genes responsive to stimuli like abiotic stress. This decomposition enhances the robustness of analyses in high-dimensional, noisy environments typical of genomics. RPCA further supports cancer subtype clustering using datasets like The Cancer Genome Atlas (TCGA), where it processes expression profiles to separate tumor-specific patterns from noise, facilitating more accurate clustering of samples into subtypes such as those in esophageal carcinoma (ESCA) or head and neck squamous cell carcinoma (HNSC). In tumor classification tasks, RPCA identifies characteristic genes and features for subsequent models like support vector machines, demonstrating effectiveness across multiple cancer datasets by mitigating the impact of outliers on clustering performance. Compared to standard PCA, RPCA yields superior results in noisy conditions, with studies showing higher clustering accuracy rates (e.g., improved F1 measures and Rand indices) on TCGA data for single- and multi-cancer scenarios. In functional magnetic resonance imaging (fMRI) and related modalities, RPCA addresses background removal by decomposing spatiotemporal data into a low-rank component representing the structured signal and a sparse component capturing motion artifacts or other corruptions. This approach effectively eliminates RF spike artifacts and motion influences without relying on predefined templates, restoring quantitative metrics like fractional anisotropy (FA) and mean diffusivity (MD) in diffusion tensor imaging (DTI) to levels consistent with artifact-free data. For example, in DTI scans, RPCA-based despiking reduced FA discrepancies and aligned results with healthy norms, outperforming traditional thresholding methods in versatility across cardiac, renal, and neural imaging. Turning to financial data analysis, RPCA aids in anomaly detection by modeling low-rank trends in asset prices and sparse outliers representing events like market crashes. This decomposition allows for the identification of irregular patterns amid heavy-tailed returns, improving the detection of deviations from normal market behavior. In portfolio optimization, RPCA contributes to robust covariance estimation under approximate factor models, such as the Fama-French framework applied to equity returns, where it minimizes the influence of outliers in daily returns to yield more stable risk assessments. Real-world applications on 2005–2013 data show that RPCA-based estimators produce smaller risk errors compared to sample covariance methods, especially under gross-exposure constraints, enhancing overall optimization reliability. Recent studies have extended RPCA-inspired techniques to cryptocurrency markets, where volatile on-chain data is decomposed to detect abnormal transaction volumes linked to market shifts or anomalous activities. For instance, an on-chain analysis of transaction datasets used an RPCA-like model to flag deviations in transfer volumes, capturing volatility-driven anomalies with high precision by separating low-rank regular flows from sparse irregularities.

These applications underscore RPCA's utility in financial settings prone to outliers, leading to more resilient pattern discovery and risk management.

Extensions and Variants

Online and Streaming RPCA

Online and streaming robust principal component analysis (RPCA) extends the batch RPCA framework to handle data that arrives sequentially, enabling real-time decomposition and subspace tracking without requiring the full data matrix to be stored in memory. In this setting, the goal is to incrementally decompose an incoming data stream into a low-rank component L and a sparse component S, adapting to evolving subspaces while minimizing a combination of rank and sparsity penalties over time. This is particularly useful for high-dimensional streams where batch methods become computationally prohibitive due to their O(m n r) complexity, with m and n denoting the matrix dimensions and r the rank. The core model involves processing data column-by-column or in small batches, updating estimates of L and S such that each new column d_t \in \mathbb{R}^m satisfies d_t \approx l_t + s_t, where l_t lies in a low-rank subspace and s_t is sparse. The objective is to minimize an online surrogate for the batch RPCA problem, often formulated as \min \sum_t \|l_t\|_* + \lambda \|s_t\|_1, subject to d_t = l_t + s_t, but solved via incremental approximations to avoid recomputing the entire decomposition. Algorithms maintain a time-varying subspace basis U_t \in \mathbb{R}^{m \times r} for the low-rank part, updating it as new data arrives while identifying and isolating sparse corruptions. Seminal algorithms build on online principal component analysis (PCA) methods like GROUSE, which performs incremental gradient descent on the Grassmannian to track subspaces from incomplete observations with lightweight rank-one updates. This is extended to robust settings in GRASTA, which incorporates \ell_1-norm penalties to handle sparse outliers, using the alternating direction method of multipliers (ADMM) for the sparsity estimation and Grassmannian gradient updates for the subspace, achieving real-time performance (e.g., 57 frames per second in video background subtraction experiments). Another influential approach is ORPCA, which uses stochastic block-coordinate descent on a factorized nuclear-norm formulation to process one sample at a time, converging to the batch RPCA solution under suitable boundedness assumptions. To address concept drift or abrupt changes, windowed batching techniques like OMWRPCA maintain a sliding window of recent samples (e.g., size 200) for local RPCA solves, incorporating change-point detection via hypothesis testing on the sizes of the estimated sparse supports, restarting estimates when needed. These methods offer significant efficiency gains, with per-update complexity typically O(m r^2) or lower (versus batch O(m n r)) and memory requirements scaling as O(m r), independent of the stream length n. They find applications in resource-constrained environments like sensor networks for online anomaly detection and monitoring, where data arrives continuously and full storage is infeasible. Theoretical guarantees include subspace-tracking error bounds that decay as O(1/t) for static subspaces and remain bounded under slow temporal variations (e.g., subspace angle changes slower than the step size), provided outlier sparsity is below an incoherence-dependent threshold.
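
A heavily simplified per-column update illustrates the structure shared by these methods; the sketch below uses a plain gradient step with re-orthonormalization in place of GRASTA's Grassmannian geodesic update, and the step size and threshold are illustrative.

```python
import numpy as np

def online_rpca_step(U, d, lam=0.1, step=0.1):
    """Process one column d given the current orthonormal basis U (m x r).

    Soft-threshold the projection residual to estimate outliers, then nudge
    the basis toward the cleaned column via a rank-one gradient-style update.
    """
    w = np.linalg.lstsq(U, d, rcond=None)[0]           # subspace coefficients
    r = d - U @ w                                      # projection residual
    s = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)  # sparse outliers
    l = U @ w                                          # low-rank part of column
    U = U + step * np.outer(r - s, w)                  # rank-one basis update
    U, _ = np.linalg.qr(U)                             # re-orthonormalize
    return U, l, s
```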

Tensor and Multi-Modal RPCA

Tensor robust principal component analysis (TRPCA) extends the matrix-based RPCA framework to higher-order tensors, enabling the decomposition of multi-dimensional data \mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d} into a low-rank component \mathcal{L} and a sparse outlier component \mathcal{S}, such that \mathcal{T} = \mathcal{L} + \mathcal{S}. The low-rank component \mathcal{L} is typically modeled with a low Tucker rank, defined via the multilinear ranks of its mode-k unfoldings, or alternatively using the tensor singular value decomposition (t-SVD) under the t-product algebra, which leverages the discrete Fourier transform along the tube fibers for efficient computation. Low-rank modeling can also employ the canonical polyadic (CP) decomposition, though Tucker- and t-SVD-based approaches dominate due to their balance of expressiveness and tractability. This formulation addresses the limitations of matrix RPCA by capturing multi-way correlations inherent in tensor-structured data, such as videos or multi-spectral images, without flattening into matrices that lose structural information. Key methods for TRPCA include t-SVD-based optimization, as introduced by Zhang et al., which minimizes the tensor nuclear norm (the sum of the nuclear norms of the frontal slices in the Fourier domain) to recover \mathcal{L} and \mathcal{S} exactly under incoherence conditions. Algorithms often employ alternating minimization or augmented Lagrange multipliers to solve the resulting nonsmooth optimization, with proximal operators adapted to tensor norms to ensure scalability; a sketch of the key proximal step appears below. These approaches outperform matrix RPCA in preserving higher-order dependencies, particularly in noisy multi-view settings. Applications of TRPCA include hyperspectral image denoising, where the spectral-spatial tensor is decomposed to separate low-rank clean signals from sparse noise like stripes or dead lines, achieving superior peak signal-to-noise ratios compared to matrix methods on datasets like Indian Pines. In multi-view clustering, TRPCA facilitates robust subspace learning by stacking complementary views into a shared tensor and enforcing low-rank consistency across views, enabling accurate partitioning of incomplete or noisy multi-modal data, as demonstrated on benchmarks like the Extended Yale B face dataset with improved normalized mutual information scores. Despite these advances, TRPCA faces challenges from increased computational cost in higher dimensions, with t-SVD requiring O(n^3 \log n) time for the FFTs plus per-slice SVDs in each iteration, though this is balanced by multilinear algebra's ability to exploit tensor structure and parallelism for practical efficiency. Recent developments as of 2025 include nonconvex regularizers for improved recovery guarantees in TRPCA and extensions to functional data via robust functional variants.
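
The central computational primitive of t-SVD-based TRPCA, singular value thresholding under the t-product, is compact in NumPy; the sketch below glosses over normalization conventions, which vary across papers.

```python
import numpy as np

def tsvd_svt(T, tau):
    """Tensor singular value thresholding under the t-product (a sketch).

    FFT along the third mode, soft-threshold the singular values of each
    frontal slice in the Fourier domain, then invert the FFT. This is the
    proximal step of the tensor nuclear norm used in t-SVD-based TRPCA.
    """
    Tf = np.fft.fft(T, axis=2)
    out = np.empty_like(Tf)
    for k in range(T.shape[2]):
        U, s, Vh = np.linalg.svd(Tf[:, :, k], full_matrices=False)
        out[:, :, k] = (U * np.maximum(s - tau, 0.0)) @ Vh
    return np.real(np.fft.ifft(out, axis=2))
```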

Implementation Resources

Software Libraries and Tools

Several software libraries provide implementations of robust principal component analysis (RPCA), supporting various optimization algorithms such as alternating minimization and augmented Lagrange multipliers (ALM). These tools are available in multiple programming languages and cater to different computational needs, from basic decompositions to accelerated variants for large-scale data. In MATLAB, the inexact ALM solver, originally developed in conjunction with the seminal RPCA work, offers a reference implementation for decomposing a matrix into low-rank and sparse components via Principal Component Pursuit. This code, distributed through academic repositories, includes scripts for both exact and inexact ALM under the standard RPCA assumptions and has been widely adopted for prototyping. Additionally, the SPGL1 package serves as a solver for the \ell_1-minimization subproblems arising in RPCA formulations, enabling efficient sparse outlier detection through sparse least-squares optimization. SPGL1 integrates seamlessly with MATLAB's ecosystem and supports large-scale problems through efficient projection operations. Python libraries extend RPCA accessibility with scikit-learn-compatible interfaces. The PyRPCA package implements core RPCA algorithms, including batch processing for low-rank and sparse matrix separation, and provides utilities for loading and visualizing results in formats compatible with MATLAB files. It follows a fit-transform API similar to scikit-learn's decomposition modules, facilitating integration into machine learning pipelines. For tensor-based variants, TensorLy includes a robust tensor PCA module that decomposes higher-order tensors into low-multilinear-rank and sparse components using ALM, with built-in support for missing data imputation (see the usage sketch below). This feature is particularly useful for multi-dimensional applications, and recent updates have improved scalability for video and hyperspectral data. In R, the robpca function from the rospca package performs robust sparse PCA, an extension of RPCA that incorporates \ell_1-penalization for sparsity while resisting outliers through projection-pursuit techniques. This implementation, part of R's robust statistics ecosystem, includes reweighting options for skewed data and is optimized for multivariate analysis in statistical workflows. The cellPCA package provides robust principal components with casewise and cellwise weights for handling outliers in high-dimensional data. For low-level acceleration, the PROPACK library provides efficient partial singular value decomposition (SVD) routines essential for RPCA iterations, supporting sparse and structured matrices with up to 10x speedups over full SVD on large inputs. PROPACK is often wrapped in higher-level languages for RPCA solvers, enhancing convergence in large-scale settings. Some enterprise analytics platforms also include a Robust Principal Component Analysis node for integration into visual workflows, supporting outlier detection and data cleaning as of 2025.
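
As a usage illustration, TensorLy exposes robust tensor PCA through a single function call; the sketch below is hedged, since argument names and defaults may differ across TensorLy versions, and the tensor is synthetic.

```python
import numpy as np
from tensorly.decomposition import robust_pca

rng = np.random.default_rng(4)
X = rng.standard_normal((30, 30, 20))   # an illustrative 3-way tensor
low_rank, sparse = robust_pca(X)        # low-rank and sparse parts
print(low_rank.shape, sparse.shape)     # both match X.shape
```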

Datasets and Evaluation Benchmarks

In robust principal component analysis (RPCA), standard datasets for experimentation come from computer vision, biomedical signal processing, and financial domains, often with controlled corruptions to simulate real-world noise, occlusions, or anomalies. Surveillance video datasets featuring static backgrounds and dynamic foregrounds, such as CDnet, serve as key benchmarks for separating low-rank scene structures from sparse moving objects. Similarly, the Extended Yale B dataset, which contains 2,414 face images of 38 subjects captured under nine poses and 64 illumination conditions, is extensively used to test RPCA's performance in recovering low-rank facial structure amid corruptions like random occlusions or illumination variations at sparsity levels of 10% to 50%. These datasets enable evaluation of RPCA's ability to decompose matrices into low-rank and sparse components while preserving essential features. For signal processing tasks, datasets from the UCI Machine Learning Repository, such as the Arrhythmia dataset comprising 452 electrocardiogram (ECG) instances with 279 attributes, provide benchmarks for applying RPCA to denoise signals, impute missing values, or detect anomalies in biomedical time series. In financial applications, historical stock data, including multivariate time series of prices and volumes for major indices, is used to benchmark RPCA for anomaly detection, where sparse outliers represent market irregularities like shocks or crashes. Evaluation metrics for RPCA focus on recovery fidelity and detection accuracy. A primary metric is the relative Frobenius-norm recovery error for the low-rank component: \frac{\|L - \hat{L}\|_F}{\|L\|_F}, where L denotes the ground-truth low-rank matrix and \hat{L} the RPCA estimate; lower values indicate successful decomposition under incoherence assumptions. For image and video recovery, the peak signal-to-noise ratio (PSNR) assesses reconstruction quality, with higher values (e.g., above 30 dB) signaling effective outlier removal. In anomaly detection scenarios, the area under the receiver operating characteristic curve (ROC-AUC) quantifies discrimination performance, where values closer to 1 reflect superior separation of anomalies from background. Key benchmarks include phase transition plots, which map the empirical success rate of exact RPCA recovery against varying matrix rank r (typically 1–10% of the dimension) and sparsity \rho (10–30% of entries), revealing sharp boundaries where convex optimization succeeds with high probability for incoherent low-rank matrices.
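
The two recovery-quality metrics defined above amount to a few lines of NumPy; the helpers below follow common conventions (e.g., the PSNR peak value is a parameter) rather than any fixed standard.

```python
import numpy as np

def recovery_error(L_true, L_hat):
    """Relative Frobenius-norm recovery error for the low-rank component."""
    return np.linalg.norm(L_hat - L_true, 'fro') / np.linalg.norm(L_true, 'fro')

def psnr(clean, recovered, peak=255.0):
    """Peak signal-to-noise ratio in dB for image/video recovery evaluation."""
    mse = np.mean((clean - recovered) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```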

References

  1. [1]
    [PDF] Robust Principal Component Analysis?
    Dec 17, 2009 · Robust PCA recovers low-rank and sparse components using Principal Component Pursuit, even with corrupted or missing data, using a weighted ...
  2. [2]
    Robust principal component analysis? | Journal of the ACM
    Jun 9, 2011 · This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component.
  3. [3]
    The Augmented Lagrange Multiplier Method for Exact Recovery of ...
    Sep 26, 2010 · This paper proposes scalable and fast algorithms for solving the Robust PCA problem, namely recovering a low-rank matrix with an unknown fraction of its ...
  4. [4]
    [PDF] EE227 Project Report Robust Principal Component Analysis
    May 10, 2012 · This section will review some applications using Robust PCA. Currently, most of the applications are related to computer vision. As we will ...
  5. [5]
    [PDF] Robust Principal Component Analysis? - Columbia University
    This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component.
  6. [6]
    Principal component analysis: a review and recent developments
    Apr 13, 2016 · (c) Robust principal component analysis. By its very nature, PCA is sensitive to the presence of outliers and therefore also to the presence ...
  7. [7]
    [0912.3599] Robust Principal Component Analysis? - arXiv
    Dec 18, 2009 · This paper proposes Principal Component Pursuit to recover low-rank and sparse components, even with corrupted or missing data, enabling robust ...
  8. [8]
    Robust principal component analysis: A factorization-based ...
    In this paper, we propose a new factorization-based RPCA model, which is equivalent to the traditional convex RPCA under some mild conditions. We develop an ...
  9. [9]
    Robust Principal Component Analysis using Density Power ...
    We introduce a novel robust PCA estimator based on the minimum density power divergence estimator. This combines the theoretical strength of the M-estimators.Missing: RPCA Balasubramanian et al
  10. [10]
    Robust Density Power Divergence Estimates for Panel Data Models
    Aug 5, 2021 · In this study, we propose a minimum density power divergence estimation procedure for panel data regression models with random effects to achieve robustness ...
  11. [11]
    [PDF] pearson1901.pdf
    Pearson, K. 1901. On lines and planes of closest fit to systems of points in space. Philosophical Magazine 2:559-572.
  12. [12]
    [PDF] arXiv:1404.1100v1 [cs.LG] 3 Apr 2014
    Apr 3, 2014 · Principal component analysis (PCA) is a standard tool in mod- ern data analysis - in diverse fields from neuroscience to com- puter graphics ...
  13. [13]
    [PDF] Ensemble Principal Component Analysis - arXiv
    Nov 3, 2023 · Two well-known limitations of the method include sensitivity to outliers and noise and no clear methodology for the uncertainty ...
  14. [14]
    [PDF] Outlier-Robust PCA: The High Dimensional Case
    The robust estimate is typically obtained either by an outlier rejection procedure, subsampling (including “leave-one-out” and related approaches) or by a ...
  15. [15]
    [PDF] robust pca methods for complete and missing data - NNW
    A rather simple and straightforward method of reducing the effect of outliers and impulsive noise in the data is to suppress large elements in the data vectors ...Missing: gross | Show results with:gross
  16. [16]
    [PDF] Robust PCA via Outlier Pursuit - arXiv
    Dec 31, 2010 · We emphasize, however, that the inability to do this simply via the convex optimization step, poses significant technical challenges, as we ...<|control11|><|separator|>
  17. [17]
    On rare variants in principal component analysis of population ...
    Mar 17, 2020 · The PCA of population stratification performs worse with rare variants than with common ones. It is necessary to restrict the selection to only the common ...
  18. [18]
    Robust subspace methods for outlier detection in genomic data ...
    Feb 5, 2020 · Outlier data are often identified by considering the probability density of normal data and comparing data likelihoods against some threshold.Missing: video | Show results with:video
  19. [19]
    [PDF] Robust principal component analysis and outlier detection with ...
    An important parameter in dealing with outliers is the breakdown point. This measures the ability of an estimation method to identify the unbiased estimates ...
  20. [20]
    Robust PCA with adaptive weighting for sparse outlier suppression
    However, PCA assumes that the data is clean and follows a Gaussian distribution, making it highly sensitive to noise and outliers [3]. This assumption poses a ...
  21. [21]
    [PDF] Nonconvex Relaxation Approaches to Robust Matrix Recovery - IJCAI
    Based on these three properties, Fan and Li proposed a new penalty function called the smoothly clipped absolute deviation penalty (SCAD). Recently, Zhang ...
  22. [22]
    Summary of Non-Convex Optimization Techniques for Robust PCA
  23. [23]
    [PDF] Efficient Optimization Algorithms for Robust Principal Component ...
    In this paper we review existing optimization methods for solving convex and nonconvex relaxations/variants of robust PCA, discuss their advantages and ... (A minimal solver sketch in this spirit appears after this reference list.)
  24. [24]
  25. [25]
    [PDF] A Deep-Unfolded Spatiotemporal RPCA Network For L+S ... - arXiv
    Nov 6, 2022 · The proximal operator of DUST-RPCA, on the other hand, adapts for every input at each layer of the network, making it more robust to the varying ...
  26. [26]
    [PDF] A Deep-Unfolded Reference-Based RPCA Network For Video ...
    This paper proposes a new deep-unfolding-based network design for the problem of Robust Principal Component Analysis (RPCA) with application to video.
  27. [27]
    [PDF] Learned Robust PCA: A Scalable Deep Unfolding ... - NIPS papers
    Robust principal component analysis (RPCA) is a critical tool in modern machine learning, which detects outliers in the task of low-rank matrix ...
  28. [28]
    Summary of Transformer Use in DU-TRPCA for Hyperspectral Image Denoising
  29. [29]
    Summary of Self-Supervised Deep Unfolded Tensor RPCA
  30. [30]
  31. [31]
    [PDF] arXiv:0906.2220v1 [math.OC] 11 Jun 2009
    Jun 11, 2009 · We develop a notion of rank-sparsity incoherence, expressed as an uncertainty principle between the sparsity pattern of a matrix and its row and ...
  32. [32]
    Robust PCA via Principal Component Pursuit: A review for a ...
    This work aims to initiate a rigorous and comprehensive review of RPCA-PCP based methods for testing and ranking existing algorithms for foreground detection.
  33. [33]
    [PDF] An Efficient Face Recognition Algorithm Based on Robust Principal ...
    We use the occluded face images in the AR database to test the performance of different algorithms. Some cropped images are shown in Figure 4. These occluded ...
  34. [34]
    [PDF] Extension of Robust Principal Component Analysis for Incremental ...
    We use the occluded face images in the AR database to test the performance of different algorithms. In this experiment, 70 subjects are selected for the dataset ...
  35. [35]
    Block-Sparse RPCA for Consistent Foreground Detection
    Our method is able to obtain crisply defined foreground regions, and in general, handles large dynamic background motion much better.
  36. [36]
    Summary of GRASTA as an Extension of GROUSE to Robust PCA
  37. [37]
    [PDF] Online Identification and Tracking of Subspaces from Highly ... - arXiv
    Jul 12, 2011 · In this paper, we present GROUSE (Grassmannian Rank-One Update Subspace Estimation), a subspace identification and tracking algorithm that ...
  38. [38]
    [PDF] Online Robust Principal Component Analysis with Change Point ...
    Mar 20, 2017 · Abstract. Robust PCA methods are typically batch algorithms which require loading all observations into memory before processing.
  39. [39]
    Tensor Robust Principal Component Analysis with A New ... - arXiv
    Apr 10, 2018 · In this paper, we consider the Tensor Robust Principal Component Analysis (TRPCA) problem, which aims to exactly recover the low-rank and sparse components ...
  40. [40]
    [PDF] Novel methods for multilinear data completion and de-noising based ...
    In this paper we propose novel methods for completion (from limited samples) and de-noising of multilinear (tensor) data and as an application consider ...
  41. [41]
    [PDF] Tensor Robust Principal Component Analysis with a New ...
    This paper considers the Tensor Robust Principal Component Analysis (TRPCA) problem, which aims to exactly recover the low-rank and sparse components from ...
  42. [42]
  43. [43]
    Simultaneous denoising and moving object detection using low rank ...
    The OS-RPCA [28] algorithm is a graph regularized algorithm which helps to separate background and foreground components from RGB-D videos. In this method, low- ...
  44. [44]
    Tensor Robust Principal Component Analysis via Non-Convex Low ...
    In this paper, different from current approaches, we developed a new t-Gamma tensor quasi-norm as a non-convex regularization to approximate the low-rank ...
  45. [45]
    Hyperspectral image denoising using the robust low-rank tensor ...
    In this paper, we generalize the RPCA to the denoising of the tensor Y based on the following two aspects. First, according to the previous work in [17], the ...
  46. [46]
    [PDF] Low-Rank Tensor Constrained Multiview Subspace Clustering
    In this paper, we explore the problem of multiview subspace clustering. We introduce a low-rank tensor constraint to explore the complementary ...
  47. [47]
    SPGL1: A solver for sparse least squares - Michael P. Friedlander
    SPGL1 is an open-source Matlab solver for sparse least-squares. It is designed to solve any one of these three problem formulations.
  48. [48]
    weilinear/PyRPCA: Robust PCA in Python - GitHub
    Robust PCA in Python. Provides .mat files for each algorithm that are directly readable from MATLAB.
  49. [49]
    tensorly.decomposition.robust_pca
    Robust Tensor PCA via ALM with support for missing values. Decomposes a tensor X into the sum of a low-rank component D and a sparse component E ...
  50. [50]
    robpca ROBust PCA algorithm - RDocumentation
    ROBPCA algorithm of Hubert et al. (2005) including reweighting (Engelen et al., 2005) and possible extension to skewed data (Hubert et al., 2009).
  51. [51]
    PROPACK - Software for large and sparse SVD calculations
    May 16, 2012 · The software package PROPACK contains a set of functions for computing the singular value decomposition of large and sparse or structured matrices.
  52. [52]
    [PDF] Fast and Provable Nonconvex Tensor RPCA
    In this paper, we study nonconvex tensor robust principal component analysis (RPCA) based on the t-SVD. We first propose an alternating pro- ...
  53. [53]
    [PDF] Efficient Robust Principal Component Analysis via Block Krylov ...
    Based on the estimated rank, CUR decomposition is adopted to replace SVD in updating the low-rank matrix component, whose complexity reduces from O(rnd) to O(r ...
  54. [54]
    [PDF] Robust Principal Component Analysis? - People @EECS
    This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component.
  55. [55]
    Illumination-robust face recognition system based on differential ...
    Sep 27, 2012 · Performance evaluation of the proposed system was performed using an extended Yale face database B which consists of 2,414 face images for 38 ...
  56. [56]
    Missing Value Estimation Methods Research for Arrhythmia ... - NIH
    Jun 21, 2020 · The experimental results on the UCI database indicate that the “RPCA-based method” can successfully handle missing values in arrhythmia dataset ...
  57. [57]
    Unsupervised real-time anomaly detection for streaming data
    Nov 1, 2017 · Examples from industry include Netflix's robust principal component analysis (RPCA) method [7] and Yahoo's EGADS [8], both of which require ...
  58. [58]
    [PDF] Flexible Generalized Low-Rank Regularizer for Tensor RPCA - IJCAI
    Evaluation metrics: the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used to evaluate the recovery performance ...
  59. [59]
    Hyperspectral Anomaly Detection Based on Improved RPCA with ...
    Mar 10, 2022 · We introduced the detection map, anomaly background separability map, ROC curve, and AUC value to carry out a qualitative and quantitative ...
  60. [60]
    Rope-net: deep convolutional neural network via robust principal ...
    May 22, 2025 · To overcome this problem, in this paper, we propose an effective compression approach of DCNNs by robust principal component analysis (RPCA) on ...
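
The solvers surveyed in [23], and implemented in packages such as [48] and [49], share a simple augmented-Lagrangian template: alternate singular value thresholding for the low-rank update with entrywise soft-thresholding for the sparse update, while a Lagrange multiplier enforces L + S = M. The sketch below is a minimal NumPy illustration of that principal component pursuit iteration, not the implementation of any cited package; the names pcp_alm and shrink, the default weight λ = 1/√max(m, n), and the μ initialization and growth heuristics are conventional choices assumed here.

```python
import numpy as np

def shrink(X, tau):
    """Entrywise soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def pcp_alm(M, lam=None, tol=1e-7, max_iter=500, rho=1.5):
    """Principal component pursuit: min ||L||_* + lam * ||S||_1 s.t. L + S = M,
    solved by an augmented-Lagrangian (ADMM-style) iteration."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n)) if lam is None else lam
    mu = (m * n) / (4.0 * np.abs(M).sum())  # common initialization heuristic
    mu_max = mu * 1e7
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # Lagrange multiplier for the constraint L + S = M
    norm_M = np.linalg.norm(M, "fro")
    for _ in range(max_iter):
        # L-update: singular value thresholding at level 1/mu.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-update: soft-thresholding at level lam/mu.
        S = shrink(M - L + Y / mu, lam / mu)
        # Dual ascent on the constraint residual, then grow the penalty.
        R = M - L - S
        Y += mu * R
        mu = min(rho * mu, mu_max)
        if np.linalg.norm(R, "fro") <= tol * norm_M:
            break
    return L, S

# Synthetic check: a rank-5 matrix corrupted in ~5% of its entries.
rng = np.random.default_rng(0)
L_true = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
S_true = np.where(rng.random((100, 80)) < 0.05,
                  10.0 * rng.standard_normal((100, 80)), 0.0)
L_hat, S_hat = pcp_alm(L_true + S_true)
```

Growing μ geometrically, as inexact-ALM implementations commonly do, typically reduces the iteration count substantially compared with a fixed penalty; on data of this kind the recovered pair (L_hat, S_hat) matches the ground truth to within the tolerance when the rank and corruption fraction are small enough.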