Low-rank approximation

Low-rank approximation is a fundamental technique in numerical linear algebra that seeks to represent a given matrix A \in \mathbb{R}^{m \times n} by a matrix B of lower rank k \ll \min(m, n), minimizing the approximation error \|A - B\| under a specified matrix norm, such as the Frobenius or spectral norm. This approach reduces the dimensionality and redundancy of the data while preserving its essential structure, making it computationally efficient for large-scale matrices. The optimal low-rank approximation in the Frobenius and spectral norms is achieved via the truncated singular value decomposition (SVD) of A, where B is formed by retaining only the top k singular values and corresponding singular vectors, as established by the Eckart–Young–Mirsky theorem. The concept traces back to the seminal work of Eckart and Young in 1936, who formalized the problem and proved the optimality of the SVD-based approximation, later extended by Mirsky to other unitarily invariant norms.

Traditional methods for computing low-rank approximations include the full SVD, which is exact but O(\min(mn^2, m^2n)) in complexity, and iterative techniques like Lanczos or orthogonal iterations for partial decompositions. For massive datasets where an exact decomposition is infeasible, randomized algorithms have become prominent since the early 2000s; these project the matrix onto a low-dimensional random subspace and approximate its range before applying a deterministic factorization, achieving near-optimal accuracy with linear or near-linear complexity. Such methods, including randomized range finders and power iterations, are particularly effective for matrices with rapidly decaying singular values.

Low-rank approximations underpin numerous applications across scientific computing, machine learning, and data analysis. In principal component analysis (PCA), they enable dimensionality reduction by identifying dominant variance directions in high-dimensional data, such as gene expression profiles in bioinformatics. They facilitate image and signal compression by discarding small singular values, as seen in JPEG-like schemes, and support recommendation systems via matrix factorization, where user-item interactions are modeled as low-rank structures. In large-scale simulations, dynamical low-rank methods approximate solutions to matrix differential equations, accelerating computations in quantum physics and related fields. Ongoing research focuses on robust variants for noisy data, weighted approximations, and streaming models to handle real-time processing.

Fundamentals

Definition

In linear algebra, the rank of a matrix A \in \mathbb{R}^{m \times n} is defined as the dimension of the column space of A, which is equivalently the maximum number of linearly independent columns (or rows) in A. A matrix has full rank if its rank equals the minimum of m and n; otherwise, it is low-rank. For example, a diagonal matrix with only the first k < \min(m,n) diagonal entries nonzero and the rest zero has rank k, illustrating a structured low-rank form where most entries are zero. Low-rank approximation seeks a matrix B \in \mathbb{R}^{m \times n} of rank at most k that closely approximates a given matrix A \in \mathbb{R}^{m \times n}, typically by minimizing the error \|A - B\| under a chosen matrix norm. This process reduces the effective dimensionality of A while preserving essential information, as low-rank matrices often capture underlying patterns in data. Common norms include the Frobenius norm, defined as \|A\|_F = \sqrt{\sum_{i=1}^m \sum_{j=1}^n a_{ij}^2}, which measures the Euclidean length of the matrix viewed as a vector, and the spectral norm \|A\|_2, the largest singular value of A, which quantifies the maximum stretch induced by A as a linear operator. A simple illustrative example is the rank-1 approximation of a high-rank matrix A, where B = uv^T for vectors u \in \mathbb{R}^m and v \in \mathbb{R}^n, representing an outer product that aligns with the dominant direction in A; this minimizes the approximation error in the spectral or Frobenius norm when u and v are scaled versions of the leading left and right singular vectors of A.

Historical Development

The term "matrix" was coined by in 1850, who defined it as an oblong arrangement of terms, laying early groundwork for matrix theory and understanding linear independence in systems of equations. Sylvester later defined the nullity of a matrix in 1884, contributing to the . The concept of matrix rank was first formally defined by in 1878, in the context of canonical forms, as the largest order of a non-vanishing minor of the matrix. A pivotal advancement came with the introduction of singular value decomposition (SVD), discovered independently by Eugenio Beltrami in 1873 and Camille Jordan in 1874, which provided a canonical factorization enabling practical computations of low-rank approximations by decomposing matrices into orthogonal components scaled by singular values. This decomposition became essential for truncating matrices to lower ranks while preserving key structural information. In 1936, Carl Eckart and Gale Young established the optimality of SVD-based approximations in their theorem, proving that the best low-rank approximation minimizes error in both the spectral and . Leon Mirsky extended this result in 1960, generalizing it to all unitarily invariant norms, broadening the theorem's applicability across various matrix approximation problems. The advent of big data in the early 2000s spurred renewed interest in scalable low-rank methods, with Halko, Martinsson, and Tropp's 2011 framework introducing randomized algorithms for efficient computations on large matrices, achieving near-optimal approximations with reduced computational cost. More recently, from 2023 onward, low-rank approximations have integrated deeply with machine learning, particularly through for neural networks; for instance, works at explored low tensor rank learning for neural dynamics to model efficient recurrent networks, while 2024 contributions advanced activation map compression via tensor methods to enhance on-device deep learning.

Applications

Dimensionality Reduction and PCA

Low-rank approximation forms the foundation of principal component analysis (PCA), an unsupervised method for dimensionality reduction that uncovers the principal directions of variance in a dataset. By approximating the covariance matrix of centered data with a low-rank matrix through truncated singular value decomposition (SVD), PCA identifies principal components as the right singular vectors corresponding to the largest singular values, allowing projection of high-dimensional data onto a lower-dimensional subspace while preserving key statistical structure.

The procedure starts with centering the data matrix X \in \mathbb{R}^{m \times n} (with m samples and n features) by subtracting the column means to obtain \tilde{X}. The SVD of \tilde{X} is then \tilde{X} = U \Sigma V^T, where U and V are orthogonal matrices, and \Sigma contains the singular values \sigma_1 \geq \sigma_2 \geq \cdots \geq 0 on its diagonal. For a rank-k approximation (k \ll n), the truncated SVD retains only the top-k components: \tilde{X}_k = U_k \Sigma_k V_k^T, where U_k and \Sigma_k are the first k columns/values of U and \Sigma, and V_k the first k columns of V; the columns of V_k serve as the principal components for data projection. The approximation error is assessed via the explained variance ratio, which captures the fraction of total data variance retained by the top-k components: \frac{\sum_{i=1}^k \sigma_i^2}{\|\tilde{X}\|_F^2}, where \|\tilde{X}\|_F^2 = \sum_{i=1}^{\min(m,n)} \sigma_i^2 is the squared Frobenius norm of the centered data, equivalent to the total variance scaled by the number of samples. This metric guides the selection of k, ensuring substantial information retention, such as 95% of variance, without over-reduction.

In practice, this approach excels in feature extraction from image datasets; for example, applying PCA to the MNIST dataset—where each 28×28 grayscale image is flattened into a 784-dimensional vector—can reduce dimensionality to around 150 components while explaining over 95% of the variance, effectively mapping images to a low-rank subspace that highlights dominant patterns like stroke orientations for downstream analysis. A key limitation of PCA via low-rank approximation is its sensitivity to outliers, as the emphasis on maximizing variance allows extreme observations to skew the principal components away from the data's core variability, potentially distorting the subspace; this issue underscores the need for robust low-rank methods that mitigate outlier influence.
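
A minimal sketch of this procedure (assuming NumPy; the data matrix here is a random placeholder) centers the data, truncates the SVD, and reports the explained variance ratio:
python
import numpy as np

def pca_truncated_svd(X, k):
    """Project X (m samples x n features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                       # center each feature (column)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Vk = Vt[:k].T                                 # principal components (n x k)
    scores = Xc @ Vk                              # projected data (m x k)
    explained = np.sum(s[:k]**2) / np.sum(s**2)   # explained variance ratio
    return scores, Vk, explained

X = np.random.rand(200, 50)                       # placeholder data matrix
scores, components, ratio = pca_truncated_svd(X, k=10)
print(f"Explained variance ratio: {ratio:.3f}")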

Matrix Factorization in Machine Learning

In recommender systems, low-rank approximation is widely applied through matrix factorization to model user-item interactions, approximating a sparse user-item rating matrix R \in \mathbb{R}^{m \times n} as R \approx U V^T, where U \in \mathbb{R}^{m \times r} and V \in \mathbb{R}^{n \times r} are low-rank factor matrices with r \ll \min(m, n), capturing latent factors such as user preferences and item attributes. This decomposition enables prediction of missing entries by computing \hat{r}_{ij} = u_i^T v_j, facilitating personalized recommendations in collaborative filtering.

A prominent variant is non-negative matrix factorization (NMF), which imposes non-negativity constraints on the factors to ensure interpretable, parts-based representations suitable for recommendation tasks. NMF solves the optimization problem \min_{U, V \geq 0} \| R - U V^T \|_F^2, using multiplicative update rules that iteratively refine U and V while preserving non-negativity, as derived from Karush-Kuhn-Tucker conditions. These rules, such as u_{ia} \leftarrow u_{ia} \frac{(R V)_{ia}}{(U V^T V)_{ia}} and v_{ja} \leftarrow v_{ja} \frac{(R^T U)_{ja}}{(V U^T U)_{ja}}, converge to a stationary point and have been applied to factorize rating matrices for uncovering additive latent features like genre preferences.

In the Netflix Prize competition (2006–2009), low-rank matrix factorization models excelled by capturing latent factors in a massive user-movie rating matrix, achieving up to 10% improvement in prediction accuracy over neighborhood methods through factorization into user and movie latent spaces. For instance, models with 50–100 latent dimensions modeled subtle patterns like user mood or movie genres, contributing to the winning solution's root-mean-square error of approximately 0.8567 on the test set. Scalability challenges arise with large datasets, where full matrix storage and computation become prohibitive; this has led to the adoption of alternating least squares (ALS) for optimization, which alternately fixes one factor matrix and solves a least-squares problem for the other via closed-form solutions or regularization. ALS handles implicit feedback and sparsity efficiently in distributed settings, as implemented in frameworks like Apache Spark MLlib, reducing training time on billion-scale matrices. Weighted variants of these formulations address missing data by assigning lower weights to unobserved entries, as explored in related problem settings.

Recent extensions integrate low-rank approximations into graph neural networks (GNNs) for link prediction, where node embeddings from GNN layers approximate low-rank structures of the adjacency matrix, improving prediction of edges in social or biological networks by incorporating structural priors. Another prominent development is low-rank adaptation (LoRA), introduced in 2021, which approximates weight updates in large language models with low-rank matrices to enable efficient fine-tuning. By injecting trainable low-rank decomposition layers into transformer blocks, LoRA reduces the number of trainable parameters by orders of magnitude (e.g., from billions to millions) while maintaining performance comparable to full fine-tuning, and as of 2025, variants like QLoRA continue to advance parameter-efficient training for massive models.
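
The multiplicative updates above can be sketched in a few lines of NumPy (a simplified illustration; the small epsilon guarding against division by zero and the random initialization are implementation choices, not part of the original formulation):
python
import numpy as np

def nmf_multiplicative(R, r, n_iters=200, eps=1e-10):
    """Approximate a non-negative matrix R (m x n) as U @ V.T with U, V >= 0."""
    m, n = R.shape
    rng = np.random.default_rng(0)
    U = rng.random((m, r)) + eps
    V = rng.random((n, r)) + eps
    for _ in range(n_iters):
        U *= (R @ V) / (U @ (V.T @ V) + eps)      # update U with V fixed
        V *= (R.T @ U) / (V @ (U.T @ U) + eps)    # update V with U fixed
    return U, V

R = np.random.rand(30, 20)                        # placeholder non-negative "ratings"
U, V = nmf_multiplicative(R, r=5)
print(np.linalg.norm(R - U @ V.T, 'fro'))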

Signal and Image Processing

In signal and image processing, low-rank approximation exploits the inherent redundancy in media data, where signals and images often exhibit low effective rank due to smooth variations or structural similarities, enabling efficient compression and noise removal. For image compression, truncated SVD is applied to the pixel matrix of an image, retaining only the dominant singular values and corresponding vectors to form a low-rank approximation that captures essential visual features while discarding minor components. This approach achieves lossy compression ratios, for instance, reducing storage by factors of 10 to 100 for typical grayscale images while maintaining perceptual quality.

A key application is denoising, where a noisy observation A = A_{\text{clean}} + N (with N as additive noise) is approximated by solving the optimization problem \min_{B} \| A - B \|_F \quad \text{subject to} \quad \rank(B) \leq k, which recovers a low-rank matrix B close to the clean signal by minimizing the error (as detailed in the theoretical foundations). This method leverages the assumption that the clean image has lower rank than the noise-corrupted version, effectively separating structured content from random perturbations. In practice, such techniques applied to grouped similar patches yield denoised images with improved peak signal-to-noise ratio (PSNR), often gaining 0.1 dB or more at high noise levels (e.g., \sigma = 100) compared to baseline filters.

In video processing, low-rank approximation facilitates background subtraction in surveillance footage by modeling video frames as columns of a matrix, decomposing it into a low-rank component representing the static background and a sparse component capturing moving foreground objects. Seminal work using robust principal component analysis solves this via convex relaxation, enabling real-time detection even under illumination changes or partial occlusions. Historical developments in the 2000s extended low-rank methods to wavelet domains for enhanced compression, incorporating elements akin to those in the JPEG2000 standard, which uses discrete wavelet transforms to exploit multi-resolution low-rank structures for superior rate-distortion performance.
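
As a rough sketch of SVD-based compression (assuming NumPy; the "image" here is a synthetic gradient rather than a real photograph), rank-k truncation of the pixel matrix gives the compressed reconstruction and an estimate of the storage saving:
python
import numpy as np

def compress_image_svd(img, k):
    """Return the rank-k SVD approximation of a grayscale image matrix."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    img_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
    m, n = img.shape
    ratio = (m * n) / (k * (m + n + 1))           # full storage vs. stored factors
    return img_k, ratio

img = np.outer(np.linspace(0, 1, 256), np.linspace(0, 1, 256)) + 0.05 * np.random.rand(256, 256)
img_k, ratio = compress_image_svd(img, k=20)
rel_err = np.linalg.norm(img - img_k) / np.linalg.norm(img)
print(f"Compression ratio ~{ratio:.1f}x, relative error {rel_err:.3e}")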

Problem Formulations

Basic Low-Rank Approximation

The basic low-rank approximation problem involves finding a matrix B \in \mathbb{R}^{m \times n} of rank at most k that minimizes the error relative to a given matrix A \in \mathbb{R}^{m \times n}, expressed as \min_{\rank(B) \leq k} \|A - B\|, where the norm is typically the Frobenius norm \|M\|_F = \sqrt{\sum_{i,j} m_{ij}^2} or the spectral norm \|M\|_2 (the largest singular value of M). This formulation arises in contexts requiring dimensionality reduction while preserving essential structure, such as data compression or noise reduction.

For the Frobenius norm, the optimal B_k is obtained via the truncated SVD of A. If the full SVD is A = U \Sigma V^T with singular values \sigma_1 \geq \sigma_2 \geq \cdots \geq 0, then B_k = \sum_{i=1}^k \sigma_i u_i v_i^T, where u_i and v_i are the corresponding left and right singular vectors. This equivalence holds because the Frobenius norm is unitarily invariant, aligning the error minimization with retaining the largest singular components. The Eckart–Young–Mirsky theorem establishes the optimality of this solution (detailed in the Theoretical Foundations section). A similar result applies to the spectral norm, where the truncated SVD also yields the best approximation. Computing the full SVD, necessary for the exact solution, has a time complexity of O(\min(m n^2, m^2 n)) for an m \times n matrix using standard algorithms like the Golub-Reinsch method. This cost dominates practical applications for large matrices, motivating approximate methods in subsequent sections.

To illustrate, consider the 3×3 matrix A = \begin{pmatrix} 3 & 1 & 1 \\ 1 & 3 & 1 \\ 1 & 1 & 3 \end{pmatrix}. Its SVD yields singular values 5, 2, and 2. The rank-1 approximation is B_1 = \frac{5}{3} \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}, with Frobenius error \|A - B_1\|_F = \sqrt{8} \approx 2.83, capturing about 75% of the total "energy" (trace of A^T A). Although the rank constraint \rank(B) \leq k renders the optimization non-convex, for the Frobenius and spectral norms it is efficiently solvable via the truncated SVD. The problem is NP-hard in general for other objectives, such as entry-wise L_p norms with p \neq 2 (as discussed in later subsections), complicating direct solutions beyond the SVD approach. A common convex relaxation replaces the rank constraint with the nuclear norm \|B\|_* = \sum_i \sigma_i(B), leading to tractable semidefinite programming formulations for related problems like matrix completion.
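
The 3×3 example can be verified with a few lines of NumPy (a quick numerical check, not part of the formal derivation):
python
import numpy as np

A = np.array([[3., 1., 1.],
              [1., 3., 1.],
              [1., 1., 3.]])
U, s, Vt = np.linalg.svd(A)
print(s)                                   # [5. 2. 2.]
B1 = s[0] * np.outer(U[:, 0], Vt[0])       # rank-1 truncation, equals (5/3) * ones(3,3)
print(np.linalg.norm(A - B1, 'fro'))       # ~2.828 = sqrt(8)
print(s[0]**2 / np.sum(s**2))              # ~0.758 of the total "energy"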

Weighted and Entry-wise L_p Variants

The weighted low-rank approximation problem extends the standard formulation by incorporating a non-negative weight matrix W to account for varying entry importance in the input matrix A. It is defined as \min_{B: \rank(B) \leq k} \|W \odot (A - B)\|_F, where \odot denotes the entry-wise (Hadamard) product and \|\cdot\|_F is the Frobenius norm. This allows the method to handle heterogeneous data, such as downweighting noisy entries or setting weights to zero for missing observations, effectively generalizing matrix completion tasks. The weighted variant is NP-hard even for rank-one approximations, necessitating heuristic or approximation algorithms for practical computation.

Entry-wise L_p variants further adapt the error measure by using the entry-wise L_p norm instead of the Frobenius norm, formulated as \min_{B: \rank(B) \leq k} \left( \sum_{i,j} |a_{ij} - b_{ij}|^p \right)^{1/p} for p \neq 2. When p=1, the objective promotes sparsity in the residual matrix, enhancing robustness to outliers and sparse corruptions compared to L_2-based methods. This makes L_1 low-rank approximation particularly useful for applications involving impulsive noise or heavy-tailed errors, though it introduces computational challenges due to non-differentiability. The L_1 entry-wise low-rank problem is NP-hard, even in the rank-one case, via a reduction from the MAX CUT problem that exploits the combinatorial nature of minimizing absolute deviations under rank constraints. Approximations often rely on iterative reweighted least squares (IRLS), which solves a sequence of surrogate weighted L_2 problems to progressively approximate the L_1 objective, achieving local convergence guarantees under conditions like the null space property. For instance, harmonic mean IRLS variants optimize related Schatten-p quasi-norms (0 < p < 1) that encourage low-rank solutions while handling L_1-like sparsity.

A practical example of weighted low-rank approximation occurs in wireless sensor networks, where incomplete data matrices arise from sensor faults or transmission losses; weights are assigned based on fault probabilities to recover the low-rank signal subspace via maximum likelihood estimation. Recent developments (2015–2025) have advanced L_1 low-rank techniques for robust PCA in outlier-heavy datasets, such as adaptive weighted least squares integrated with low-rank factorization, which reduces bias and improves recovery accuracy in applications like anomaly detection and image denoising.
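
A common heuristic for the weighted problem (not an exact solver, since the exact problem is NP-hard) alternates weighted least-squares solves for the two factors of B = U V^T. The sketch below assumes NumPy, a binary weight matrix marking observed entries, and a small ridge term added purely for numerical stability:
python
import numpy as np

def weighted_lowrank_als(A, W, k, n_iters=50, reg=1e-3):
    """Heuristic for min ||W * (A - U V^T)||_F via alternating weighted least squares."""
    m, n = A.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, k))
    V = rng.standard_normal((n, k))
    for _ in range(n_iters):
        for i in range(m):                        # weighted solve for each row of U
            Wi = np.diag(W[i])
            U[i] = np.linalg.solve(V.T @ Wi @ V + reg * np.eye(k), V.T @ Wi @ A[i])
        for j in range(n):                        # weighted solve for each row of V
            Wj = np.diag(W[:, j])
            V[j] = np.linalg.solve(U.T @ Wj @ U + reg * np.eye(k), U.T @ Wj @ A[:, j])
    return U, V

A = np.random.rand(40, 30)
W = (np.random.rand(40, 30) < 0.7).astype(float)  # ~30% of entries treated as missing
U, V = weighted_lowrank_als(A, W, k=5)
print(np.linalg.norm(W * (A - U @ V.T)))          # weighted residual on observed entries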

Distance and Robust Variants

Distance-based low-rank approximation seeks to minimize the distance between a given matrix A and a low-rank matrix B of rank at most k, formulated as \min_{B: \rank(B) \leq k} \dist(A, B), where \dist is a suitable metric on matrices or their associated subspaces. This approach shifts focus from entry-wise errors to geometric properties, such as the alignment of subspaces spanned by the column spaces of A and B. Common metrics include the subspace distance based on principal angles, defined as the maximum angle \theta between the subspaces or the Frobenius norm \|\sin \Theta\|_F, where \Theta denotes the matrix of principal angles; these measure how closely the low-rank approximation captures the dominant directions of A. In certain applications, the Mahalanobis distance is employed to weight directions within the subspace differently, incorporating covariance structure for more robust subspace alignment, particularly in metric learning tasks.

A prominent robust variant is robust principal component analysis (RPCA), which decomposes a matrix A into a low-rank component L and a sparse outlier component S, addressing structured noise and corruptions that standard low-rank methods cannot handle. The problem is solved via the convex optimization \min_{L, S} \|L\|_* + \lambda \|S\|_1 subject to A = L + S, where \| \cdot \|_* is the nuclear norm promoting low rank and \| \cdot \|_1 the \ell_1-norm enforcing sparsity in S, with \lambda > 0 a regularization parameter. This formulation relaxes the non-convex rank constraint using the nuclear norm, as referenced in basic low-rank settings, but extends it to separate outliers explicitly. Algorithms like the alternating direction method of multipliers (ADMM) efficiently solve this, converging to the global optimum under mild conditions.

In practice, RPCA excels in outlier detection for face recognition, where the low-rank component L captures the intrinsic identity structure across aligned face images, while the sparse S isolates corruptions such as occlusions, shadows, or specularities from varying illumination. For instance, applying RPCA to a matrix of vectorized face images separates the clean low-rank component representing facial features from sparse errors, enabling robust alignment and recognition even with up to 10-20% corruptions per image. Theoretical guarantees ensure exact recovery of L and S with high probability, provided the low-rank part satisfies an incoherence condition—limiting energy concentration in few rows or columns—and the outliers are sufficiently sparse, with support size at most a fraction of the matrix dimensions.

Recent advances from 2020 to 2025 have extended these robust formulations to tensor data for multi-view learning, where multiple data modalities (e.g., images, text, audio) are jointly analyzed. Robust weighted low-rank tensor approximation decomposes a multi-view tensor into a low-rank tensor capturing shared structure and sparse noise tensors for view-specific outliers, formulated via tensor nuclear norm minimization with weights to balance views. This approach improves clustering accuracy in multi-view datasets by 5-15% over matrix-based methods, as demonstrated on benchmarks such as hand-gesture datasets, by preserving high-order correlations while mitigating cross-view inconsistencies. Similar tensor RPCA variants incorporate error-robust constraints for incomplete multi-view data, enabling recovery under noise and missing entries.
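
The principal component pursuit problem can be approached with a basic ADMM-style iteration that alternates singular value thresholding for L and soft thresholding for S; the sketch below (assuming NumPy) is a simplified illustration, with the penalty parameter mu and iteration count chosen ad hoc rather than taken from the literature:
python
import numpy as np

def soft_threshold(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca_admm(A, lam=None, mu=None, n_iters=200):
    m, n = A.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(A).sum() + 1e-12)
    L, S, Y = np.zeros_like(A), np.zeros_like(A), np.zeros_like(A)
    for _ in range(n_iters):
        L = svt(A - S + Y / mu, 1.0 / mu)             # low-rank update
        S = soft_threshold(A - L + Y / mu, lam / mu)  # sparse update
        Y = Y + mu * (A - L - S)                      # dual update
    return L, S

# Synthetic test: low-rank structure plus sparse corruptions
rng = np.random.default_rng(0)
L0 = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))
S0 = (rng.random((60, 40)) < 0.05) * 10 * rng.standard_normal((60, 40))
L, S = rpca_admm(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))    # relative recovery error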

Theoretical Foundations

Eckart–Young–Mirsky Theorem for Spectral Norm

The Eckart–Young–Mirsky theorem establishes the optimality of the truncated singular value decomposition (SVD) for low-rank approximation under the spectral norm, also known as the operator norm or 2-norm, defined as \|M\|_2 = \max_{\|x\|_2=1} \|M x\|_2, which equals the largest singular value of M. Specifically, for an m \times n matrix A with SVD A = U \Sigma V^T and singular values \sigma_1(A) \geq \sigma_2(A) \geq \cdots \geq \sigma_{\min(m,n)}(A) \geq 0, the theorem states that the minimum error over all rank-k approximations B (with k < \rank(A)) is given by \min_{\rank(B) \leq k} \|A - B\|_2 = \sigma_{k+1}(A), and this minimum is achieved by the truncated SVD A_k = \sum_{i=1}^k \sigma_i(A) u_i v_i^T, where u_i and v_i are the left and right singular vectors corresponding to \sigma_i(A).

The proof hinges on the min-max theorem for singular values, which characterizes \sigma_j(A) as \sigma_j(A) = \min_{\dim(S)=j-1} \max_{\|x\|_2=1, x \perp S} \|A x\|_2 = \max_{\dim(T)=n-j+1} \min_{\|x\|_2=1, x \in T} \|A x\|_2, where S and T are subspaces of the input space \mathbb{R}^n. To demonstrate the lower bound, consider any B with \rank(B) \leq k, written as B = X Y^T where X \in \mathbb{R}^{m \times k} and Y \in \mathbb{R}^{n \times k}. Let \mathcal{V}_{k+1} be the subspace spanned by the first k+1 right singular vectors v_1, \dots, v_{k+1} of A, which has dimension k+1. Since Y has rank at most k, the kernel of Y^T has dimension at least n - k \geq 1, and thus \mathcal{V}_{k+1} \cap \ker(Y^T) contains a nonzero vector. Scaling to a unit vector w \in \mathcal{V}_{k+1} with Y^T w = 0, it follows that B w = X (Y^T w) = 0. By the min-max theorem, \|A w\|_2 \geq \sigma_{k+1}(A), so \|(A - B) w\|_2 = \|A w\|_2 \geq \sigma_{k+1}(A), implying \|A - B\|_2 \geq \sigma_{k+1}(A). Direct computation shows \|A - A_k\|_2 = \sigma_{k+1}(A), confirming optimality.

Key steps in the argument involve: (1) identifying the subspace \mathcal{V}_{k+1} spanned by the top k+1 right singular vectors; (2) exploiting the rank constraint to find a unit vector w \in \mathcal{V}_{k+1} in the kernel of B; (3) applying the min-max characterization to bound \|A w\|_2 \geq \sigma_{k+1}(A), yielding the desired lower bound on \|A - B\|_2. These steps leverage the variational properties of singular values and the dimension of the kernel under low rank.

For illustration, consider the diagonal matrix A = \operatorname{diag}(4, 3, 1) with singular values \sigma_1 = 4, \sigma_2 = 3, \sigma_3 = 1. For k=1, the truncated SVD is A_1 = \operatorname{diag}(4, 0, 0), and the error matrix is A - A_1 = \operatorname{diag}(0, 3, 1), whose spectral norm is \|A - A_1\|_2 = 3 = \sigma_2(A). Any other rank-1 approximation, such as B = \operatorname{diag}(0, 3, 0), yields \|A - B\|_2 = \max(4, 0, 1) = 4 > 3, visualizing how the spectral norm captures the maximum directional error, here dominated by the second singular direction. This theorem applies specifically to the spectral norm, which quantifies the worst-case vector amplification and thus operator-level approximation quality, but it does not guarantee minimality for entry-wise errors (such as the \ell_\infty-norm on elements) or other non-operator norms, where different approximations may perform better.
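
The key step of the lower-bound argument can be checked numerically: for any rank-k matrix B, a unit vector w in the span of the top k+1 right singular vectors of A with B w = 0 satisfies \|A w\|_2 \geq \sigma_{k+1}(A). A small NumPy sketch (with an arbitrary rank-k matrix B chosen at random for illustration):
python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 8, 6, 2
A = rng.standard_normal((m, n))
U, s, Vt = np.linalg.svd(A)

B = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # arbitrary rank-k matrix

# Find a unit vector w in span{v_1, ..., v_{k+1}} with B w = 0
Vk1 = Vt[:k + 1].T                       # n x (k+1): top right singular vectors of A
M = B @ Vk1                              # rank <= k, so it has a nontrivial kernel
c = np.linalg.svd(M)[2][-1]              # kernel direction of M (smallest singular value ~ 0)
w = Vk1 @ c                              # unit vector, since Vk1 has orthonormal columns

print(np.linalg.norm(B @ w))             # ~0: w lies in the kernel of B
print(np.linalg.norm(A @ w), s[k])       # ||A w||_2 >= sigma_{k+1}
print(np.linalg.norm(A - B, 2), s[k])    # hence ||A - B||_2 >= sigma_{k+1}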

Eckart–Young–Mirsky Theorem for Frobenius Norm

The Eckart–Young–Mirsky theorem establishes the optimal low-rank approximation of a matrix under the Frobenius norm. For an m \times n matrix A with singular value decomposition A = U \Sigma V^T, where \Sigma = \operatorname{diag}(\sigma_1, \dots, \sigma_r) with \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_r \geq 0 and r = \min(m,n), the theorem states that \min_{\operatorname{rank}(B) \leq k} \|A - B\|_F = \sqrt{\sum_{i=k+1}^r \sigma_i^2}, with the minimizer given by the rank-k truncated singular value decomposition B_k = \sum_{i=1}^k \sigma_i u_i v_i^T. This result shows that the error equals the square root of the sum of the squares of the discarded singular values.

The proof proceeds by expanding the squared Frobenius norm of the error and leveraging the orthogonality of the singular vectors from the SVD. Consider \|A - B\|_F^2 = \|A\|_F^2 + \|B\|_F^2 - 2 \operatorname{Re} \langle A, B \rangle_F, where \langle \cdot, \cdot \rangle_F denotes the Frobenius inner product. Since \|A\|_F^2 = \sum_{i=1}^r \sigma_i^2 is fixed, minimizing the error requires balancing \|B\|_F^2 against maximizing \langle A, B \rangle_F subject to \operatorname{rank}(B) \leq k. Expressing B in the singular vector basis of A, let B = \sum_{i,j} c_{ij} u_i v_j^T; then \|B\|_F^2 = \sum_{i,j} |c_{ij}|^2 and \langle A, B \rangle_F = \sum_i \sigma_i c_{ii}. The cross terms vanish due to the orthogonality of U and V, yielding \|A - B\|_F^2 = \sum_{i=1}^r \sigma_i^2 + \sum_{i,j} |c_{ij}|^2 - 2 \sum_{i=1}^r \sigma_i c_{ii}. Under the rank constraint, this expression is minimized when the coefficients c_{ij} are zero except for the diagonal entries c_{ii} with i = 1, \dots, k, which are set to c_{ii} = \sigma_i, aligning B with the top singular directions.

Key steps in the derivation include substituting the SVD into the Frobenius norm, which decomposes it into a sum over singular values, and using the unitarity of U and V to ensure that off-diagonal contributions in the expansion do not affect the inner product alignment. The minimization follows from the Cauchy-Schwarz inequality applied to the coefficients, confirming that the truncated SVD achieves the bound without cross-term interference. This approach highlights the theorem's reliance on the geometric properties of the singular subspaces.

For a concrete illustration, consider the rank-2 matrix A = \operatorname{diag}(3, 1). Its singular values are \sigma_1 = 3 and \sigma_2 = 1. The optimal rank-1 approximation is B_1 = \operatorname{diag}(3, 0), with error \|A - B_1\|_F = \sqrt{1^2} = 1. Any other rank-1 matrix, such as \operatorname{diag}(2, 0), yields a larger error of \sqrt{(3-2)^2 + 1^2} = \sqrt{2} > 1, demonstrating the theorem's optimality. Mirsky extended the theorem in 1960 to all unitarily invariant norms, where the minimal error is \|\left( \sigma_{k+1}, \dots, \sigma_r \right)\| in the corresponding vector norm, encompassing the Frobenius norm as a special case via the Euclidean norm on singular values.
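
A quick numerical sanity check (a NumPy sketch) confirms that the truncated SVD attains the stated error and that perturbed rank-k competitors never do better:
python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 8))
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Bk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
opt_err = np.linalg.norm(A - Bk, 'fro')
print(opt_err, np.sqrt(np.sum(s[k:]**2)))       # equal, per the theorem

# Randomly perturbed rank-k candidates never beat the truncated SVD
best_competitor = min(
    np.linalg.norm(A - (U[:, :k] + 0.1 * rng.standard_normal((10, k)))
                   @ np.diag(s[:k]) @ Vt[:k], 'fro')
    for _ in range(1000)
)
print(best_competitor >= opt_err)               # True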

Representations of Rank Constraints

The rank constraint on a matrix B \in \mathbb{R}^{m \times n}, namely \operatorname{rank}(B) \leq k, can be equivalently expressed using the image representation. Specifically, \operatorname{rank}(B) \leq k if and only if the image (column space) of B, denoted \operatorname{im}(B), is contained in a subspace spanned by at most k vectors, meaning there exists a matrix P \in \mathbb{R}^{m \times k} such that B = P L for some L \in \mathbb{R}^{k \times n}. This formulation is particularly useful in optimization problems involving structured low-rank approximations, where the image constraint helps enforce the subspace structure while allowing alternating least-squares updates for P and L.

An alternative is the kernel representation, which leverages the dimension of the null space. Here, \operatorname{rank}(B) \leq k if and only if the orthogonal complement of \ker(B^T) (the column space of B) has dimension at most k in \mathbb{R}^m, or equivalently, \ker(B^T) has dimension at least m - k. This means there exists a matrix R \in \mathbb{R}^{(m-k) \times m} of full row rank such that R B = 0, capturing the left null space conditions. Such representations are equivalent to the image representation but prove advantageous in local optimization algorithms for large k, as they directly parameterize the null space constraints.

The factorized form provides a practical parameterization for optimization, expressing B = U V^T where U \in \mathbb{R}^{m \times k} and V \in \mathbb{R}^{n \times k}, thereby avoiding explicit enforcement of the nonconvex rank function. This low-dimensional representation reduces the number of variables from m n to k (m + n), facilitating nonconvex optimization techniques like alternating minimization while ensuring \operatorname{rank}(B) \leq k. A related variational characterization states that \operatorname{rank}(B) = \frac{1}{2} \min \{ p + q \mid B = C D, \, C \in \mathbb{R}^{m \times p}, \, D \in \mathbb{R}^{q \times n} \}, achieved when p = q = \operatorname{rank}(B), highlighting the minimal inner dimensions needed to factorize B.

In semidefinite programming formulations, exact rank constraints are nonconvex and challenging, so they are often relaxed using semidefinite lifts. For instance, lifting B to a larger positive semidefinite matrix Z \succeq 0 via Z = \begin{pmatrix} I_k & U \\ U^T & W \end{pmatrix} (related to low-rank factorizations) allows relaxing \operatorname{rank}(Z) \leq k through constraints like \operatorname{trace}(Z) \leq \tau, where \tau bounds the sum of the top k eigenvalues, promoting low-rank solutions in the relaxation. This approach trades exactness for tractability in applications like matrix completion. A concrete application arises in rank-constrained quadratic programming (QP), such as minimizing a quadratic objective subject to \operatorname{rank}(X) \leq k. By substituting X = U U^T with U \in \mathbb{R}^{m \times k}, the problem transforms into an unconstrained nonconvex QP over U, solvable via standard nonlinear solvers while implicitly satisfying the rank bound. This technique, known as the Burer-Monteiro method, empirically achieves global optimality for sufficiently small k in many instances.
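
The equivalence of the image (factorized) and kernel representations can be illustrated numerically; in the sketch below (assuming NumPy), a matrix built as B = P L has rank at most k, and a full-row-rank annihilator R with R B = 0 is read off from the left null space:
python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 7, 5, 2

# Image (factorized) representation: B = P @ L has rank at most k
P = rng.standard_normal((m, k))
L = rng.standard_normal((k, n))
B = P @ L
print(np.linalg.matrix_rank(B))           # 2

# Kernel representation: rows of R span the left null space of B, so R @ B = 0
U, s, Vt = np.linalg.svd(B)
R = U[:, k:].T                            # (m - k) x m matrix of full row rank
print(np.allclose(R @ B, 0))              # True
print(np.linalg.matrix_rank(R), m - k)    # both equal m - k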

Algorithms and Methods

Singular Value Decomposition Approach

The singular value decomposition (SVD) provides a classical and optimal method for computing low-rank approximations of matrices by factorizing a matrix A \in \mathbb{R}^{m \times n} (with m \geq n) as A = U \Sigma V^T, where U \in \mathbb{R}^{m \times m} and V \in \mathbb{R}^{n \times n} are orthogonal matrices, and \Sigma \in \mathbb{R}^{m \times n} is a rectangular diagonal matrix containing the singular values \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n \geq 0 along its diagonal. To obtain a rank-k approximation B_k (with k < n), the decomposition is truncated by retaining only the top k singular values and corresponding singular vectors, yielding B_k = U_k \Sigma_k V_k^T, where U_k consists of the first k columns of U, \Sigma_k is the k \times k diagonal matrix of the largest singular values, and V_k is the first k columns of V. This truncation directly follows the Eckart–Young–Mirsky theorem, which guarantees optimality in both the Frobenius and spectral norms.

For large-scale or sparse matrices where computing the full SVD is prohibitive, partial SVD variants focus on approximating only the dominant k singular triplets. The Lanczos algorithm, adapted for the SVD via bidiagonalization, iteratively builds an orthonormal basis for the Krylov subspace generated by A^T A or A A^T, enabling efficient computation of the largest singular values and vectors without forming the full decomposition; this is particularly effective for sparse matrices with localized structure. QR-based methods, such as those involving Householder transformations to reduce the matrix to bidiagonal form followed by QR iterations on the bidiagonal factor, allow for partial computation by targeting the extreme singular values, avoiding the need for a dense full factorization.

The standard implementation of the full SVD, as detailed in the Golub-Reinsch algorithm, first applies Householder reflections for bidiagonalization in O(m n^2) time (assuming m \geq n), followed by QR iterations to deflate and compute the singular values, achieving overall complexity of O(m n^2). This algorithm is backward stable, meaning the computed factorization \widehat{U} \widehat{\Sigma} \widehat{V}^T is the exact SVD of a nearby matrix A + \Delta A with a small relative perturbation \|\Delta A\| / \|A\| = O(\epsilon), where \epsilon is machine precision, ensuring reliable results even for ill-conditioned matrices. In practice, truncated SVD can be implemented succinctly in numerical software. For example, in Python using SciPy:
python
import numpy as np
from scipy.linalg import svd

A = np.random.rand(100, 50)  # Example m x n matrix
U, s, Vt = svd(A, full_matrices=False)  # Economy SVD
k = 10
B_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # Truncated rank-k approximation
Similarly, in MATLAB:
matlab
A = rand(100, 50);  % Example m x n matrix
[U, S, V] = svd(A, 'econ');  % Economy SVD
k = 10;
B_k = U(:, 1:k) * S(1:k, 1:k) * V(:, 1:k)';  % Truncated rank-k approximation
These snippets compute the partial SVD and truncation efficiently for dense matrices. The approach is particularly suitable for dense matrices where an exact optimal low-rank approximation in the Frobenius or spectral norm is required, such as in data compression or noise reduction, provided the matrix dimensions allow the O(m n^2) cost. For very large dense cases, partial variants mitigate storage and computation time, but alternatives may be needed if sparsity or further scalability is essential.

Projection-Based Methods

Projection-based methods for low-rank approximation involve iterative algorithms that enforce the rank constraint by projecting onto appropriate subspaces or sets, often applied to structured problems where the low-rank structure aligns with factorized representations such as B = UV^T. These methods are particularly useful for problems where direct computation of the global optimum is infeasible, allowing local optimization through successive projections.

One foundational approach is the method of alternating projections, which approximates a matrix A by iteratively projecting onto the low-rank set defined by the rank constraint. Starting from an initial estimate, the method alternates between projecting A onto the space of matrices with a fixed row space (spanned by the current left factors) and then onto the space with a fixed column space (spanned by the right factors), effectively enforcing the rank constraint through these orthogonal projections. This method traces back to the work of von Neumann on the convergence of alternating projections onto closed subspaces.

A prominent variant tailored to separable structures is the variable projection (VARPRO) algorithm, which addresses the nonlinear least squares problem \min_{U,V} \|A - UV^T\|_F^2 by fixing one factor and solving a linear least-squares problem for the other, then alternating. Specifically, for fixed U, the optimal V satisfies V^T = (U^T U)^{-1} U^T A (assuming U has full column rank), and vice versa, reducing the optimization to the nonlinear parameters while analytically eliminating the linear ones. Introduced by Golub and Pereyra, VARPRO exploits the separability to improve efficiency and convergence behavior in low-rank fitting. Under full rank assumptions on the Jacobians of the nonlinear factors, VARPRO exhibits local convergence, typically rapid near the solution, provided the problem is well-posed and the initial guess is sufficiently close. This property has been analyzed in extensions of the original framework, confirming its reliability for structured low-rank problems. A classical application of VARPRO arises in fitting separable nonlinear models, such as multi-exponential decay, where the signal is modeled as y(t) = \sum_{i=1}^r a_i e^{b_i t}, rewritten in matrix form with nonlinear exponents b_i and linear amplitudes a_i; here, VARPRO iteratively optimizes the b_i while solving for the a_i exactly at each step, enabling accurate recovery even for ill-conditioned cases.

For convex relaxations of low-rank approximation, a variant employs projections onto the nuclear norm ball \{ X : \|X\|_* \leq \tau \}, where the nuclear norm \|X\|_* = \sum_i \sigma_i(X) serves as a convex surrogate for rank. Algorithms like projected gradient descent minimize \|A - X\|_F^2 subject to this constraint by iteratively updating X via gradient steps followed by projection onto the ball, which thresholds the singular values to enforce the bound. This approach, rooted in nuclear norm minimization techniques, provides guarantees for recovery under incoherence conditions.
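
The projection onto the nuclear norm ball admits a compact implementation: compute the SVD, project the vector of singular values onto the \ell_1-ball of radius \tau, and rebuild the matrix. The sketch below assumes NumPy and uses the standard sort-based routine for the \ell_1-ball projection of a non-negative vector:
python
import numpy as np

def project_l1_ball_nonneg(v, tau):
    """Project a non-negative vector v onto {x >= 0 : sum(x) <= tau}."""
    if v.sum() <= tau:
        return v
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - tau))[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_nuclear_ball(X, tau):
    """Project X onto the nuclear norm ball {Z : ||Z||_* <= tau}."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(project_l1_ball_nonneg(s, tau)) @ Vt

A = np.random.rand(20, 15)
X = project_nuclear_ball(A, tau=3.0)
print(np.linalg.svd(X, compute_uv=False).sum())   # <= 3.0 (up to roundoff)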

Randomized and Sampling-Based Methods

Randomized methods for low-rank approximation leverage probabilistic techniques to efficiently compute near-optimal approximations for large-scale matrices, particularly in scenarios where exact singular value decomposition (SVD) is computationally prohibitive due to time and memory constraints. A foundational approach is the randomized SVD algorithm, which projects the input matrix A \in \mathbb{R}^{m \times n} onto a low-dimensional random subspace to capture its dominant singular structure. Specifically, a Gaussian random matrix \Omega \in \mathbb{R}^{n \times (k+l)} is generated, where k is the target rank and l is a small oversampling parameter (typically l = 10 to 20); the projection Y = A \Omega is then formed, followed by the QR decomposition Y = QR and a thin SVD of the smaller matrix B = Q^T A. The resulting approximation is \tilde{A} = Q \tilde{U} \Sigma V^T, where \tilde{U} \Sigma V^T is the SVD of B, so the approximate left singular vectors are given by Q \tilde{U}. This method requires only a few matrix-vector multiplications with A and reduces the SVD computation to a much smaller matrix of size (k+l) \times n. The error guarantee for this randomized range finder ensures that the spectral norm of the approximation error satisfies \|A - Q Q^T A\|_2 \leq (1 + \epsilon) \sigma_{k+1}(A) with high probability, where \sigma_{k+1}(A) is the (k+1)-th singular value of A and \epsilon can be made arbitrarily small by increasing the oversampling l \approx k / \epsilon. To improve accuracy for matrices with slowly decaying singular values, power iterations can be incorporated by applying the random projection to (A A^T)^p A for a small power p (e.g., p=2), which sharpens the effective decay of the spectrum and enhances capture of the top singular vectors without significantly increasing computational cost. These techniques achieve near-linear time complexity O(mn \log k) in practice, making them scalable for matrices with millions of rows and columns.

In streaming environments, where data arrives sequentially and storage is limited, single-pass variants of these randomized methods maintain a compact sketch of the data while updating the low-rank approximation on the fly. One such algorithm uses randomized linear sketching to compress incoming columns into a fixed-size sketch S \in \mathbb{R}^{(k+l) \times d}, where d is the ambient dimension, followed by periodic updates on the sketch to track the evolving low-rank structure. Power iterations can be adapted here by repeatedly multiplying the sketch with a random test matrix before updating, ensuring the approximation error remains bounded by a factor of the optimal low-rank error with constant probability after a single pass over the stream. This approach is particularly effective for applications like scientific simulations, where it has demonstrated relative errors below 1% for compressing time-evolving datasets such as Navier-Stokes flows.

For distributed settings, sampling-based methods like the CUR decomposition enable parallel computation by selecting a subset of actual columns C \in \mathbb{R}^{m \times c} and rows R \in \mathbb{R}^{r \times n} from A (with c, r \approx k), then computing a small intersection matrix U \in \mathbb{R}^{c \times r} to form the approximation A \approx C U R. The objective is to minimize the Frobenius norm \|A - C U R\|_F, often achieved via leverage score sampling for probabilistic guarantees, where columns are chosen with probability proportional to their importance in the top singular subspace. CUR's interpretability—retaining real data subsets—facilitates distributed execution, as column selection and U computation can be parallelized across nodes with minimal communication.
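
A compact sketch of the randomized range finder with optional power iterations (assuming NumPy; the oversampling and power parameters are illustrative defaults rather than tuned values):
python
import numpy as np

def randomized_svd(A, k, oversample=10, n_power=2, seed=0):
    """Approximate the top-k SVD of A via random projection and power iterations."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n, k + oversample))   # random test matrix
    Y = A @ Omega                                      # sample the range of A
    for _ in range(n_power):                           # power iterations sharpen decay
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                             # orthonormal basis for the range
    B = Q.T @ A                                        # small (k + oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

A = np.random.rand(500, 200)
U, s, Vt = randomized_svd(A, k=10)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt, 2)
print(err, np.linalg.svd(A, compute_uv=False)[10])     # error is lower-bounded by sigma_{k+1}
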
These methods have been applied to approximate billion-scale matrices in frameworks like Hadoop and Spark; for instance, distributed algorithms on Spark process sparse matrices with billions of nonzeros by partitioning rows and columns across clusters. Recent advances from 2023 to 2025 have extended these techniques to non-stationary streams in real-time settings, incorporating adaptive sampling to handle evolving data distributions. Similarly, randomized single-pass methods for parameter-dependent matrices use adaptive sketches to capture non-stationary variations, demonstrating improved reconstruction errors compared to static baselines in time-varying simulations.
