
Non-negative least squares

Non-negative least squares (NNLS) is a constrained optimization problem that seeks to minimize the squared Euclidean norm \|Ax - b\|_2^2, where A is an m \times n matrix, b is an m-dimensional vector, and the solution x must satisfy x \geq 0 componentwise. This formulation extends the classical least squares problem by imposing non-negativity constraints, ensuring that the estimated parameters cannot be negative, which is essential when modeling phenomena where negative values are physically or contextually impossible, such as concentrations in chemical mixtures or intensities in image processing. As a convex quadratic program with linear inequality constraints, NNLS guarantees a global minimum and is computationally tractable for moderate-sized problems.

The foundational algorithm for solving NNLS is the active set method developed by Charles L. Lawson and Richard J. Hanson, first detailed in their 1974 book Solving Least Squares Problems and later revised in the 1995 SIAM edition. This algorithm iteratively identifies and adjusts the set of active constraints (where x_i = 0) using a sequence of unconstrained least squares subproblems, achieving efficiency through passive set strategies that avoid unnecessary checks. Modern implementations, such as those in SciPy and MATLAB, build on this approach, often incorporating block principal pivoting for faster convergence in large-scale settings. Variations include projected gradient methods and interior-point techniques for handling sparsity or additional regularization.

NNLS finds broad applications across scientific and engineering domains where non-negativity is inherent. In machine learning, it serves as a core subroutine in non-negative matrix factorization (NMF), enabling the decomposition of data matrices into interpretable, parts-based representations for tasks like topic modeling and recommender systems. In remote sensing and astronomy, NNLS is used for hyperspectral unmixing, estimating the abundance of endmember materials in mixed pixel spectra while enforcing non-negative fractions. Additional uses include portfolio optimization in finance, where asset weights must be non-negative, and compressed sensing for sparse recovery from non-negative measurements. These applications highlight NNLS's role in promoting sparsity and interpretability in high-dimensional data analysis.

Formulation

Problem Statement

The non-negative least squares (NNLS) problem seeks to determine a non-negative vector x \in \mathbb{R}^n with x \geq 0 that minimizes the squared Euclidean norm of the residual, formulated as \min_{x \geq 0} \| Ax - b \|_2^2, where A is an m \times n matrix with m \geq n, b is an m-dimensional vector, and the objective function measures the fit of the linear model Ax to the data b. This formulation arises in scenarios where the entries of x must remain non-negative due to their interpretation as physical or probabilistic quantities, such as chemical concentrations in mixture analysis, where negative values lack physical meaning.

The NNLS problem extends the classical unconstrained least squares problem, which solves \min_x \| Ax - b \|_2^2 via the normal equations A^T A x = A^T b (assuming A^T A is invertible, yielding x = (A^T A)^{-1} A^T b). In cases where this unconstrained solution violates the non-negativity requirement by producing negative components, the NNLS formulation enforces x \geq 0, restricting the minimization to the non-negative orthant while retaining the same least squares objective. This adjustment ensures feasibility for applications like spectroscopic quantification, where x represents relative concentrations that must be non-negative.

For illustration, consider a one-dimensional example with m = n = 1, A = 1, and b = -1. The unconstrained solution is x = -1, which is infeasible under the non-negativity constraint; thus, the NNLS solution sets x = 0, yielding a residual of \| -1 \|_2^2 = 1. This simple case highlights how the constraint overrides the unconstrained minimizer to maintain interpretability.
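The following sketch, assuming NumPy and SciPy are available, reproduces the one-dimensional example above with scipy.optimize.nnls and contrasts an unconstrained solution that contains a negative component with its non-negative counterpart; the specific matrices are illustrative only.

```python
import numpy as np
from scipy.optimize import nnls

# One-dimensional example from the text: A = 1, b = -1.
A1 = np.array([[1.0]])
b1 = np.array([-1.0])
x1, rnorm1 = nnls(A1, b1)
print(x1, rnorm1)  # x = [0.], residual norm = 1.0

# A small overdetermined system for comparison with the unconstrained solution.
A = np.array([[1.0, 0.5],
              [0.5, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, -0.8, 0.2])

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)   # unconstrained; may contain negative entries
x_nn, rnorm = nnls(A, b)                       # constrained to x >= 0

print("unconstrained:", x_ls)
print("NNLS:", x_nn, "residual norm:", rnorm)
```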

Quadratic Programming Equivalence

The non-negative least squares (NNLS) problem, which seeks to minimize \|\mathbf{Ax} - \mathbf{b}\|^2_2 subject to \mathbf{x} \geq \mathbf{0}, can be equivalently reformulated as a quadratic programming (QP) problem. Expanding one half of the squared norm, which has the same minimizers, yields the objective function \frac{1}{2} \mathbf{x}^T (\mathbf{A}^T \mathbf{A}) \mathbf{x} - (\mathbf{A}^T \mathbf{b})^T \mathbf{x} + \frac{1}{2} \mathbf{b}^T \mathbf{b}, where the constant term \frac{1}{2} \mathbf{b}^T \mathbf{b} does not affect the optimization and can be omitted. Thus, the NNLS problem is equivalent to minimizing the quadratic function \frac{1}{2} \mathbf{x}^T \mathbf{Q} \mathbf{x} + \mathbf{c}^T \mathbf{x} subject to \mathbf{x} \geq \mathbf{0}, with \mathbf{Q} = \mathbf{A}^T \mathbf{A} and \mathbf{c} = -\mathbf{A}^T \mathbf{b}. The Hessian matrix \mathbf{Q} = \mathbf{A}^T \mathbf{A} is symmetric and positive semidefinite, as \mathbf{x}^T (\mathbf{A}^T \mathbf{A}) \mathbf{x} = \|\mathbf{Ax}\|^2_2 \geq 0 for all \mathbf{x}, with equality if and only if \mathbf{Ax} = \mathbf{0}. This property guarantees that the QP objective is convex. This formulation positions NNLS as a special case of bound-constrained quadratic programming, featuring simple non-negativity bounds \mathbf{x} \geq \mathbf{0} and no general linear constraints. The recognition of NNLS as a QP traces back to early optimization literature, but its formalization in this context was established by Lawson and Hanson in their seminal 1974 work on solving least squares problems.
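As a rough illustration of the equivalence, the sketch below builds Q = A^T A and c = -A^T b for a random problem, minimizes the QP objective with SciPy's bound-constrained L-BFGS-B solver, and checks the result against scipy.optimize.nnls; the random data and tolerances are arbitrary choices for this example.

```python
import numpy as np
from scipy.optimize import minimize, nnls

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

# QP data: Q = A^T A, c = -A^T b (the constant 0.5 * b^T b is dropped).
Q = A.T @ A
c = -A.T @ b

def qp_obj(x):
    return 0.5 * x @ Q @ x + c @ x

def qp_grad(x):
    return Q @ x + c

x0 = np.zeros(A.shape[1])
res = minimize(qp_obj, x0, jac=qp_grad, method="L-BFGS-B",
               bounds=[(0, None)] * A.shape[1])   # simple non-negativity bounds

x_ref, _ = nnls(A, b)
print(np.allclose(res.x, x_ref, atol=1e-6))  # should agree within solver tolerance
```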

Theoretical Properties

Convexity and Feasibility

The non-negative least squares (NNLS) problem minimizes the function \|Ax - b\|_2^2 subject to the constraint x \geq 0, where A is an m \times n matrix and b is an m-dimensional vector. The objective is convex because it is the quadratic function f(x) = x^\top (A^\top A) x - 2 b^\top A x + \|b\|_2^2 whose Hessian H = 2 A^\top A is positive semidefinite, as all eigenvalues of A^\top A are non-negative. The feasible set defined by x \geq 0 is the non-negative orthant \mathbb{R}^n_+, which is a polyhedral cone, ensuring that any convex combination of feasible points remains feasible. Consequently, the NNLS problem is a convex optimization problem, as it involves minimizing a convex function over a convex set.

The NNLS problem is always feasible, since the origin x = 0 satisfies the non-negativity constraints and yields a finite objective value \|b\|_2^2. In many applications, such as spectral unmixing or matrix factorization with non-negative data, the matrix A has non-negative entries and b \geq 0, which aligns with the problem's physical interpretability. The convexity of NNLS has key implications for optimization: any local minimum is also a global minimum, avoiding the local optima traps common in non-convex problems and enabling the development of efficient, convergent algorithms like active set methods. Geometrically, the objective function resembles an upward-opening paraboloid in \mathbb{R}^n, restricted to the non-negative orthant by the feasible set, where the minimum occurs at the lowest point of this paraboloid within the orthant boundaries.
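A quick numerical check of the positive semidefiniteness claim, using an arbitrary random matrix: the eigenvalues of H = 2 A^T A are non-negative up to rounding error.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))

# Hessian of f(x) = ||Ax - b||_2^2 is H = 2 A^T A; its eigenvalues are >= 0.
H = 2 * A.T @ A
eigvals = np.linalg.eigvalsh(H)
print(eigvals)                      # all non-negative (up to rounding error)
print(np.all(eigvals >= -1e-12))    # True: H is positive semidefinite
```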

Existence and Uniqueness of Solutions

The non-negative least squares (NNLS) problem always admits at least one optimal solution for any matrix A \in \mathbb{R}^{m \times n} and vector b \in \mathbb{R}^m. This existence is guaranteed because the problem is a convex quadratic program with a nonempty feasible set (the nonnegative orthant \mathbb{R}^n_+, which includes x = 0) and a continuous objective function \frac{1}{2} \|Ax - b\|_2^2 that attains its infimum over the feasible set. The objective is coercive in the sense that it tends to infinity as \|x\|_2 \to \infty along directions where Ax grows, while remaining bounded below (e.g., at x = 0, the value is \frac{1}{2} \|b\|_2^2); in cases where the null space of A intersects the nonnegative orthant nontrivially, the objective is constant along those recession directions, but the minimum is still attained on a polyhedral subset of the feasible set.

Uniqueness of the solution holds if A has full column rank, i.e., \operatorname{rank}(A) = n. In this case, the Hessian A^T A is positive definite, making the objective strictly convex over \mathbb{R}^n, and thus the minimizer over the convex feasible set \mathbb{R}^n_+ is unique. If \operatorname{rank}(A) < n, the columns of A are linearly dependent, and the solution set may be nonunique, consisting of all x \geq 0 that achieve the minimum residual, namely an affine subspace intersected with the nonnegative orthant; this occurs when there exists a nonzero d \geq 0 in the null space of A, allowing multiple points to yield the same objective value.

Any optimal solution x^* to the NNLS problem satisfies the Karush-Kuhn-Tucker (KKT) conditions, which are necessary and sufficient for optimality due to the problem's convexity. These conditions are:
  • Primal feasibility: x^* \geq 0,
  • Dual feasibility: \lambda^* \geq 0,
  • Complementary slackness: x_i^* \lambda_i^* = 0 for all i = 1, \dots, n,
  • Stationarity: A^T (Ax^* - b) - \lambda^* = 0.
The complementary slackness condition partitions the variables into active constraints (where x_i^* = 0 and \lambda_i^* \geq 0) and inactive ones (where x_i^* > 0 and \lambda_i^* = 0). For an illustration of nonuniqueness, consider A = \begin{bmatrix} 1 & 1 \end{bmatrix} and b = 1, where the columns are linearly dependent (\operatorname{rank}(A) = 1 < 2). The optimal solutions are all x = (t, 1 - t)^T for 0 \leq t \leq 1, achieving zero residual on the line segment where x_1 + x_2 = 1 and x \geq 0. In contrast, if A = \begin{bmatrix} 1 \end{bmatrix} and b = 1 (full column rank), the unique solution is x^* = 1.
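The KKT conditions above can be checked numerically for a solution returned by an off-the-shelf solver. The sketch below, assuming SciPy's nnls and a randomly generated problem, recovers the multipliers from the stationarity condition \lambda^* = A^T (A x^* - b) and verifies feasibility and complementary slackness within a small tolerance.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 6))
b = rng.standard_normal(30)

x, _ = nnls(A, b)
lam = A.T @ (A @ x - b)   # multipliers implied by the stationarity condition

tol = 1e-8
print("primal feasibility:", np.all(x >= -tol))
print("dual feasibility:  ", np.all(lam >= -tol))
print("complementarity:   ", np.all(np.abs(x * lam) <= 1e-6))
```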

Solution Methods

Active Set Methods

Active set methods for non-negative least squares (NNLS) operate by iteratively partitioning the variables into an active set, where components are constrained to zero, and a passive set, where variables are free to take non-negative values. At each iteration, the method solves an unconstrained least squares problem on the passive set to obtain a candidate solution, then adjusts the sets based on whether the solution violates the non-negativity constraints. This approach leverages the Karush-Kuhn-Tucker (KKT) optimality conditions, which ensure that at optimality, variables in the active set are zero with non-positive dual values w_i = [A^T (b - Ax)]_i, while passive variables take non-negative values with w_i = 0.

The seminal Lawson-Hanson algorithm, introduced in 1974, implements this strategy through a two-phase process: a forward phase to expand the passive set and a backward phase to contract it when necessary. In the forward phase, the algorithm identifies variables in the active set with positive dual values (indicating potential benefit from inclusion) and adds the one with the largest value to the passive set; it then solves the reduced least squares problem on the updated passive set. If any passive variable becomes non-positive, the backward phase activates: the algorithm takes the largest step toward the candidate solution that keeps all passive variables non-negative, then removes the variable (or variables) that reach zero from the passive set, restoring feasibility. The residual is updated as r = b - A x after each passive set solve, where A is the design matrix, b the response vector, and x the current solution. The process repeats until no further adjustments are needed, guaranteeing convergence in a finite number of steps due to the monotonic decrease in the objective function and the finite number of possible active sets.

The following outline summarizes the core steps of the Lawson-Hanson algorithm:
  1. Initialization: Set the passive set P = \emptyset (all variables active, x = 0), compute the initial residual r = b and the dual vector w = A^T r.
  2. Main loop: While w_j > 0 for some j in the active set, move the index with the largest w_j into P, solve the unconstrained least squares problem restricted to the columns in P, and, if any passive component of the candidate solution is non-positive, step back to the boundary, return the variables that reach zero to the active set, and re-solve; then update x, r = b - A x, and w = A^T r.
  3. Termination: When \max(w_{active}) \leq 0, x is optimal.
This procedure requires solving least squares subproblems, typically via normal equations or QR factorization for efficiency. In terms of computational complexity, the algorithm terminates in a finite number of iterations, but the worst-case time complexity is exponential in the number of variables due to the potential for exploring a large subset of the 2^n possible active sets. However, in practice, it exhibits polynomial behavior for many problem instances, particularly when the optimal active set is sparse, making it efficient for moderate-sized problems. The algorithm is implemented in standard libraries, such as MATLAB's lsqnonneg function, which follows this exact procedure. As the first algorithm dedicated specifically to NNLS, the Lawson-Hanson method laid the foundation for subsequent developments in constrained quadratic optimization, influencing active set approaches in broader optimization contexts.
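A compact, readability-oriented sketch of the Lawson-Hanson procedure is given below; it follows the outline above but does not handle degenerate or rank-deficient cases robustly, so production code should prefer a library routine such as scipy.optimize.nnls. The function name, tolerances, and iteration cap are choices made for this example.

```python
import numpy as np

def lawson_hanson_nnls(A, b, tol=1e-10, max_iter=None):
    """Sketch of the Lawson-Hanson active set algorithm for
    min ||Ax - b||_2^2 subject to x >= 0 (not robust to degenerate cases)."""
    m, n = A.shape
    if max_iter is None:
        max_iter = 3 * n
    passive = np.zeros(n, dtype=bool)        # True = variable is in the passive set P
    x = np.zeros(n)
    w = A.T @ (b - A @ x)                    # dual vector

    for _ in range(max_iter):
        if passive.all() or np.max(w[~passive]) <= tol:
            break                            # optimality conditions satisfied
        # Forward phase: move the most promising active variable into P.
        j = np.argmax(np.where(~passive, w, -np.inf))
        passive[j] = True

        # Solve the unconstrained least squares problem on the passive columns.
        z = np.zeros(n)
        z[passive] = np.linalg.lstsq(A[:, passive], b, rcond=None)[0]

        # Backward phase: step back to the boundary while any passive entry is <= 0.
        while passive.any() and np.min(z[passive]) <= tol:
            neg = passive & (z <= tol)
            alpha = np.min(x[neg] / (x[neg] - z[neg]))   # largest feasible step
            x = x + alpha * (z - x)
            passive &= x > tol                           # variables hitting zero leave P
            z = np.zeros(n)
            z[passive] = np.linalg.lstsq(A[:, passive], b, rcond=None)[0]

        x = z
        w = A.T @ (b - A @ x)
    return x

if __name__ == "__main__":
    from scipy.optimize import nnls
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 10))
    b = rng.standard_normal(50)
    # Should agree with the reference solver on a well-conditioned random problem.
    print(np.allclose(lawson_hanson_nnls(A, b), nnls(A, b)[0], atol=1e-8))
```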

Projected and Coordinate Descent Methods

Projected gradient descent addresses the non-negative least squares (NNLS) problem by iteratively applying a gradient step followed by projection onto the non-negative orthant. The update rule is \mathbf{x}_{k+1} = \mathrm{proj}_{\mathbb{R}_{\geq 0}} \left( \mathbf{x}_k - \alpha_k \nabla f(\mathbf{x}_k) \right), where f(\mathbf{x}) = \frac{1}{2} \| A\mathbf{x} - \mathbf{b} \|^2_2, the gradient is \nabla f(\mathbf{x}_k) = A^T (A \mathbf{x}_k - \mathbf{b}), the projection \mathrm{proj}_{\mathbb{R}_{\geq 0}}(\mathbf{z}) applies the component-wise operation \max(0, z_i), and the step size \alpha_k > 0 is typically selected via line search to ensure sufficient decrease in the objective value. This approach exploits the separability of the non-negativity constraints, making the projection computationally inexpensive, and converges at a sublinear rate of O(1/k) under standard assumptions on the step size.

Accelerated variants of projected gradient descent incorporate momentum or restarts to achieve faster, potentially linear, convergence. For instance, adaptive restarts reset the momentum parameters when the objective increases, ensuring monotonicity while maintaining efficiency for medium-scale problems. These methods are particularly suited to NNLS because the objective is smooth, allowing straightforward use of Lipschitz-based or backtracking rules for step size adaptation.

Coordinate descent methods for NNLS proceed by cyclically updating a single variable at a time, solving a scalar non-negative least squares subproblem while fixing the others. The update for the j-th coordinate minimizes f(\mathbf{x}) with respect to x_j \geq 0, yielding a closed-form expression involving the current residual and the j-th column of A. These updates are exact for each coordinate and leverage the problem's separability, enabling parallelization across variables in distributed settings. Accelerations for coordinate descent include variable selection strategies that prioritize promising coordinates based on gradient magnitudes, reducing iterations for sparse solutions, and frugal schemes that skip unnecessary updates in large-scale instances. Fast implementations, such as those refining initial approximations with sequential coordinate updates, achieve high accuracy with low computational overhead, often outperforming block-wise alternatives in practice.

Recent developments from 2020 to 2025 have focused on scalability and robustness. A scale-invariant algorithm for NNLS with non-negative data, presented at NeurIPS 2022, eliminates dependence on input scaling by normalizing residuals, achieving faster convergence than standard projected methods on tasks like price prediction. In 2024, a unified sparse framework enabled fast coordinate descent for high-dimensional NNLS, exploiting local uniqueness of sparse solutions to accelerate updates and attain O(1/k) rates, with extensions to simplex-constrained variants. Two-stage methods, such as those combining alternating NNLS with interior-point refinement, have improved efficiency for constrained formulations like simplex-bounded problems, reducing solve times by orders of magnitude. Accelerated projected gradient methods with restarts can further yield linear convergence under strong convexity conditions. These projected and coordinate descent approaches offer key advantages for large-scale NNLS, including better handling of high-dimensional data through simple iterations and inherent parallelizability, contrasting with the exact subproblem solves in active set methods that scale poorly beyond moderate sizes.
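The following sketch implements both update schemes described above for the objective \frac{1}{2}\|Ax - b\|_2^2: a projected gradient loop with a constant 1/L step size, and a cyclic coordinate descent loop with the closed-form non-negative scalar update. Function names, step-size rules, and stopping criteria are illustrative choices, not a reference implementation.

```python
import numpy as np

def nnls_projected_gradient(A, b, n_iter=5000, tol=1e-8):
    """Projected gradient descent for min 0.5*||Ax - b||^2 with x >= 0,
    using the constant step size 1/L, where L = ||A^T A||_2."""
    AtA = A.T @ A
    Atb = A.T @ b
    L = np.linalg.norm(AtA, 2)              # largest eigenvalue of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = AtA @ x - Atb
        x_new = np.maximum(0.0, x - grad / L)    # gradient step + projection
        if np.linalg.norm(x_new - x) <= tol * (1 + np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

def nnls_coordinate_descent(A, b, n_sweeps=200):
    """Cyclic coordinate descent: each scalar subproblem has the closed-form
    non-negative update x_j = max(0, x_j - g_j / H_jj)."""
    AtA = A.T @ A
    Atb = A.T @ b
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_sweeps):
        for j in range(n):
            g_j = AtA[j] @ x - Atb[j]            # j-th component of the gradient
            if AtA[j, j] > 0:
                x[j] = max(0.0, x[j] - g_j / AtA[j, j])
    return x
```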

Applications

Matrix Decomposition Techniques

Non-negative least squares (NNLS) plays a central role in non-negative matrix factorization (NMF), a technique that decomposes a non-negative matrix V \in \mathbb{R}^{m \times n} into the product of two lower-rank non-negative matrices W \in \mathbb{R}^{m \times r} and H \in \mathbb{R}^{r \times n} such that V \approx WH, where r \ll \min(m, n). In the alternating optimization framework for NMF, one factor is fixed while NNLS is solved exactly for the other, ensuring monotonic convergence to a local minimum of the Frobenius norm objective \|V - WH\|_F^2. This approach, known as alternating non-negative least squares (ANLS), provides more accurate solutions than heuristic methods by directly optimizing the subproblems. Multiplicative update rules, originally proposed for NMF, serve as approximations to these NNLS steps, offering computational efficiency while preserving non-negativity through element-wise operations. These updates iteratively scale the factors to decrease the objective, though they may converge more slowly than exact NNLS solvers in ANLS. In practice, ANLS has been implemented in high-performance algorithms for large-scale NMF, demonstrating superior accuracy in applications requiring precise factorizations.

NNLS extends to higher-order tensor decompositions, particularly non-negative canonical polyadic decomposition (NCPD), which generalizes NMF to tensors by approximating a non-negative tensor \mathcal{T} as a sum of rank-1 non-negative outer products. In alternating least squares schemes for NCPD, NNLS solves for each factor matrix while fixing the others, enforcing non-negativity to maintain physical interpretability in fields like chemometrics, where negative components would be chemically implausible. This ensures decompositions align with constraints such as the Beer-Lambert law in spectroscopic analysis.

A prominent application is topic modeling in text analysis, where NMF with NNLS factorizes a document-term matrix to yield non-negative document-topic distributions (rows of W) and topic-term distributions (rows of H), enabling interpretable extraction of thematic structures without negative coefficients that could imply implausible subtractions. The non-negativity constraint in these decompositions promotes parts-based representations, avoiding holistic or subtractive artifacts and enhancing the additive interpretability of factors in real-world data.
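A minimal ANLS sketch, assuming scipy.optimize.nnls, alternates exact NNLS solves for the columns of H and the rows of W; the random initialization, fixed iteration count, and absence of a convergence check are simplifications for illustration.

```python
import numpy as np
from scipy.optimize import nnls

def nmf_anls(V, r, n_iter=50, seed=0):
    """Sketch of alternating non-negative least squares (ANLS) for V ~= W H.
    Each factor is updated by solving one NNLS problem per column of H or
    per row of W while the other factor is held fixed."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Update H: for each column j, min_h ||W h - V[:, j]||^2 with h >= 0.
        for j in range(n):
            H[:, j], _ = nnls(W, V[:, j])
        # Update W: for each row i, min_w ||H^T w - V[i, :]||^2 with w >= 0.
        for i in range(m):
            W[i, :], _ = nnls(H.T, V[i, :])
    return W, H
```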

Finance

Non-negative least squares is applied in portfolio optimization, where the goal is to minimize the variance of portfolio returns subject to expected return constraints and non-negative asset weights, reflecting the prohibition of short-selling in certain investment strategies. Formally, for an asset return covariance matrix \Sigma, expected returns \mu, and target return r, the problem is \min_w w^T \Sigma w subject to \mu^T w = r and w \geq 0, a quadratic program that can be cast in NNLS form after factoring \Sigma and incorporating the equality constraint via Lagrange multipliers, projection, or penalty terms. This approach ensures feasible, interpretable allocations that promote diversification without negative holdings, and is implemented in financial software for risk management.
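The sketch below shows one way such a problem can be pushed into NNLS form: the variance term is written as \|L^T w\|_2^2 via a Cholesky factor of \Sigma, and the expected return equality is enforced approximately through a heavily weighted penalty row. The covariance matrix, penalty weight, and function name are hypothetical; an exact treatment would use a general QP solver.

```python
import numpy as np
from scipy.optimize import nnls

def long_only_min_variance(Sigma, mu, target_return, penalty=1e4):
    """Sketch: w^T Sigma w = ||L^T w||^2 with Sigma = L L^T, so minimizing the
    variance subject to w >= 0 becomes an NNLS problem; the expected return
    constraint mu^T w = target_return is enforced only approximately by a
    heavily weighted penalty row."""
    n = len(mu)
    L = np.linalg.cholesky(Sigma)                      # requires Sigma positive definite
    A = np.vstack([L.T, penalty * mu.reshape(1, -1)])  # variance rows + penalty row
    b = np.concatenate([np.zeros(n), [penalty * target_return]])
    w, _ = nnls(A, b)
    return w

# Toy example with a synthetic 3-asset covariance matrix (illustrative only).
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.08, 0.03],
                  [0.01, 0.03, 0.12]])
mu = np.array([0.05, 0.07, 0.10])
w = long_only_min_variance(Sigma, mu, target_return=0.07)
print(w, mu @ w)   # non-negative weights whose expected return is close to 0.07
```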

Signal and Image Processing

In hyperspectral remote sensing, non-negative least squares (NNLS) plays a central role in spectral unmixing, where each pixel spectrum is modeled as a linear combination of predefined endmember signatures with non-negative abundance fractions that sum to unity, reflecting the physical constraints that materials cannot have negative proportions and that the fractions fully cover the pixel. Formally, for an observed pixel spectrum \mathbf{y} \in \mathbb{R}^L (with L spectral bands), endmember matrix \mathbf{A} \in \mathbb{R}^{L \times K} (with K endmembers), and abundance vector \boldsymbol{\alpha} \in \mathbb{R}^K, the model is \mathbf{y} = \mathbf{A} \boldsymbol{\alpha} + \boldsymbol{\epsilon}, solved subject to \boldsymbol{\alpha} \geq \mathbf{0} and \mathbf{1}^T \boldsymbol{\alpha} = 1 using NNLS after endmember extraction via methods like the pixel purity index or vertex component analysis. This approach yields geophysically interpretable results, such as estimates of mineral or vegetation fractions in hyperspectral data, and is a standard baseline in unmixing benchmarks due to its simplicity and effectiveness on linear mixtures.

NNLS also finds application in image deconvolution tasks, particularly for recovering non-negative intensity distributions from blurred observations in fields like astronomy and microscopy. In radio astronomy, the NNLS algorithm extends the CLEAN method by formulating image reconstruction as a constrained least-squares problem on the dirty image formed from interferometric visibilities, enforcing non-negativity to model positive source brightness without iterative component subtraction, which improves fidelity for compact sources like quasars compared to traditional CLEAN or maximum entropy methods. For instance, in the synthesis imaging of extended sources, NNLS reduces sidelobe artifacts and computational demands by directly inverting the point spread function matrix. In fluorescence microscopy, NNLS deconvolves lifetime imaging data by fitting a linear combination of exponential decays to observed signals via non-negative scaling coefficients, often regularized with \ell_1 or \ell_2 penalties to enhance resolution of subcellular structures while preserving positivity of fluorophore concentrations.

For sparse recovery in compressive sensing, thresholded variants of NNLS enforce both sparsity and non-negativity, making them suitable for reconstructing undersampled signals such as those in magnetic resonance imaging (MRI), where proton densities and relaxation times are inherently positive. In MRI, NNLS solves for sparse coefficient vectors in overcomplete dictionaries derived from wavelet or curvelet transforms, applied to undersampled data to recover images with reduced artifacts from non-uniform sampling, outperforming unconstrained least squares by leveraging physiological constraints for faster scans in clinical settings such as T2 mapping for myelin quantification. This is particularly impactful in dynamic contrast-enhanced MRI, where non-negativity prevents negative pixel values that lack physical meaning.

An illustrative example of NNLS in audio signal processing is the demixing of audio sources, where a monophonic recording is treated as a non-negative linear superposition of basis spectra from instruments or voices, solved to estimate mixing weights for separation. In low-cost drum recording setups, dual-channel inputs are unmixed using NNLS to isolate components such as the kick drum and snare by computing weights against a dictionary of spectral templates, enabling real-time transcription or enhancement without negative contributions that could introduce artifacts. This method scales well for underdetermined mixtures, providing interpretable separations in applications like music production.
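As a small illustration of linear unmixing, the sketch below estimates abundances for a single pixel with scipy.optimize.nnls, handling the sum-to-one constraint approximately by appending a weighted row of ones to the endmember matrix; the endmember spectra and weighting factor are synthetic and chosen only for demonstration.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(endmembers, y, delta=1e3):
    """Abundance estimation for one pixel: non-negativity is handled by NNLS,
    and the sum-to-one constraint is enforced approximately by appending a
    heavily weighted row of ones (a common augmentation trick)."""
    L, K = endmembers.shape               # L spectral bands, K endmembers
    A_aug = np.vstack([endmembers, delta * np.ones((1, K))])
    y_aug = np.concatenate([y, [delta]])
    alpha, _ = nnls(A_aug, y_aug)
    return alpha

# Toy example: 2 endmembers over 4 bands, pixel = 30% of one + 70% of the other.
E = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.3, 0.7],
              [0.1, 0.9]])
y_true = E @ np.array([0.3, 0.7])
alpha = unmix_pixel(E, y_true)
print(alpha, alpha.sum())                 # approximately [0.3, 0.7], summing to 1
```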
