
Projections onto convex sets

In mathematics, particularly in convex analysis and functional analysis, projections onto convex sets refer to the operation of finding, for a given point in a Hilbert space, the unique closest point within a closed convex subset, measured by the space's norm. This metric projection, often denoted P_C(x) for a point x and convex set C, satisfies the variational inequality that the vector from the projection to x forms a non-acute angle with every direction toward other points in C. The existence and uniqueness of such projections onto closed convex sets in Hilbert spaces follow from the completeness of the space and the strict convexity of the norm induced by the inner product, ensuring that the distance-minimization problem has a unique solution. These projections are firmly nonexpansive operators, meaning they are Lipschitz continuous with constant 1 and preserve certain geometric properties, which makes them valuable in iterative algorithms. A prominent application is the method of alternating projections, originally developed by John von Neumann in 1933 for closed subspaces, which iteratively applies projections onto multiple convex sets to converge to a point in their intersection. This approach was extended to general closed convex sets in the 1960s, notably through Lev Bregman's relaxation method, forming the basis of the projections onto convex sets (POCS) algorithm. POCS and related projection techniques have wide-ranging applications in optimization, signal processing, and image reconstruction, where they enforce constraints by iteratively projecting onto convex sets representing feasible solutions, such as bounded support or positivity conditions. In these contexts, the method's convergence rate, often linear under suitable conditions such as consistent constraints, enables efficient solutions to ill-posed problems.

Fundamentals

Definition

In a Hilbert space \mathcal{H} equipped with an inner product \langle \cdot, \cdot \rangle and the induced norm \|x\| = \sqrt{\langle x, x \rangle}, the metric projection (or simply projection) of a point x \in \mathcal{H} onto a nonempty closed convex set C \subseteq \mathcal{H} is defined as the point P_C(x) \in C that minimizes the distance to x, i.e., P_C(x) = \arg\min_{y \in C} \|x - y\|. This formulation relies on the geometry of the inner-product space, where the norm derives from the inner product, ensuring the projection captures the closest point in the Euclidean sense generalized to infinite dimensions. Equivalently, to emphasize the quadratic nature of the objective and facilitate analysis (e.g., via subdifferentials), the projection is often expressed using the squared norm: P_C(x) = \arg\min_{y \in C} \frac{1}{2} \|x - y\|^2. This norm-based definition extends naturally from finite-dimensional spaces \mathbb{R}^n (with the dot product) to arbitrary Hilbert spaces, including separable ones like L^2 spaces, without altering the core minimizing property. A simple example in \mathbb{R}^n is the projection onto a closed ball B = \{ y \in \mathbb{R}^n : \|y\| \leq r \} for r > 0: if \|x\| \leq r, then P_B(x) = x; otherwise, P_B(x) = r \frac{x}{\|x\|}. Another basic case is projection onto a closed half-space H = \{ y \in \mathbb{R}^n : \langle a, y \rangle \leq b \} for a nonzero vector a \in \mathbb{R}^n and scalar b \in \mathbb{R}: if x \in H, then P_H(x) = x; if x \notin H, then P_H(x) = x - \frac{\langle a, x \rangle - b}{\|a\|^2} a.
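The two closed-form examples above translate directly into code. The following minimal sketch (Python with NumPy; the function names are illustrative, not from a particular library) implements the ball and half-space projections:

```python
import numpy as np

def project_ball(x, r):
    """Project x onto the closed ball {y : ||y|| <= r}."""
    norm_x = np.linalg.norm(x)
    return x if norm_x <= r else (r / norm_x) * x

def project_halfspace(x, a, b):
    """Project x onto the closed half-space {y : <a, y> <= b}."""
    violation = a @ x - b
    return x if violation <= 0 else x - (violation / (a @ a)) * a

x = np.array([3.0, 4.0])
print(project_ball(x, 1.0))                              # [0.6 0.8]
print(project_halfspace(x, np.array([1.0, 0.0]), 1.0))   # [1. 4.]
```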

Geometric interpretation

In Euclidean space, the projection of a point x onto a closed convex set C is the unique point P_C(x) \in C that minimizes the distance \|x - z\| over z \in C. Geometrically, this corresponds to the foot of the perpendicular from x to the boundary of C at P_C(x), meaning the residual vector x - P_C(x) is normal to a supporting hyperplane of C at that point, analogous to orthogonal projections onto linear subspaces or affine sets. This ensures that no other point in C is closer to x, since any deviation along the set would increase the distance. Simple illustrations highlight this intuition. For a line segment, the projection of x is the closest point on the segment: if the foot of the perpendicular from x to the infinite line falls within the segment, it is that foot; otherwise, it is the nearer endpoint, representing the shortest distance to the set. For a convex polygon, if x lies inside, P_C(x) = x; if outside, the projection is the nearest boundary point (on an edge or at a vertex), with the connecting segment lying outside C except at the projection point. For a closed ball of radius r centered at the origin, the projection shrinks x radially: P_C(x) = x if \|x\| \leq r, or P_C(x) = (r / \|x\|) x otherwise, mapping to the sphere boundary along the ray from the center. The nonexpansiveness of the projection operator provides further geometric insight: \|P_C(x) - P_C(y)\| \leq \|x - y\| for any x, y, meaning distances do not expand under projection. As a visualization, consider two points outside a convex set such as a disk; their projections onto the boundary lie closer together or at the same distance, as the operator "pulls" points toward C without stretching separations between them, mapping exterior points to the boundary while fixing points of C. This effect underscores the stability of projections in geometric mappings.
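The nonexpansiveness inequality is easy to probe numerically. Below is a small check (a sketch; the disk radius and random sampling are illustrative choices) that the distance between projections onto a disk never exceeds the distance between the original points:

```python
import numpy as np

rng = np.random.default_rng(0)

def project_disk(x, r=1.0):
    """Radial projection onto the closed disk of radius r."""
    norm_x = np.linalg.norm(x)
    return x if norm_x <= r else (r / norm_x) * x

# ||P(x) - P(y)|| <= ||x - y|| should hold for every pair of points.
for _ in range(1000):
    x = rng.normal(scale=2.0, size=2)
    y = rng.normal(scale=2.0, size=2)
    assert (np.linalg.norm(project_disk(x) - project_disk(y))
            <= np.linalg.norm(x - y) + 1e-12)
```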

Theoretical properties

Existence and uniqueness

In Hilbert spaces, the projection onto a closed convex set is guaranteed to exist and to be unique, forming a cornerstone of convex analysis. Specifically, let H be a Hilbert space and C \subseteq H a nonempty closed convex set. For every x \in H, there exists a unique P_C x \in C such that \| x - P_C x \| = \inf_{y \in C} \| x - y \|. This P_C x is called the metric projection of x onto C. The existence follows from the completeness and inner-product structure of Hilbert spaces. Let d = \inf_{y \in C} \| x - y \|. Select a minimizing sequence \{ y_n \} \subset C such that \| x - y_n \| \to d. The parallelogram identity in Hilbert spaces states that for any u, v \in H, \| u - v \|^2 = 2 \| u \|^2 + 2 \| v \|^2 - 4 \left\| \frac{u + v}{2} \right\|^2. Applying this to u = x - y_n and v = x - y_m, and noting that convexity ensures \frac{y_n + y_m}{2} \in C so \left\| x - \frac{y_n + y_m}{2} \right\| \geq d, yields \| y_n - y_m \|^2 \leq 2 \| x - y_n \|^2 + 2 \| x - y_m \|^2 - 4 d^2. As n, m \to \infty, the right-hand side tends to zero, so \{ y_n \} is Cauchy. By completeness of H, y_n \to y for some y \in H. Closedness of C implies y \in C, and continuity of the norm gives \| x - y \| = d. An alternative proof uses weak compactness: the sequence \{ y_n \} is bounded (since \| y_n \| \leq \| x \| + \| x - y_n \| \to \| x \| + d), so closedness and convexity of C ensure weak compactness of a closed ball intersected with C; the distance function, being convex and continuous and hence weakly lower semicontinuous, then attains its minimum. Uniqueness stems from the strict convexity of the squared-distance function y \mapsto \frac{1}{2} \| x - y \|^2 on the convex set C, itself a consequence of the parallelogram identity. Suppose y_1, y_2 \in C both minimize the distance to x. Then \| y_1 - y_2 \|^2 = 2 \| x - y_1 \|^2 + 2 \| x - y_2 \|^2 - 4 \left\| x - \frac{y_1 + y_2}{2} \right\|^2 \leq 4 d^2 - 4 d^2 = 0, so y_1 = y_2. These properties hold precisely because C is closed and convex. If C is convex but not closed, existence can fail; for example, in \mathbb{R} (a Hilbert space), the open interval C = (0, 1) and x = 0 yield infimum distance 0, but no minimizer in C. If C is closed but not convex, uniqueness can fail; for instance, in \mathbb{R}, C = \{-1, 1\} and x = 0 have two closest points, both at distance 1.

Characterization via variational inequalities

The projection P_C(x) of a point x onto a nonempty closed convex set C in a real Hilbert space satisfies the variational inequality \langle x - P_C(x), y - P_C(x) \rangle \leq 0 \quad \forall \, y \in C, where \langle \cdot, \cdot \rangle denotes the inner product. This characterization arises from the first-order optimality condition for the convex minimization problem defining the projection: P_C(x) = \arg\min_{y \in C} \frac{1}{2} \| x - y \|^2. The subdifferential of the objective function at P_C(x) contains zero when combined with the normal cone to C at that point, yielding x - P_C(x) \in N_C(P_C(x)), where N_C(p) = \{ v \mid \langle v, y - p \rangle \leq 0 \ \forall \, y \in C \} is the normal cone to C at p. The variational inequality is thus equivalent to the normal-cone inclusion x - P_C(x) \in N_C(P_C(x)), providing an analytical tool for verifying projections without direct computation. An important consequence is that the projection operator P_C is firmly nonexpansive, satisfying \| P_C(x) - P_C(z) \|^2 + \| (x - P_C(x)) - (z - P_C(z)) \|^2 \leq \| x - z \|^2 \quad \forall \, x, z in the space. To see this, apply the variational inequality at both points: \langle x - P_C(x), P_C(z) - P_C(x) \rangle \leq 0, equivalently \langle x - P_C(x), P_C(x) - P_C(z) \rangle \geq 0, and \langle z - P_C(z), P_C(x) - P_C(z) \rangle \leq 0. Subtracting the second from the first yields \langle x - z, P_C(x) - P_C(z) \rangle \geq \| P_C(x) - P_C(z) \|^2, which rearranges to the firm nonexpansiveness inequality. For verification, consider the half-space C = \{ y \mid \langle a, y \rangle \leq b \} with \|a\| = 1. The projection is P_C(x) = x if \langle a, x \rangle \leq b, and P_C(x) = x - (\langle a, x \rangle - b) a otherwise. In the latter case, x - P_C(x) = (\langle a, x \rangle - b) a, and the variational inequality reduces to \langle a, y - P_C(x) \rangle \leq 0 for all y \in C, which holds since \langle a, y \rangle \leq b = \langle a, P_C(x) \rangle. This confirms the sign condition aligning the residual with the bounding hyperplane.
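The half-space verification above can also be checked numerically. The sketch below (the specific set and the random sampling of feasible points are illustrative assumptions) confirms the sign condition \langle x - P_C(x), y - P_C(x) \rangle \leq 0:

```python
import numpy as np

rng = np.random.default_rng(1)
a = np.array([1.0, 1.0]) / np.sqrt(2.0)   # unit normal, ||a|| = 1
b = 0.5

def project_halfspace(x):
    """P_C(x) for C = {y : <a, y> <= b} with ||a|| = 1."""
    return x if a @ x <= b else x - (a @ x - b) * a

x = np.array([2.0, 2.0])
p = project_halfspace(x)

# Variational inequality: <x - p, y - p> <= 0 for all y in C.
for _ in range(1000):
    y = project_halfspace(rng.normal(scale=3.0, size=2))  # feasible sample
    assert (x - p) @ (y - p) <= 1e-10
```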

Computational methods

Iterative projection algorithms

Iterative projection algorithms provide a foundational approach for computing projections onto the intersection of multiple sets in Hilbert spaces by repeatedly applying projection operators. The method of alternating projections (MAP), originally developed for closed subspaces, involves iteratively applying the composition of projections onto two sets, C_1 and C_2, starting from an initial point x^0: x^{2k+1} = P_{C_1}(x^{2k}) and x^{2k+2} = P_{C_2}(x^{2k+1}) for k \geq 0. Von Neumann established the convergence of this method to a point in the intersection C_1 \cap C_2 for closed subspaces in Hilbert spaces, with the sequence converging in norm at a linear rate determined by the cosine of the Friedrichs angle between the subspaces. This result was extended by Israel Halperin to finitely many closed subspaces; for general closed convex sets, the iterates converge weakly to a point in the intersection under the assumption that C_1 \cap C_2 is nonempty. For scenarios involving more than two sets, the projections onto convex sets (POCS) algorithm generalizes the scheme by cyclically applying projections onto a finite collection of closed convex sets \{C_i\}_{i=1}^m in a periodic manner: x^{k+1} = P_{C_{(k \bmod m) + 1}}(x^k). If the intersection \bigcap_{i=1}^m C_i is nonempty, the POCS iterates converge to a point in this intersection, as established in extensions of Halperin's theorem to multiple sets. This cyclic approach maintains the simplicity of MAP while handling feasibility problems with multiple constraints, assuming the projections are well-defined and the space is Hilbert. Convergence properties of these iterative methods vary with the nature of the sets. For closed subspaces, both MAP and POCS exhibit linear convergence, satisfying error bounds of the form \|x^{k+1} - x^*\| \leq \rho \|x^k - x^*\| with \rho < 1, where x^* is the limit point and \rho depends on the principal angles between the subspaces. In contrast, for general closed convex sets, the convergence is typically sublinear, with rates on the order of O(1/k) for the distance to the intersection, reflecting the slower asymptotic behavior of projections onto sets that are not affine. These rates hold under consistency assumptions, where the intersection is nonempty; otherwise, the methods may fail to converge to a meaningful solution without modification. Dykstra's algorithm addresses the computation of the exact projection onto the intersection \bigcap_{i=1}^m C_i by incorporating correction steps into the alternating-projections framework; it is particularly useful because standard MAP or POCS converges to an arbitrary point in the intersection rather than the closest point to the initial estimate. The procedure maintains auxiliary dual variables \{u_i\}_{i=1}^m, initialized to zero, and updates as follows: in each cycle over the sets, project onto C_i after adding the correction u_i to the current iterate, then set u_i \leftarrow y - P_{C_i}(y), where y is the input to that projection. This ensures convergence to P_{\bigcap C_i}(x^0), the metric projection of the starting point x^0 onto the intersection, with the dual variables capturing the necessary adjustments. The primal sequence converges at a rate similar to MAP, but the algorithm guarantees the desired projection property.
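The following sketch contrasts plain alternating projections with Dykstra's corrected variant for two convex sets in the plane (a unit disk and a half-plane; the sets, iteration count, and function names are illustrative assumptions):

```python
import numpy as np

def proj_disk(x, r=1.0):
    """Projection onto the closed disk of radius r."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def proj_halfplane(x):
    """Projection onto C2 = {y : y[0] >= 0.5}."""
    return np.array([max(x[0], 0.5), x[1]])

def alternating_projections(x, iters=200):
    """MAP: converges to *some* point of C1 ∩ C2."""
    for _ in range(iters):
        x = proj_halfplane(proj_disk(x))
    return x

def dykstra(x, iters=200):
    """Dykstra: converges to the projection of the start point onto C1 ∩ C2."""
    p = np.zeros_like(x)   # correction for C1
    q = np.zeros_like(x)   # correction for C2
    for _ in range(iters):
        y = proj_disk(x + p)
        p = (x + p) - y
        x_new = proj_halfplane(y + q)
        q = (y + q) - x_new
        x = x_new
    return x

x0 = np.array([-1.0, 2.0])
print(alternating_projections(x0))  # a feasible point
print(dykstra(x0))                  # the feasible point closest to x0
```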

Specialized algorithms for specific sets

Projections onto specific convex sets often admit efficient, tailored algorithms that exploit the underlying structure, enabling closed-form solutions or low-complexity computations in many cases. These methods are particularly valuable in high-dimensional settings where general-purpose solvers may be inefficient. Common examples include simplices, \ell_p balls, polyhedra, and ellipsoids, each with specialized techniques derived from the geometry of the set. For the simplex, particularly the probability simplex \Delta^{n-1} = \{ x \in \mathbb{R}^n_{\geq 0} : \sum_{i=1}^n x_i = 1 \}, the Euclidean projection of a vector y \in \mathbb{R}^n can be computed using a sorting-based algorithm. This approach, introduced by Michelot, involves sorting the components of y in decreasing order as y_{(1)} \geq \cdots \geq y_{(n)}, then finding the largest index k such that the threshold \theta = \frac{1}{k} \left( \sum_{i=1}^k y_{(i)} - 1 \right) satisfies y_{(k)} > \theta. The projection x is then given by x_i = \max(y_i - \theta, 0) for each i, ensuring the result lies on the simplex (a code sketch is given below). This finite algorithm runs in O(n \log n) time, dominated by the sort, and has been widely adopted for its simplicity and exactness. Alternatively, the problem can be reformulated as a quadratic program and solved via standard solvers, though the direct method is preferred for efficiency. Projections onto \ell_p balls, defined as B_p(r) = \{ x \in \mathbb{R}^n : \|x\|_p \leq r \} for p \geq 1 and radius r > 0, also benefit from structure-specific methods. For p = 1, the \ell_1 ball (a cross-polytope, or diamond in low dimensions), an efficient algorithm projects by sorting the absolute values of the components and searching over thresholds, akin to the simplex case but adjusted for the \ell_1 geometry. This yields an exact solution in O(n \log n) time, with linear-time variants available under certain assumptions. For p = \infty, the \ell_\infty ball (a hypercube), the projection is simply component-wise clipping: x_i = \operatorname{sgn}(y_i) \min(|y_i|, r), which is O(n) and closed-form. For general 1 < p < \infty, no simple closed form exists, but the projection can be computed via iterative soft-thresholding or dual optimization methods, such as solving a univariate Lagrangian dual with Newton's method, achieving high accuracy in few iterations even in high dimensions. For polyhedra, defined by linear inequalities Ax \leq b with A \in \mathbb{R}^{m \times n}, the projection is a quadratic program that can be solved using active-set methods, which iteratively identify and refine the binding constraints. These methods exploit sparsity in A for efficiency, solving a dual problem via linear systems and achieving practical performance in moderate dimensions. Alternatively, interior-point or simplex-based solvers can be adapted, though they may require more iterations. For certain structured polyhedra, such as the simplex and related polytopes, specialized algorithms achieve O(n \log n) complexity by leveraging combinatorial properties. Near-closed-form projections are available for sets defined by quadratic constraints, such as ellipsoids \mathcal{E} = \{ x \in \mathbb{R}^n : (x - c)^T Q (x - c) \leq 1 \} where Q \succ 0 is positive definite. If y \in \mathcal{E}, the projection is y itself; otherwise, it lies on the boundary and is obtained by solving for a scalar \lambda \geq 0 the equation (p - c)^T Q (p - c) = 1 with p = (I + \lambda Q)^{-1} (y + \lambda Q c), ensuring the constraint holds with equality.
This requires solving linear systems with I + \lambda Q, which can be done efficiently via an eigendecomposition (or Cholesky factorization) of Q computed once, followed by a one-dimensional root-finding step (e.g., bisection or Newton's method) to determine \lambda. The overall computation scales as O(n^3) for the decomposition, which is reusable across projections. Numerical stability is a key consideration for these algorithms, especially in high dimensions. For the linear solves in ellipsoid projections, an ill-conditioned Q can amplify errors; using Cholesky factorization mitigates this by avoiding explicit inverses and preserving positive definiteness. Similarly, sorting-based methods for simplices and \ell_1 balls are robust but require careful handling of ties in floating-point arithmetic to prevent inconsistencies. When closed-form methods are unavailable or unstable, iterative projection techniques serve as reliable fallbacks.
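As a concrete illustration of the sorting-based simplex projection described above, here is a short sketch (Python; written for the unit-sum probability simplex, with illustrative function names):

```python
import numpy as np

def project_simplex(y):
    """Euclidean projection of y onto {x >= 0 : sum(x) = 1} via sorting."""
    u = np.sort(y)[::-1]                 # y_(1) >= ... >= y_(n)
    cssv = np.cumsum(u) - 1.0            # cumulative sums minus the target sum
    ks = np.arange(1, len(y) + 1)
    k = ks[u - cssv / ks > 0][-1]        # largest k with y_(k) > theta_k
    theta = cssv[k - 1] / k
    return np.maximum(y - theta, 0.0)

y = np.array([0.5, 1.2, -0.3])
x = project_simplex(y)
print(x, x.sum())   # [0.15 0.85 0.  ] 1.0
```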
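And one possible realization of the ellipsoid projection via one-dimensional root finding, here with plain bisection (a sketch; the bracket-growing loop and tolerance are arbitrary choices, and an eigendecomposition of Q could replace the repeated linear solves):

```python
import numpy as np

def project_ellipsoid(y, Q, c, tol=1e-10):
    """Project y onto {x : (x - c)^T Q (x - c) <= 1}, Q positive definite."""
    if (y - c) @ Q @ (y - c) <= 1.0:
        return y
    n = len(y)
    def boundary_point(lam):
        # p(lambda) = (I + lambda Q)^{-1} (y + lambda Q c)
        return np.linalg.solve(np.eye(n) + lam * Q, y + lam * (Q @ c))
    def g(lam):
        # Constraint value minus 1; strictly decreasing in lambda.
        p = boundary_point(lam)
        return (p - c) @ Q @ (p - c) - 1.0
    lo, hi = 0.0, 1.0
    while g(hi) > 0.0:          # grow the bracket until the sign changes
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
    return boundary_point(0.5 * (lo + hi))

Q = np.diag([4.0, 1.0])   # ellipse with semi-axes 0.5 and 1.0
print(project_ellipsoid(np.array([2.0, 0.0]), Q, np.zeros(2)))  # ~[0.5 0. ]
```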

Applications

In optimization problems

Projections onto convex sets play a central role in first-order methods for solving constrained convex problems. In these methods, the objective is typically of the form \min_x f(x) + I_C(x), where f is convex and differentiable, I_C is the indicator function of a closed convex set C (zero on C and +\infty otherwise), and the proximal operator of I_C is precisely the Euclidean projection onto C, defined as \Pi_C(x) = \arg\min_{u \in C} \|u - x\|_2^2. This allows the constrained problem to be handled by proximal steps, as in the iterative shrinkage-thresholding algorithm (ISTA), which applies a gradient step on f followed by projection onto C. For faster convergence, the fast iterative shrinkage-thresholding algorithm (FISTA) incorporates momentum-based acceleration, achieving O(1/k^2) rates while retaining projection as the key operation for enforcing constraints. The alternating direction method of multipliers (ADMM) further leverages projections to decompose complex convex problems into simpler subproblems. For objectives like \min_{x,z} f(x) + g(z) subject to Ax + Bz = c, ADMM alternates scaled dual updates with proximal minimizations, where projections onto individual sets arise whenever f or g is the indicator of a convex set. Each iteration then involves projecting onto these sets, facilitating parallel or distributed computation, with global convergence to primal and dual optima guaranteed under conditions like convexity and constraint qualification. In duality, projections onto feasible sets enable efficient solutions to dual problems, particularly in high dimensions: the dual of a projection problem onto a polyhedral set can be formulated as a quadratic program with fewer variables, solvable via projections or related operators, reducing the computational burden when the number of constraints is small compared to the ambient dimension. A representative example is sparse regression under \ell_1 constraints, such as \min_x \|Ax - b\|_2^2 subject to \|x\|_1 \leq 1, where projected gradient methods use projection onto the \ell_1 ball to sparsify solutions. For non-negative sparse regression, as in topic modeling or portfolio allocation, projection onto the probability simplex \{x \geq 0 : \sum_i x_i = 1\} enforces non-negativity and normalization in each iteration. For structured convex sets such as the simplex or the \ell_1 ball, projections admit efficient algorithms with O(n) or O(n \log n) complexity in the dimension n, so the projection step does not dominate the per-iteration cost even in large-scale accelerated schemes.
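A minimal projected-gradient sketch for the \ell_1-constrained least-squares example above (Python; the step size 1/L with L the gradient's Lipschitz constant, the synthetic data, and the function names are illustrative assumptions):

```python
import numpy as np

def project_l1_ball(y, r=1.0):
    """Euclidean projection onto {x : ||x||_1 <= r} via sorting."""
    if np.abs(y).sum() <= r:
        return y
    u = np.sort(np.abs(y))[::-1]
    cssv = np.cumsum(u) - r
    ks = np.arange(1, len(y) + 1)
    k = ks[u - cssv / ks > 0][-1]
    theta = cssv[k - 1] / k
    return np.sign(y) * np.maximum(np.abs(y) - theta, 0.0)

def projected_gradient(A, b, r=1.0, iters=500):
    """Minimize ||Ax - b||^2 subject to ||x||_1 <= r."""
    x = np.zeros(A.shape[1])
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)   # 1/L for this objective
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ x - b)
        x = project_l1_ball(x - step * grad, r)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 50))
x_true = project_l1_ball(rng.normal(size=50), 1.0)   # sparse ground truth
b = A @ x_true + 0.01 * rng.normal(size=20)
x_hat = projected_gradient(A, b)
print(np.count_nonzero(np.round(x_hat, 6)))          # few nonzero entries
```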

In signal and image processing

In tomography, projections onto convex sets (POCS) address inconsistent systems arising from incomplete or noisy projection data by iteratively enforcing consistency with measured projections while incorporating convex constraints such as non-negativity of the image intensity. This approach reconstructs images that satisfy both data fidelity and physical priors, improving reconstruction quality in computed tomography (CT), where limited-angle or sparse projections lead to artifacts. For example, the method projects the current estimate onto the convex set defined by consistency with the Radon-transform measurements and onto the non-negativity half-space, converging to a feasible solution that mitigates the streak artifacts common in underdetermined problems. In denoising applications, POCS projects signals onto convex sets representing wavelet-coefficient thresholds or total variation (TV) balls to suppress noise while preserving edges and structural details. By iteratively alternating projections onto the data-consistency set (e.g., a blurred and noisy observation) and sparsity-promoting sets in a transform domain, the method achieves effective noise removal without over-smoothing, as developed in parallel subgradient projection frameworks during the 1990s. This is particularly useful for images corrupted by additive noise, where the convex formulation ensures global optimality within the feasible set. Super-resolution leverages iterative POCS to enhance resolution from multiple low-resolution observations by projecting onto the set of images consistent with the observed low-resolution data (accounting for motion and blur) and onto prior-knowledge sets such as smoothness or edge-preserving constraints in the high-resolution domain. The Irani-Peleg method exemplifies this by using back-projection of simulation errors to refine the high-resolution estimate, effectively alternating between data-fidelity and regularization sets to recover fine details lost in downsampling; it handles subpixel shifts and is robust to registration errors. A prominent example is magnetic resonance imaging (MRI) reconstruction, where POCS iteratively projects onto the convex set of measured data (k-space consistency) and onto sparsity-promoting sets, such as wavelet or TV domains, to recover undersampled images while enforcing data fidelity and reducing Gibbs ringing. In partial Fourier acquisitions, this enforces Hermitian symmetry and sparsity to fill in missing k-space lines, yielding artifact-free images suitable for clinical diagnostics. Such methods are especially valuable in accelerated MRI protocols, balancing scan-time reduction with maintained diagnostic quality. In recent years, POCS has been integrated with deep learning techniques to further improve performance in MRI reconstruction; for example, POCS-augmented CycleGAN frameworks combine iterative projections with generative adversarial networks to enhance undersampled MRI images, achieving better noise suppression and detail preservation. Additionally, as of 2023, POCS continues to find applications in seismic data reconstruction, demonstrating its ongoing utility in handling inconsistent datasets.
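To make the alternating-projection pattern concrete, here is a toy Papoulis-Gerchberg-style POCS reconstruction of a band-limited signal from a random subset of its samples (a sketch; the signal model, bandwidth, and sampling rate are illustrative assumptions, not a clinical MRI pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a real, band-limited test signal from a few low frequencies.
n, bandwidth = 128, 8
idx = np.arange(1, bandwidth + 1)
spectrum = np.zeros(n, dtype=complex)
spectrum[idx] = rng.normal(size=bandwidth) + 1j * rng.normal(size=bandwidth)
spectrum[-idx] = np.conj(spectrum[idx])    # Hermitian symmetry => real signal
true = np.fft.ifft(spectrum).real

mask = rng.random(n) < 0.4                 # observe ~40% of the samples
measured = true[mask]

def project_bandlimited(x):
    """Convex set 1: signals whose spectrum is supported on low frequencies."""
    F = np.fft.fft(x)
    keep = np.zeros(n, dtype=bool)
    keep[0] = True
    keep[idx] = True
    keep[-idx] = True
    F[~keep] = 0.0
    return np.fft.ifft(F).real

def project_data(x):
    """Convex set 2: signals that agree with the measured samples."""
    x = x.copy()
    x[mask] = measured
    return x

x = np.zeros(n)
for _ in range(200):
    x = project_data(project_bandlimited(x))
print(np.linalg.norm(x - true))            # residual shrinks toward zero
```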
