Convex analysis

Convex analysis is a branch of mathematics dedicated to the study of convex sets, convex functions, and associated structures in real vector spaces, providing foundational tools for optimization, duality, and the geometric properties that make efficient problem-solving possible. It encompasses concepts like subgradients, conjugate functions, and recession cones, which generalize classical analysis to non-smooth settings while preserving key inequalities such as Jensen's inequality. The field was formalized by R. Tyrrell Rockafellar in his 1970 book Convex Analysis, building on earlier work by Werner Fenchel and others to create a coherent framework for nonlinear problems.

Central to convex analysis are convex sets, defined as subsets C of a real vector space where, for any x, y \in C and \lambda \in [0,1], the point \lambda x + (1-\lambda)y \in C, ensuring that line segments between points lie entirely within the set. Examples include half-spaces, polyhedra, and balls in norms such as the Euclidean norm. Convex functions extend this notion, satisfying f(\lambda x + (1-\lambda)y) \leq \lambda f(x) + (1-\lambda) f(y) for \lambda \in [0,1] and points in the domain, with their epigraphs (the sets of points on or above the graph) forming convex sets. Properties such as lower semicontinuity and properness (never -\infty and not identically +\infty) are standard assumptions, enabling tools like the subdifferential \partial f(x) = \{u \mid f(y) \geq f(x) + \langle u, y - x \rangle \ \forall y\} for characterizing minima in non-differentiable cases.

Convex analysis is pivotal in convex optimization, where minimizing a convex objective over a convex feasible set guarantees that any local optimum is global and supports algorithms like interior-point methods and subgradient descent. Notable applications include sparse signal recovery through basis pursuit in compressed sensing and support vector machines for classification in machine learning, both leveraging duality theorems such as Fenchel-Rockafellar duality under conditions like Slater's qualification.

Fundamental Concepts

Convex sets

In convex analysis, a convex set is defined as a subset C of a real vector space V such that for any two points x, y \in C and any \lambda \in [0, 1], the convex combination \lambda x + (1 - \lambda) y also belongs to C. This geometric condition ensures that the entire line segment joining any two points in the set lies within the set. Convex sets form the foundational building blocks of convex analysis, as they capture the intuitive notion of "no dents" or "roundness" in higher dimensions.

Common examples of convex sets include balls, defined as \{ x \in \mathbb{R}^n \mid \|x - x_c\|_2 \leq r \} for center x_c and radius r > 0, which are bounded and symmetric. Polyhedra arise as intersections of finitely many half-spaces, such as \{ x \mid A x \leq b \} where A \in \mathbb{R}^{m \times n} and b \in \mathbb{R}^m, encompassing shapes like cubes and simplices. Half-spaces themselves, given by \{ x \mid a^T x \leq b \} with a \neq 0, are unbounded convex sets that encode linear inequalities. Simplices, the convex hulls of n+1 affinely independent points in \mathbb{R}^n, are the simplest bounded polyhedra, such as triangles in \mathbb{R}^2. In contrast, non-examples include the boundary of the unit circle \{ x \in \mathbb{R}^2 \mid \|x\|_2 = 1 \}, where the line segment between points like (1,0) and (0,1) exits the set.

Key properties of convex sets include closure under certain operations. The intersection of any collection (finite or infinite) of convex sets is convex, allowing complex shapes to be built from simpler ones like half-spaces. The convex hull of a set S, denoted \operatorname{conv}(S), is the smallest convex set containing S and consists of all convex combinations of points in S; it is always convex. Affine transformations preserve convexity: if C is convex and f(x) = A x + b is affine, then f(C) is convex.

The relative interior of a convex set C, denoted \operatorname{ri}(C), consists of points x \in C such that there exists \epsilon > 0 with the ball B_\epsilon(x) intersected with the affine hull of C contained in C; it is nonempty for any nonempty convex set in finite dimensions and captures the "interior" relative to the set's dimension. The closure \operatorname{cl}(C), the smallest closed set containing C, is convex and satisfies \operatorname{cl}(\operatorname{ri}(C)) = \operatorname{cl}(C), aiding topological arguments in finite dimensions. These concepts ensure robustness in applications, such as separation theorems.

A fundamental result is the Krein-Milman theorem, which states that a compact convex subset A of a locally convex Hausdorff topological vector space is the closed convex hull of its extreme points, the points in A that cannot be expressed as nontrivial convex combinations of other points in A. This theorem highlights how compact convex sets are generated by their "corners," with examples including the vertices of a polytope such as a cube or simplex.
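The convex hull description lends itself to computation: a point lies in \operatorname{conv}(S) for a finite S exactly when a feasible convex combination exists, which is a linear feasibility problem. Below is a minimal sketch assuming NumPy and SciPy are available; the helper in_convex_hull is illustrative, not a library routine.

```python
# Testing whether a point lies in the convex hull of finitely many points
# by searching for a feasible convex combination via a linear program.
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(points, p):
    """Return True if p is a convex combination of the rows of `points`."""
    k, n = points.shape
    # Equality constraints: sum_i lam_i * v_i = p  and  sum_i lam_i = 1.
    A_eq = np.vstack([points.T, np.ones((1, k))])
    b_eq = np.append(p, 1.0)
    # Zero objective: only feasibility matters; bounds enforce lam_i >= 0.
    res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * k)
    return res.success

square = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(in_convex_hull(square, np.array([0.5, 0.5])))  # True: inside the square
print(in_convex_hull(square, np.array([1.5, 0.5])))  # False: outside
```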

Convex functions

A convex function is a real-valued function f: C \to \mathbb{R} defined on a convex subset C of a real vector space V, satisfying the inequality f(\lambda x + (1-\lambda)y) \leq \lambda f(x) + (1-\lambda) f(y) for all x, y \in C and \lambda \in [0,1]. This condition implies that the function lies below its chords, capturing the intuitive notion of "bowl-shaped" behavior. The domain C must itself be convex to ensure the convex combination \lambda x + (1-\lambda)y remains in the domain. A function f is strictly convex if the inequality is strict for \lambda \in (0,1) and x \neq y. Strict convexity strengthens the geometric interpretation, ensuring that the function lies strictly below its chords except at the endpoints.

An important consequence is Jensen's inequality, which states that for a convex function f and weights \lambda_i \geq 0 summing to 1, f\left(\sum_i \lambda_i x_i\right) \leq \sum_i \lambda_i f(x_i). For strictly convex functions, the inequality is strict unless all x_i are equal. This finite form extends to expectations in probability, where \mathbb{E}[f(X)] \geq f(\mathbb{E}[X]) for convex f.

The epigraph of f, denoted \operatorname{epi} f = \{(x, t) \in V \times \mathbb{R} \mid t \geq f(x)\}, is the set of points on or above the graph of f. A function f is convex if and only if its epigraph is a convex set. This equivalence provides a geometric link between convex functions and convex sets, allowing function properties to be analyzed through set convexity.

Common examples include norms \| \cdot \|, which satisfy \|\lambda x + (1-\lambda)y\| \leq \lambda \|x\| + (1-\lambda) \|y\| by the triangle inequality and are therefore convex; no norm is strictly convex as a function, however, since norms are linear along rays through the origin (the Euclidean norm has a strictly convex unit ball, while the \ell_1 norm does not). Quadratic forms f(x) = x^\top A x are convex if A is positive semidefinite, and strictly convex if A is positive definite. The indicator function of a convex set C, defined as I_C(x) = 0 if x \in C and +\infty otherwise, is convex, extending the notion to extended real-valued functions.

Convex functions exhibit strong regularity properties. On an open convex subset of \mathbb{R}^n, a finite convex function is continuous; more precisely, it is locally Lipschitz continuous on the relative interior of its domain, which aids numerical and analytical treatments.

Convexity is preserved under certain compositions. If f is convex and g(x) = f(Ax + b) for a matrix A and vector b, then g is convex, as affine transformations preserve convex combinations. Additionally, if f is convex and h: \mathbb{R} \to \mathbb{R} is convex and non-decreasing, then h \circ f is convex, since the non-decreasing h preserves the chord inequality and its own convexity keeps the composite below its chords.
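These characterizations can be probed numerically. The sketch below, assuming NumPy (the helper chord_test is illustrative), samples random chords to look for violations of the defining inequality, a necessary but not sufficient check, and certifies a quadratic form via the eigenvalues of its matrix.

```python
# Randomized chord test of convexity plus an eigenvalue certificate
# for quadratic forms x^T A x.
import numpy as np

def chord_test(f, dim, trials=1000, seed=0):
    """Randomized (necessary, not sufficient) test of the chord inequality."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.normal(size=dim), rng.normal(size=dim)
        lam = rng.uniform()
        if f(lam * x + (1 - lam) * y) > lam * f(x) + (1 - lam) * f(y) + 1e-9:
            return False
    return True

A = np.array([[2.0, 0.5], [0.5, 1.0]])            # symmetric matrix
print(np.all(np.linalg.eigvalsh(A) >= 0))         # True: A is PSD, so x^T A x is convex
print(chord_test(lambda x: x @ A @ x, dim=2))     # True on all sampled chords
print(chord_test(lambda x: -np.sum(x**2), dim=2)) # False: a concave function fails
```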

Duality Theory

Convex conjugate

The convex conjugate, also known as the Legendre-Fenchel transform, of a function f: X \to \mathbb{R} \cup \{+\infty\}, where X is a real normed (or, more generally, locally convex) space and Y its topological dual, is defined as f^*(y) = \sup_{x \in X} \left( \langle y, x \rangle - f(x) \right), \quad y \in Y. This supremum may be finite or infinite, and for proper f the conjugate maps Y to \mathbb{R} \cup \{+\infty\}. The effective domain of f^* is the set \{ y \in Y \mid f^*(y) < +\infty \}, which characterizes where the conjugate takes finite values; for instance, f^*(0) = -\inf_x f(x), so f^* is finite at the origin precisely when f is bounded below.

Regardless of whether the original function f is convex, the conjugate f^* is always convex, being a pointwise supremum of the affine functions y \mapsto \langle y, x \rangle - f(x). Moreover, f^* is lower semicontinuous with respect to the weak-* topology on Y, providing a canonical convex lower semicontinuous object even for nonconvex f. A fundamental consequence is the Fenchel-Young inequality, which states that for all x \in X and y \in Y, f(x) + f^*(y) \geq \langle y, x \rangle. Equality holds if and only if x achieves the supremum in the definition of f^*(y), meaning \langle y, x \rangle - f(x) = f^*(y).

Illustrative examples highlight the conjugate's role in duality. For a norm \|\cdot\| on X, if f(x) = \|x\|, the conjugate is f^*(y) = 0 if \|y\|_* \leq 1 and +\infty otherwise, where \|y\|_* is the dual norm defined by \|y\|_* = \sup_{\|x\| \leq 1} \langle y, x \rangle; this is the indicator function of the unit ball of the dual norm. Conversely, the dual norm itself is the conjugate of the indicator function of the primal unit ball. Similarly, for the indicator function \delta_C(x) of a convex set C \subseteq X, where \delta_C(x) = 0 if x \in C and +\infty otherwise, the conjugate is the support function \sigma_C(y) = \sup_{x \in C} \langle y, x \rangle.
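The norm example can be reproduced numerically with a discrete version of the supremum. A minimal sketch, assuming NumPy and using a truncated grid as a stand-in for the supremum over \mathbb{R}: the computed values of f^* for f(x) = |x| vanish on [-1, 1] and grow with the grid radius outside it, approximating the indicator of the dual unit ball.

```python
# Grid-based Legendre-Fenchel transform of f(x) = |x|: the conjugate is 0
# for |y| <= 1 and grows without bound (approximating +infinity) for |y| > 1.
import numpy as np

x = np.linspace(-50, 50, 20001)   # wide grid mimicking the supremum over R
f = np.abs(x)

def conjugate(y):
    return np.max(x * y - f)      # f*(y) = sup_x <y, x> - f(x), discretized

for y in [0.0, 0.5, 1.0, 1.5]:
    print(f"f*({y}) ~ {conjugate(y):.1f}")
# f*(0.0) ~ 0.0, f*(0.5) ~ 0.0, f*(1.0) ~ 0.0, f*(1.5) ~ 25.0 (grows with the grid radius)
```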

Subdifferential

The subdifferential of a convex function f: \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\} at a point x \in \operatorname{dom} f is defined as the set \partial f(x) = \{ y \in \mathbb{R}^n \mid f(z) \geq f(x) + \langle y, z - x \rangle \ \forall z \in \operatorname{dom} f \}, which consists of all vectors y whose affine minorants support the epigraph of f at x. Any element of \partial f(x) is called a subgradient, generalizing the gradient to nondifferentiable convex functions.

For a proper lower semicontinuous convex function f, the subdifferential \partial f(x) is nonempty, convex, and closed at every x in the relative interior of \operatorname{dom} f, and it is a (possibly empty) closed convex set at every x \in \operatorname{dom} f. The mapping x \mapsto \partial f(x) is monotone, meaning that for any x_1, x_2 \in \operatorname{dom} f and y_1 \in \partial f(x_1), y_2 \in \partial f(x_2), \langle y_1 - y_2, x_1 - x_2 \rangle \geq 0, and the operator is maximal monotone when f is proper, lower semicontinuous, and convex.

The subdifferential characterizes equality in the Fenchel-Young inequality f(x) + f^*(y) \geq \langle y, x \rangle: for a proper lower semicontinuous convex function f and its conjugate f^*, y \in \partial f(x) \iff x \in \partial f^*(y) \iff f(x) + f^*(y) = \langle y, x \rangle.

Examples illustrate the subdifferential's structure. For the Euclidean norm \| \cdot \|_2 on \mathbb{R}^n, \partial \|x\|_2 = \{ y \mid \|y\|_2 \leq 1, \ \langle y, x \rangle = \|x\|_2 \} if x \neq 0, which is the face of the dual unit ball exposed by x (here the singleton \{ x / \|x\|_2 \}), while the subdifferential at 0 is the full dual unit ball. For the absolute value f(x) = |x| on \mathbb{R}, \partial f(0) = [-1, 1] and \partial f(x) = \{ \operatorname{sign}(x) \} for x \neq 0. For the indicator function \delta_C(x) of a nonempty closed convex set C \subseteq \mathbb{R}^n, where \delta_C(x) = 0 if x \in C and +\infty otherwise, \partial \delta_C(x) = N_C(x) is the normal cone to C at x \in C, consisting of all y such that \langle y, z - x \rangle \leq 0 for all z \in C.

In convex optimization, a point x^* minimizes a proper convex function f if and only if 0 \in \partial f(x^*), providing a first-order optimality condition that is both necessary and sufficient.
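As a concrete check of this optimality condition, consider f(x) = \sum_i |x - a_i|, whose minimizers are the medians of the data. The sketch below, assuming NumPy (the helper subdifferential is illustrative), computes the endpoints of \partial f(x) and confirms that 0 \in \partial f(x) exactly at the median.

```python
# Optimality via 0 in the subdifferential: for f(x) = sum_i |x - a_i|,
# each term contributes {-1}, {+1}, or the kink interval [-1, 1] to
# the subdifferential, and a median makes the total interval contain 0.
import numpy as np

a = np.array([1.0, 2.0, 7.0, 9.0, 10.0])

def subdifferential(x, data, tol=1e-12):
    """Endpoints of the interval ∂f(x) for f(x) = sum_i |x - data_i|."""
    lo = np.sum(np.where(data < x - tol, 1.0,
                np.where(data > x + tol, -1.0, -1.0)))  # kink contributes -1
    hi = np.sum(np.where(data < x - tol, 1.0,
                np.where(data > x + tol, -1.0, 1.0)))   # kink contributes +1
    return lo, hi

for x in [2.0, 7.0, 9.0]:
    lo, hi = subdifferential(x, a)
    print(f"x={x}: ∂f(x) = [{lo:.0f}, {hi:.0f}], optimal: {lo <= 0 <= hi}")
# Only x = 7.0 (the median) satisfies 0 in ∂f(x), hence minimizes f.
```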

Biconjugate

The biconjugate of an extended real-valued function f: X \to (-\infty, +\infty], where X is a vector space equipped with a duality pairing, is defined as f^{**} = (f^*)^*, the convex conjugate of the convex conjugate f^*. For any such function f, the biconjugate satisfies the pointwise inequality f^{**}(x) \leq f(x) for all x \in X, with f^{**} always convex and lower semicontinuous (and proper whenever f admits an affine minorant). This inequality arises because conjugation is order-reversing, so applying it twice produces a convex lower bound on f.

A fundamental result in convex analysis is the Fenchel-Moreau theorem, which establishes that if f is proper, convex, and lower semicontinuous, then f = f^{**}. More generally, for any extended real-valued function f, the biconjugate f^{**} coincides with the lower semicontinuous convex envelope of f, defined as the pointwise supremum of all convex lower semicontinuous functions that are less than or equal to f. This envelope is the greatest (in the pointwise order) convex lower semicontinuous minorant of f, providing a canonical way to "convexify" non-convex functions while preserving lower semicontinuity. In finite dimensions this links the biconjugate directly to the closure of the convex hull of the epigraph of f: \operatorname{epi} f^{**} = \overline{\operatorname{conv}(\operatorname{epi} f)}.

An analogous closure property holds for sets via the bipolar theorem. For a nonempty set A in a locally convex topological vector space, the bipolar A^{\circ\circ} (obtained by taking the polar A^\circ = \{ y \mid \langle y, x \rangle \leq 1 \ \forall x \in A \} and then the polar again) equals the closed convex hull of A, provided 0 \in A; if A is already closed, convex, and contains the origin, then A = A^{\circ\circ}. This theorem underscores the biconjugate's role in duality theory, where it enforces convexity and closure with respect to the pairing, mirroring the epigraph closure for functions.

Illustrative examples highlight the biconjugate's enveloping property for non-convex functions. Consider f(x) = |x|^p for 0 < p < 1 on \mathbb{R}, which is continuous but non-convex (indeed concave on [0, \infty)). Because f grows sublinearly, every convex minorant of f is constant; the conjugate works out to f^*(y) = +\infty for y \neq 0 with f^*(0) = 0, so the biconjugate is identically zero, f^{**} \equiv 0. Such examples demonstrate how drastically the biconjugate can "fill in" concavity, which is precisely what makes it a canonical regularization tool in optimization and analysis.
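The envelope can be visualized by carrying out both conjugations on a grid. The sketch below assumes NumPy and uses the nonconvex double-well f(x) = (x^2 - 1)^2 as an additional illustration (the grids, including a slope grid wide enough to cover f' on the window, are illustrative choices): the well over [-1, 1] is flattened to zero, while f^{**} agrees with f where f is already convex.

```python
# Discrete biconjugate: two grid Legendre transforms approximate the
# convex lower semicontinuous envelope of a nonconvex function.
import numpy as np

x = np.linspace(-2.0, 2.0, 2001)     # primal grid
y = np.linspace(-20.0, 20.0, 4001)   # dual (slope) grid, wide enough for f' here
f = (x**2 - 1.0)**2                  # double well: nonconvex on [-1, 1]

fstar = np.array([np.max(x * yi - f) for yi in y])      # f*(y) = sup_x xy - f(x)
fss = np.array([np.max(y * xi - fstar) for xi in x])    # f**(x) = sup_y xy - f*(y)

for xv in [0.0, 0.5, 1.5]:
    i = np.argmin(np.abs(x - xv))
    print(f"x={xv}: f(x)={f[i]:.3f}  f**(x)={fss[i]:.3f}")
# Expect f** ~ 0 on [-1, 1] (the well is convexified away) and f** ~ f for |x| > 1.
```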

Optimization Applications

Convex minimization

Convex minimization involves solving the optimization problem of minimizing a convex function f: \mathbb{R}^n \to \mathbb{R} over a convex set C \subseteq \mathbb{R}^n, formulated as \inf_{\mathbf{x} \in C} f(\mathbf{x}). This setup leverages the geometric and analytical properties of convexity to guarantee global optimality under mild conditions.

A minimizer exists if the objective function is coercive, meaning f(\mathbf{x}) \to +\infty as \|\mathbf{x}\| \to \infty for \mathbf{x} \in C, or if some sublevel set \{ \mathbf{x} \in C \mid f(\mathbf{x}) \leq \alpha \} is nonempty and compact. These conditions ensure that the infimum is attained, preventing the minimum from receding to infinity. For convex f, every local minimizer is global; if f is strictly convex, the minimizer is moreover unique, since two distinct minimizers would force strictly smaller values along the segment between them.

A point \mathbf{x}^* is a minimizer if and only if it satisfies the first-order optimality condition \mathbf{0} \in \partial f(\mathbf{x}^*) + N_C(\mathbf{x}^*), where \partial f(\mathbf{x}^*) is the subdifferential of f at \mathbf{x}^* and N_C(\mathbf{x}^*) is the normal cone to C at \mathbf{x}^*. This variational inequality characterizes global optimality for convex problems.

For differentiable convex functions, the gradient descent method iteratively updates the iterate as \mathbf{x}^{k+1} = \mathbf{x}^k - \alpha_k \nabla f(\mathbf{x}^k), where the step size \alpha_k > 0 is chosen to ensure descent, for example via backtracking line search or as a fixed step under Lipschitz continuity of the gradient; convergence to the global minimum value is then guaranteed at a sublinear rate. For nonsmooth convex functions, proximal algorithms provide a framework built on the proximal operator \operatorname{prox}_{\lambda g}(\mathbf{v}) = \arg\min_{\mathbf{y}} \left\{ g(\mathbf{y}) + \frac{1}{2\lambda} \|\mathbf{y} - \mathbf{v}\|^2 \right\} for convex g; methods like the proximal gradient algorithm apply this operator to composite objectives f(\mathbf{x}) + g(\mathbf{x}) with smooth f, yielding iterates that converge to a minimizer.

Linear programming exemplifies convex minimization, where the objective is linear (f(\mathbf{x}) = \mathbf{c}^\top \mathbf{x}) over a polyhedral set C = \{ \mathbf{x} \mid A\mathbf{x} \leq \mathbf{b} \}, solvable efficiently via interior-point methods. Another application is regularized least squares, such as minimizing \frac{1}{2} \|A\mathbf{x} - \mathbf{b}\|^2 + \lambda \|\mathbf{x}\|_1 over \mathbb{R}^n, which promotes sparsity while remaining convex.
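The regularized least-squares example can be minimized with the proximal gradient method described above, since the prox of the scaled \ell_1 norm is soft-thresholding. Below is a minimal ISTA sketch assuming NumPy; the problem data (sizes, sparsity pattern, \lambda = 0.1) are illustrative.

```python
# Proximal gradient (ISTA) for (1/2)||Ax - b||^2 + lam*||x||_1, with step
# size 1/L where L is the Lipschitz constant of the gradient of the smooth part.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [2.0, -1.5, 1.0]      # sparse ground-truth signal
b = A @ x_true
lam = 0.1

L = np.linalg.eigvalsh(A.T @ A).max()       # largest eigenvalue of A^T A
t = 1.0 / L
x = np.zeros(100)
for _ in range(2000):
    g = A.T @ (A @ x - b)                   # gradient of the least-squares term
    v = x - t * g
    x = np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)  # prox of t*lam*||.||_1
# The largest entries typically concentrate on the true support:
print("largest entries at:", np.sort(np.argsort(np.abs(x))[-3:]))  # expect [ 3 30 70]
```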

Dual problems

In convex optimization, dual problems provide a complementary perspective on primal problems, often yielding useful bounds or simpler computational forms through the use of conjugate functions. Consider the primal problem of minimizing the sum of two functions composed with a linear map: \inf_{x} \left\{ f(x) + g(Ax) \right\}, where f: X \to \mathbb{R} \cup \{+\infty\} and g: Y \to \mathbb{R} \cup \{+\infty\} are proper convex functions, X and Y are real vector spaces, and A: X \to Y is linear. The dual problem is derived using the convex conjugate (Fenchel transform) h^*(z) = \sup_w \{ z^\top w - h(w) \}, resulting in \sup_{y} \left\{ -f^*(-A^* y) - g^*(y) \right\}, where A^* is the adjoint of A and the supremum is over y \in Y. This formulation, known as Fenchel duality, produces a maximization problem whose optimal value serves as a lower bound on the primal infimum.

Weak duality holds universally for this pair: for any primal x and dual y, the dual objective satisfies -f^*(-A^* y) - g^*(y) \leq f(x) + g(Ax), implying that the optimal dual value is at most the optimal primal value. This inequality follows directly from the Fenchel-Young inequality h(w) + h^*(z) \geq z^\top w applied to both functions, and provides a certificate of optimality when equality is achieved.

Strong duality occurs when the primal and dual optimal values coincide, ensuring a zero duality gap. Under a Slater-type condition for Fenchel duality, requiring the existence of a point x in the relative interior of \operatorname{dom} f with Ax \in \operatorname{ri}(\operatorname{dom} g), strong duality holds, and the dual optimum is attained whenever the common value is finite. Such qualifications ensure that the perturbed value function behaves well at the origin, so the biconjugate recovers the primal value and the duality gap closes.

The perturbation approach further elucidates duality by considering a perturbed problem \inf_x \{ f(x) + g(Ax + u) \}, where u \in Y is a parameter. The optimal value function p(u) is convex, and its conjugate relates directly to the dual objective, linking perturbations to dual variables: when p is differentiable at the origin, the optimal dual y^* satisfies y^* = -\nabla p(0), interpreting dual solutions as shadow prices for perturbations. A zero duality gap corresponds to p(0) = p^{**}(0), recoverable via the biconjugate under appropriate closure assumptions.

A representative example is norm minimization: the primal \inf \{ \|x\| : Ax = b \} has dual \sup \{ -b^\top y : \|A^* y\|_* \leq 1 \}, where \|\cdot\|_* is the dual norm. Strong duality holds if b is in the range of A, identifying the minimal norm with the maximal value of -b^\top y under the dual norm constraint.
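The norm-minimization example admits a direct numerical check of strong duality in the Euclidean case, where both optima have closed forms. A sketch assuming NumPy; the random problem data are illustrative.

```python
# Checking primal inf{ ||x||_2 : Ax = b } against dual sup{ -b^T y : ||A^T y||_2 <= 1 }.
# The primal optimum is the least-norm solution x* = A^T (A A^T)^{-1} b, and a
# feasible dual point attaining the same value can be written down explicitly.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 6))
b = rng.normal(size=3)

x_star = A.T @ np.linalg.solve(A @ A.T, b)       # least-norm primal solution
primal = np.linalg.norm(x_star)

y_star = -np.linalg.solve(A @ A.T, b) / primal   # scaled so that ||A^T y*||_2 = 1
dual = -b @ y_star

print(f"primal {primal:.6f}  dual {dual:.6f}")            # equal: zero duality gap
print(f"||A^T y*||_2 = {np.linalg.norm(A.T @ y_star):.6f}")  # dual feasibility ~ 1
```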

Lagrange duality

In constrained optimization, Lagrange duality provides a framework for transforming a primal problem with inequality and equality constraints into a dual problem that offers insight into optimality and facilitates numerical solution. Consider the primal problem of minimizing a convex function f: \mathbb{R}^n \to \mathbb{R} subject to inequality constraints g_i(x) \leq 0 for i = 1, \dots, m and equality constraints h_j(x) = 0 for j = 1, \dots, p, where each g_i is convex and each h_j is affine. The Lagrangian is defined as L(x, \lambda, \mu) = f(x) + \sum_{i=1}^m \lambda_i g_i(x) + \sum_{j=1}^p \mu_j h_j(x), with multipliers \lambda \in \mathbb{R}^m_{\geq 0} and \mu \in \mathbb{R}^p, which penalize constraint violations in the augmented objective.

The Lagrange dual function is obtained by minimizing the Lagrangian over the primal variable: \theta(\lambda, \mu) = \inf_{x \in \mathbb{R}^n} L(x, \lambda, \mu). This infimum is always concave in (\lambda, \mu), being a pointwise infimum of functions affine in the multipliers. The dual problem then maximizes \theta(\lambda, \mu) subject to \lambda \geq 0, yielding a problem that is often simpler because it has fewer variables or is unconstrained in x. Weak duality holds universally: the optimal dual value is a lower bound on the optimal primal value.

Optimality in the primal-dual pair is characterized by the Karush-Kuhn-Tucker (KKT) conditions, which include stationarity (\nabla_x L(x^*, \lambda^*, \mu^*) = 0), primal feasibility (g_i(x^*) \leq 0, h_j(x^*) = 0), dual feasibility (\lambda^* \geq 0), and complementary slackness (\lambda_i^* g_i(x^*) = 0 for all i). For convex problems, these conditions are necessary under suitable constraint qualifications and sufficient for global optimality, as any point satisfying them achieves the minimum. The KKT conditions originate from the work of Kuhn and Tucker on nonlinear programming, with earlier foundations in Karush's 1939 master's thesis.

Strong duality, where primal and dual optimal values coincide, requires constraint qualifications. Slater's condition, the standard qualification for convex problems, posits the existence of a strictly feasible point with g_i(x) < 0 for all i and h_j(x) = 0 for all j; under it, strong duality holds and dual optimal multipliers exist. The linear independence constraint qualification (LICQ), requiring the gradients of active inequality and equality constraints at the primal optimum to be linearly independent, guarantees existence and uniqueness of the multipliers. These qualifications bridge the duality gap, enabling recovery of the primal solution from dual variables.

A canonical example is the dual of quadratic programming, where the primal minimizes \frac{1}{2} x^T Q x + c^T x subject to A x \leq b with Q \succ 0. Minimizing the Lagrangian in x gives x = -Q^{-1}(c + A^T \lambda), so the dual becomes \max_{\lambda \geq 0} \; -\frac{1}{2} (A^T \lambda + c)^T Q^{-1} (A^T \lambda + c) - b^T \lambda, a concave maximization over the nonnegative orthant, solvable via methods like interior-point algorithms; this illustrates how duality simplifies constraint handling.

Support vector machines (SVMs) exemplify Lagrange duality in machine learning: the primal hard-margin SVM minimizes \frac{1}{2} \|w\|^2 subject to y_k (w^T x_k + b) \geq 1 for training points (x_k, y_k), a quadratic program. Its dual maximizes \sum_{k=1}^N \alpha_k - \frac{1}{2} \sum_{k,l=1}^N \alpha_k \alpha_l y_k y_l (x_k^T x_l) subject to \alpha_k \geq 0 and \sum_k \alpha_k y_k = 0, where the support vectors correspond to the positive multipliers \alpha_k > 0, enabling kernel extensions and computational efficiency. This formulation, introduced by Boser, Guyon, and Vapnik, relies on strong duality, which holds for separable data.
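A small numerical illustration of the quadratic-programming dual above: since \theta(\lambda) is concave, projected gradient ascent over \lambda \geq 0 suffices for a toy instance, after which the primal point is recovered and the KKT conditions checked. The sketch assumes NumPy; the problem data and step size are illustrative.

```python
# Solving a tiny QP through its Lagrange dual. With Q positive definite, the
# inner minimizer of the Lagrangian is x(lmb) = -Q^{-1}(c + A^T lmb), and the
# dual gradient is d(theta)/d(lmb) = A x(lmb) - b; ascent is projected onto lmb >= 0.
import numpy as np

Q = np.array([[2.0, 0.0], [0.0, 1.0]])
c = np.array([-2.0, -3.0])
A = np.array([[1.0, 1.0]])          # single constraint: x1 + x2 <= 1
b = np.array([1.0])
Qinv = np.linalg.inv(Q)

lmb = np.zeros(1)
for _ in range(5000):               # projected gradient ascent on the dual
    x = -Qinv @ (c + A.T @ lmb)     # minimizer of the Lagrangian in x
    grad = A @ x - b                # gradient of theta at lmb
    lmb = np.maximum(lmb + 0.1 * grad, 0.0)

x = -Qinv @ (c + A.T @ lmb)
print("x* =", x, " lambda* =", lmb)                        # x* ~ (0, 1), lambda* ~ 2
print("primal feasible:", A @ x - b <= 1e-6)
print("complementary slackness:", abs(lmb @ (A @ x - b)) < 1e-6)
```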
