
Support function

In convex analysis, the support function of a nonempty convex set K in \mathbb{R}^n (or, more generally, in a real vector space paired with its dual) is defined as h_K(x) = \sup_{y \in K} \langle y, x \rangle, where \langle \cdot, \cdot \rangle denotes the inner product and x is a direction vector. This extended-real-valued function, which is always convex and positively homogeneous of degree one, provides a dual description of the set K as the intersection of its supporting half-spaces and uniquely determines compact convex sets. Introduced by Hermann Minkowski at the end of the 19th century, the support function has become a foundational tool for characterizing geometric properties of convex bodies.

The support function exhibits several key properties that make it indispensable in convex analysis. It is lower semicontinuous and subadditive, with the effective domain forming a convex cone. For instance, the support function of the unit ball corresponds to the dual norm, and in optimization contexts the support function of K arises as the Fenchel conjugate of the indicator function of K. Moreover, the subdifferential of the support function at a point p consists exactly of the points in K that achieve the supremum, linking it directly to maximization problems. These attributes allow the support function to test set inclusions: for closed convex sets, K \subseteq L if and only if h_K(x) \leq h_L(x) for all x.

Beyond pure geometry, support functions find broad applications in optimization and mathematical economics. In convex optimization, they facilitate dual formulations and the analysis of feasible sets; for a convex cone such as the second-order cone \{(x,t) : \|x\|_2 \leq t\}, the support function is 0 on the polar cone and +\infty elsewhere. In economic theory, the support function serves as the profit function \pi_A(p) = \sup_{y \in A} p \cdot y over a technology set A, enabling results such as Hotelling's lemma under differentiability. Support functions also underpin the separating hyperplane theorem, ensuring that for a closed convex set A and a point outside it, a hyperplane exists with the support function bounding the separation. Extensions to non-compact sets and abstract spaces further broaden their utility in modern research.

Fundamentals

Definition

In convex analysis, the support function is fundamentally associated with convex sets. A subset A of a real vector space is convex if, for any x, y \in A and \lambda \in [0, 1], the point \lambda x + (1 - \lambda) y \in A. Consider a real normed vector space X, whose dual space X^* consists of all continuous linear functionals x^*: X \to \mathbb{R}, equipped with the duality pairing \langle x^*, x \rangle = x^*(x). For a non-empty subset A \subseteq X, the support function h_A: X^* \to \mathbb{R} \cup \{+\infty\} is defined by h_A(x^*) = \sup_{x \in A} \langle x^*, x \rangle. This extended-real-valued function encodes the maximal projection of points in A onto the direction specified by x^*.

The support function h_A is finite-valued if A is bounded, as the supremum is then bounded by the product of the dual norm of x^* and a bound on the norms of elements of A; otherwise, it may attain +\infty when A is unbounded in the direction of x^*. In the specific setting of \mathbb{R}^n with the standard inner product, the dual space identifies with \mathbb{R}^n itself, and the support function takes the form h_A(x) = \sup_{y \in A} x \cdot y, \quad x \in \mathbb{R}^n. The level sets where this supremum is attained correspond to supporting hyperplanes of A.
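For a finite set of points in \mathbb{R}^n the supremum reduces to a maximum (equivalently, a maximum over the vertices of the convex hull), which makes the definition directly computable. The following minimal Python sketch, with illustrative names not drawn from any cited source, evaluates h_A and returns a maximizing point:

```python
import numpy as np

def support_function(points, x):
    """Support function h_A(x) = max_{y in A} <x, y> for a finite set A,
    together with a maximizing point (a supporting point of conv A)."""
    values = points @ x          # inner product of each point with x
    i = np.argmax(values)
    return values[i], points[i]

# A = vertices of the square [-1, 1]^2; h_A(x) should equal |x_1| + |x_2|.
A = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
h, y_star = support_function(A, np.array([2.0, -3.0]))
print(h, y_star)  # 5.0, attained at the vertex (1, -1)
```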

Geometric Interpretation

The support function h_A(x) of a nonempty convex set A in an inner product space geometrically represents the maximum projection of points in A onto the direction given by the unit vector x / \|x\|, scaled by the norm \|x\|. Specifically, for x \neq 0, h_A(x) = \sup_{y \in A} \langle x, y \rangle attains its value at points on the boundary of A that lie farthest in the direction of x.

This maximum is intimately connected to supporting hyperplanes: the hyperplane defined by \{ y \in X \mid \langle x, y \rangle = h_A(x) \} supports A at the points where the supremum is achieved, meaning A lies entirely on one side of the hyperplane. If \|x\| = 1 and h_A(x) < \infty, then h_A(x) equals the signed distance from the origin to this supporting hyperplane with outward normal x. If h_A(x) = +\infty, the set is unbounded in the direction x, and there is no supporting hyperplane with normal x.

In two dimensions, for a convex body A containing the origin in its interior, the function h_A(\theta) (where \theta parameterizes the unit circle) gives the perpendicular distance from the origin to the supporting line perpendicular to the direction \theta, tracing the boundary of A as a polar plot of these distances. Every closed convex set A can be recovered from its support function as A = \bigcap_{x \in X^*} \{ y \mid \langle x, y \rangle \leq h_A(x) \}, the intersection of all its supporting half-spaces (where h_A(x) = +\infty yields the whole space).
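As an illustration of the half-space recovery, a compact convex set given only through a support-function oracle can be tested for approximate membership by sampling directions; the sketch below, with hypothetical helper names, does this for the unit disk in the plane:

```python
import numpy as np

def in_set(h, p, n_dirs=360):
    """Approximate membership test for a compact convex A in R^2, using only
    a support-function oracle h: direction -> h_A(direction).
    p is (approximately) in A iff <x, p> <= h_A(x) for all sampled unit x."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    dirs = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    return all(d @ p <= h(d) + 1e-12 for d in dirs)

h_disk = lambda x: np.linalg.norm(x)          # unit disk: h(x) = ||x||_2
print(in_set(h_disk, np.array([0.5, 0.5])))   # True:  inside
print(in_set(h_disk, np.array([1.0, 0.5])))   # False: outside
```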

Examples

Basic Convex Sets

The support function provides a concrete way to compute the maximum projection of points in a convex set onto a given direction, offering insight into the set's extent in that direction. Consider the simplest case in one dimension: the closed interval [-1, 1] \subset \mathbb{R}, which is convex as the convex hull of its endpoints. The support function is h_{[-1,1]}(x) = \sup_{y \in [-1,1]} x y. To evaluate this, note that the supremum occurs at one of the endpoints depending on the sign of x: if x > 0, the maximum is at y = 1, yielding x \cdot 1 = x; if x < 0, the maximum is at y = -1, yielding x \cdot (-1) = -x; if x = 0, it is 0. Thus, h_{[-1,1]}(x) = |x|, which corresponds to the Euclidean norm on \mathbb{R}.

In a general normed vector space X with norm \|\cdot\|, the closed unit ball B = \{ y \in X : \|y\| \leq 1 \} is a fundamental convex set, symmetric about the origin. Its support function is h_B(x) = \sup_{y \in B} \langle x, y \rangle = \|x\|_*, where \|\cdot\|_* denotes the dual norm defined by \|x\|_* = \sup_{\|y\| \leq 1} \langle x, y \rangle. This equality holds by definition of the dual norm, with the supremum approached (and, in finite dimensions, attained) at a point y on the unit sphere maximizing the inner product with x. For the Euclidean space \mathbb{R}^n with the \ell_2-norm, the dual norm is also the \ell_2-norm, so the support function of the unit ball simplifies to h_B(x) = \|x\|_2. More generally, for a ball of radius r > 0 centered at the origin, B(0, r) = \{ y \in \mathbb{R}^n : \|y\|_2 \leq r \}, scaling yields h_{B(0,r)}(x) = r \|x\|_2; if centered at x_c, it becomes h_{B(x_c, r)}(x) = \langle x, x_c \rangle + r \|x\|_2.

The standard (n-1)-simplex in \mathbb{R}^n, defined as \Delta^{n-1} = \{ y \in \mathbb{R}^n : y \geq 0, \sum_{i=1}^n y_i = 1 \}, is the convex hull of the standard basis vectors e_1, \dots, e_n and represents the set of probability distributions over n outcomes. Its support function is h_{\Delta^{n-1}}(x) = \max_{i=1,\dots,n} x_i, achieved at the vertex e_i where x_i is maximized. This follows because a linear functional attains its supremum over a polytope at a vertex, and \langle x, e_i \rangle = x_i. For the relaxed simplex \{ y \in \mathbb{R}^n : y \geq 0, \sum_{i=1}^n y_i \leq 1 \}, the convex hull of the origin and the vectors e_1, \dots, e_n, the support function is h(x) = \max \left( 0, \max_{i=1,\dots,n} x_i \right), achieved at the origin if \max_i x_i \leq 0 or at e_j for j = \arg\max_i x_i otherwise.

Half-spaces illustrate how the support function behaves for unbounded sets. Consider H = \{ y \in \mathbb{R}^n : \langle a, y \rangle \leq b \} with a \neq 0 and b \in \mathbb{R}. The support function h_H(x) = \sup_{y \in H} \langle x, y \rangle is finite only if x lies in the polar cone of the recession cone of H, which is the ray generated by a. Specifically, h_H(x) = +\infty unless x = \lambda a for some \lambda \geq 0, in which case h_H(\lambda a) = \lambda b. To see this, normalize so \|a\|_2 = 1; then for x = \lambda a with \lambda \geq 0, the maximum is attained on the bounding hyperplane \{ y : \langle a, y \rangle = b \}, yielding \lambda b, while any component of x orthogonal to a, or a negative scalar \lambda, would allow the supremum to diverge along the unbounded directions of H. This reflects the half-space's infinite extent opposite to the normal a.
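A short numerical sanity check of these closed forms, assuming nothing beyond NumPy, compares direct maximization against the formulas for the simplex and a shifted Euclidean ball:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = rng.normal(size=n)

# Simplex: h(x) = max_i x_i, via maximization over the vertices e_1, ..., e_n.
vertices = np.eye(n)
h_enumerated = np.max(vertices @ x)
assert np.isclose(h_enumerated, np.max(x))

# Euclidean ball of radius r centered at x_c: h(x) = <x, x_c> + r ||x||_2,
# with the supremum attained at y* = x_c + r * x / ||x||_2.
r, x_c = 2.0, rng.normal(size=n)
y_star = x_c + r * x / np.linalg.norm(x)
assert np.isclose(x @ y_star, x @ x_c + r * np.linalg.norm(x))
```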

Polytopes and Polyhedra

Polytopes are compact convex sets that admit a finite vertex representation, allowing for an explicit expression of their support function. Specifically, a polytope P \subset \mathbb{R}^d can be expressed as the convex hull of a finite set of vertices \{v_1, \dots, v_m\}, so P = \operatorname{conv}\{v_1, \dots, v_m\}. The support function then simplifies to h_P(x) = \max_{i=1,\dots,m} \langle x, v_i \rangle, since the supremum over P is attained at one of the extreme points, which coincide with the vertices for polytopes. This representation highlights the piecewise linear nature of h_P, with linearity over each cone of the normal fan induced by the facets of P.

In the dual facet representation, a polytope P is defined by a finite number of inequalities, P = \{ y \in \mathbb{R}^d \mid A y \leq b \}, where A is a matrix with rows corresponding to facet normals a_j and b to the offsets. The support function h_P(x) is the optimal value of the linear program \sup \{ \langle x, y \rangle \mid A y \leq b \}, which by strong duality equals h_P(x) = \min \left\{ \sum_{j=1}^k \lambda_j b_j \;\middle|\; \sum_{j=1}^k \lambda_j a_j = x, \; \lambda_j \geq 0 \right\}. This formulation connects the support function directly to linear programming, with the optimal multipliers \lambda certifying the value through dual feasibility.

A concrete example is the cube C = [-1,1]^3 \subset \mathbb{R}^3, whose vertices are all points with coordinates \pm 1. The support function is h_C(x) = |x_1| + |x_2| + |x_3|, obtained by maximizing \langle x, v \rangle over these vertices, where the optimum aligns each coordinate of v with the sign of the corresponding x_i. This equals the \ell_1-norm of x, reflecting the duality between the \ell_\infty-ball (the cube) and the \ell_1-norm. In contrast, for the non-symmetric cube [0,1]^3, the support function is h(x) = \sum_{i=1}^3 \max(x_i, 0), computed similarly via maximization over its eight vertices.

For unbounded polyhedra, such as P = \{ y \in \mathbb{R}^d \mid A y \leq b \} with a nontrivial recession cone, the support function h_P(x) is infinite for directions x outside the polar cone of the recession cone. In the LP formulation, this occurs when the dual problem is infeasible, yielding h_P(x) = +\infty; otherwise, it remains finite and given by the dual optimum. The recession cone must be accounted for to ensure finiteness in bounded directions.

Computationally, evaluating the support function of a polytope in vertex representation reduces to maximizing a linear function over the finite set of vertices, which is efficient for modest vertex counts but scales with the number of vertices. In facet representation, it involves solving the LP, often preferable when the number of facets is smaller than the number of vertices. These evaluations underpin algorithms for polytope approximation and optimization.
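Both representations can be exercised on the cube example. The sketch below, with illustrative names, evaluates h_C by vertex enumeration and by solving the facet-form LP with scipy.optimize.linprog; the two values should agree with \|x\|_1:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

x = np.array([2.0, -1.0, 0.5])

# Vertex representation: h_C(x) = max over the 8 vertices of C = [-1,1]^3.
vertices = np.array(list(product([-1.0, 1.0], repeat=3)))
h_vertex = np.max(vertices @ x)

# Facet representation: h_C(x) = sup { <x, y> : A y <= b } as an LP.
# linprog minimizes, so negate the objective; variables are free.
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.ones(6)
res = linprog(-x, A_ub=A, b_ub=b, bounds=(None, None))
h_facet = -res.fun

print(h_vertex, h_facet)  # both equal ||x||_1 = 3.5
```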

Properties

Dependence on Direction

The support function h_A(\mathbf{x}) = \sup_{\mathbf{a} \in A} \langle \mathbf{x}, \mathbf{a} \rangle of a nonempty set A \subseteq \mathbb{R}^n is convex as a function of the direction vector \mathbf{x}. Specifically, for any \mathbf{x}, \mathbf{y} \in \mathbb{R}^n and \lambda \in [0,1], h_A(\lambda \mathbf{x} + (1-\lambda) \mathbf{y}) \leq \lambda h_A(\mathbf{x}) + (1-\lambda) h_A(\mathbf{y}). This follows from the fact that h_A is the pointwise supremum of the family of linear (hence convex) functions \mathbf{x} \mapsto \langle \mathbf{x}, \mathbf{a} \rangle for \mathbf{a} \in A, and the pointwise supremum of convex functions is convex.

The support function is also positively homogeneous: for any \mathbf{x} \in \mathbb{R}^n and \lambda \geq 0, h_A(\lambda \mathbf{x}) = \lambda h_A(\mathbf{x}). To see this, note that h_A(\lambda \mathbf{x}) = \sup_{\mathbf{a} \in A} \langle \lambda \mathbf{x}, \mathbf{a} \rangle = \lambda \sup_{\mathbf{a} \in A} \langle \mathbf{x}, \mathbf{a} \rangle = \lambda h_A(\mathbf{x}), as the supremum scales linearly with \lambda \geq 0.

Positive homogeneity and convexity together imply subadditivity: for any \mathbf{x}, \mathbf{y} \in \mathbb{R}^n, h_A(\mathbf{x} + \mathbf{y}) \leq h_A(\mathbf{x}) + h_A(\mathbf{y}). This holds because h_A(\mathbf{x} + \mathbf{y}) = \sup_{\mathbf{a} \in A} \langle \mathbf{x} + \mathbf{y}, \mathbf{a} \rangle = \sup_{\mathbf{a} \in A} \left( \langle \mathbf{x}, \mathbf{a} \rangle + \langle \mathbf{y}, \mathbf{a} \rangle \right) \leq \sup_{\mathbf{a} \in A} \langle \mathbf{x}, \mathbf{a} \rangle + \sup_{\mathbf{a} \in A} \langle \mathbf{y}, \mathbf{a} \rangle = h_A(\mathbf{x}) + h_A(\mathbf{y}), using the monotonicity of the supremum. These properties make h_A a sublinear function.

If A is compact, then h_A is Lipschitz continuous with respect to the Euclidean norm on \mathbb{R}^n: for any \mathbf{x}, \mathbf{y} \in \mathbb{R}^n, |h_A(\mathbf{x}) - h_A(\mathbf{y})| \leq \left( \sup_{\mathbf{a} \in A} \|\mathbf{a}\|_2 \right) \|\mathbf{x} - \mathbf{y}\|_2. This bound arises because |h_A(\mathbf{x}) - h_A(\mathbf{y})| = \left| \sup_{\mathbf{a} \in A} \langle \mathbf{x}, \mathbf{a} \rangle - \sup_{\mathbf{a} \in A} \langle \mathbf{y}, \mathbf{a} \rangle \right| \leq \sup_{\mathbf{a} \in A} |\langle \mathbf{x} - \mathbf{y}, \mathbf{a} \rangle| \leq \|\mathbf{x} - \mathbf{y}\|_2 \sup_{\mathbf{a} \in A} \|\mathbf{a}\|_2, with compactness ensuring the suprema are attained and finite.

The epigraph of h_A, defined as \operatorname{epi} h_A = \{ (\mathbf{x}, t) \in \mathbb{R}^n \times \mathbb{R} \mid t \geq h_A(\mathbf{x}) \} = \{ (\mathbf{x}, t) \in \mathbb{R}^n \times \mathbb{R} \mid t \geq \langle \mathbf{x}, \mathbf{a} \rangle \ \forall \mathbf{a} \in A \}, is a closed convex cone. This representation underscores the convex duality inherent in the support function.
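These properties are easy to verify numerically for a finite set, where h_A is a maximum of finitely many linear functions; the following sketch (names illustrative) checks positive homogeneity, subadditivity, and the Lipschitz bound on random inputs:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))                 # finite (hence compact) set in R^3
h = lambda x: np.max(A @ x)                  # support function of A
R = np.max(np.linalg.norm(A, axis=1))        # radius of A: Lipschitz constant

for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    lam = rng.uniform()
    assert np.isclose(h(lam * x), lam * h(x))                    # homogeneity
    assert h(x + y) <= h(x) + h(y) + 1e-9                        # subadditivity
    assert abs(h(x) - h(y)) <= R * np.linalg.norm(x - y) + 1e-9  # Lipschitz
```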

Dependence on Set

The support function of a convex set exhibits specific transformation rules under various geometric operations on the set, reflecting its role as a complete functional descriptor of the set's extent in different directions. For scaling, the support function is positively homogeneous in the set argument: if \lambda \geq 0, then h_{\lambda A}(x) = \lambda h_A(x) for all x. This follows directly from the definition, as \sup_{y \in \lambda A} \langle x, y \rangle = \lambda \sup_{z \in A} \langle x, z \rangle. Under translation by a vector b, the support function shifts additively: h_{A + b}(x) = h_A(x) + \langle x, b \rangle. This arises because \sup_{y \in A + b} \langle x, y \rangle = \sup_{z \in A} \langle x, z + b \rangle = h_A(x) + \langle x, b \rangle.

The support function is additive with respect to the Minkowski sum of sets: h_{A + B}(x) = h_A(x) + h_B(x) for nonempty sets A and B. To see this, note that for any a \in A and b \in B, \langle x, a + b \rangle = \langle x, a \rangle + \langle x, b \rangle \leq h_A(x) + h_B(x), with equality approachable by choosing near-supremizers independently, so \sup_{(a,b) \in A \times B} \langle x, a + b \rangle = h_A(x) + h_B(x).

For the intersection of convex sets, the support function satisfies h_{A \cap B}(x) \leq \min\{h_A(x), h_B(x)\}, since A \cap B \subseteq A and A \cap B \subseteq B imply the supremum over the smaller set is at most that over each. Equality need not hold in general; it does hold, for example, when the supremum-attaining points for the minimizing support function lie in A \cap B, and for closed convex sets with nonempty intersection the exact relation expresses h_{A \cap B} as the closed infimal convolution of h_A and h_B.

The support function is invariant under taking the convex hull: h_{\operatorname{conv} A}(x) = h_A(x). This is because a linear functional evaluated at a convex combination is the corresponding convex combination of its values, so the supremum over \operatorname{conv} A equals that over A.

For a closed convex set A containing the origin, the support function of the polar set A^\circ = \{ y \mid \langle y, z \rangle \leq 1 \ \forall z \in A \} coincides with the gauge function of A: h_{A^\circ}(x) = \gamma_A(x), where \gamma_A(x) = \inf \{ \lambda > 0 \mid x \in \lambda A \}. This duality relation, established via polarity and the bipolar theorem, links the two functionals bidirectionally under the given assumptions; both are positively homogeneous, consistent with the scaling behavior of the support function in its directional argument.
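The Minkowski-sum rule in particular admits a direct numerical check for finite sets, whose Minkowski sum consists of all pairwise sums; a brief sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 2))
B = rng.normal(size=(30, 2))

h = lambda S, x: np.max(S @ x)

# Minkowski sum of finite sets: all pairwise sums a + b.
AB = (A[:, None, :] + B[None, :, :]).reshape(-1, 2)

for _ in range(100):
    x = rng.normal(size=2)
    assert np.isclose(h(AB, x), h(A, x) + h(B, x))   # h_{A+B} = h_A + h_B
```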

Applications

The support function of a convex set A plays a central role in convex optimization through its relation to the Fenchel conjugate of the indicator function \delta_A, defined as \delta_A(x) = 0 if x \in A and +\infty otherwise. Specifically, the support function h_A(y) = \sup_{x \in A} \langle y, x \rangle is the Fenchel conjugate \delta_A^*(y). In Lagrangian duality for the problem \min_{x} f(x) subject to x \in A, where f is convex, the dual problem can be formulated using the conjugate of f and the support function of A: the dual maximizes -f^*(-y) - h_A(y), providing a lower bound on the primal value via weak duality.

Projections onto convex sets can be computed using support functions, since linear maximization over A identifies the boundary points that guide the projection. In particular, \arg\max_{x \in A} \langle d, x \rangle serves as the linear maximization oracle in methods like conditional gradient (Frank-Wolfe) descent, where repeated support function evaluations approximate the Euclidean projection \arg\min_{x \in A} \|p - x\|^2.

In robust optimization, support functions model uncertainty sets U, reformulating worst-case constraints tractably. For instance, a robust linear constraint \langle a, x \rangle \leq b required for all realizations a \in U is equivalent to the single constraint h_U(x) \leq b, where h_U(x) = \sup_{a \in U} \langle a, x \rangle captures the worst case over U.

Numerical methods for convex optimization leverage support function oracles, which provide h_A(y) and \arg\max_{x \in A} \langle y, x \rangle, in cutting-plane approaches to approximate feasible sets or solve linear programs over A. These oracles generate supporting hyperplanes that refine polyhedral approximations of A, enabling efficient progress in ellipsoid or center-of-gravity methods for set containment or optimization.
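A minimal sketch of the conditional gradient idea, assuming the probability simplex as the feasible set and the standard 2/(k+2) step size; lmo_simplex plays the role of the support-function maximizer \arg\max_{x \in A} \langle d, x \rangle, and all names are illustrative:

```python
import numpy as np

def lmo_simplex(d):
    """Linear maximization oracle for the simplex: argmax_{x in simplex} <d, x>
    is the vertex e_i with i = argmax_i d_i (the support function's maximizer)."""
    v = np.zeros_like(d)
    v[np.argmax(d)] = 1.0
    return v

def project_frank_wolfe(p, lmo, dim, iters=500):
    """Approximate argmin_{x in A} ||p - x||^2 via conditional gradient,
    using only the set's linear maximization oracle."""
    x = lmo(np.ones(dim))                    # any feasible starting vertex
    for k in range(iters):
        grad = x - p                         # gradient of 0.5 * ||x - p||^2
        s = lmo(-grad)                       # best vertex against the gradient
        x += 2.0 / (k + 2.0) * (s - x)       # standard Frank-Wolfe step size
    return x

p = np.array([0.8, 0.6, -0.2])
print(project_frank_wolfe(p, lmo_simplex, 3))  # close to (0.6, 0.4, 0.0)
```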

Duality and Separation

The support function plays a central role in separation theorems for convex sets. For a nonempty closed convex set A in \mathbb{R}^n (or, more generally, a Hilbert space) and a point x \notin A, there exists a vector y \neq 0 such that h_A(y) < \langle y, x \rangle, so the hyperplane defined by \{z \mid \langle y, z \rangle = \langle y, x \rangle\} strictly separates x from A. This strict separation follows from the separation theorem applied to the sets \{x\} and A, where the supporting hyperplane is determined by the supremum over A in the direction y.

In convex duality, the support function provides a representation of the polar set and facilitates the bipolar theorem. The polar of a convex set A containing the origin is given by A^\circ = \{ y \mid h_A(y) \leq 1 \}, and for any closed convex set A containing the origin, the bipolar theorem states that A = (A^\circ)^\circ. This duality recovers A as the intersection of half-spaces \{ z \mid \langle y, z \rangle \leq 1 \ \forall y \in A^\circ \}, highlighting the support function's role in dual representations of convex sets.

Set inclusion between convex sets can be characterized entirely through their support functions. Specifically, for nonempty closed convex sets A and B, A \subseteq B if and only if h_A(x) \leq h_B(x) for all directions x. The forward direction holds by the definition of the support function as a supremum, while the converse follows from the fact that a closed convex set is the intersection of its supporting half-spaces \{ z \mid \langle x, z \rangle \leq h(x) \}, so tighter bounds imply a smaller intersection.

Support functions, being sublinear (convex and positively homogeneous), enable extensions of linear functionals via the Hahn-Banach theorem. If a linear functional f is defined on a subspace and bounded above by the support function h_A of a convex set A, then f extends to a linear functional g on the entire space satisfying g \leq h_A everywhere. This preserves the supremum bounds and is crucial for maintaining convexity constraints in dual problems.

Finally, support functions quantify approximations between convex sets via the Hausdorff distance. For compact convex sets A and B, the Hausdorff distance is given by d_H(A, B) = \sup_{\|x\|=1} |h_A(x) - h_B(x)|, measuring the uniform deviation of their support functions over the unit sphere. This equivalence arises because the Hausdorff metric on compact convex sets coincides with the supremum norm on their support functions restricted to the unit sphere.
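The support-function formula for the Hausdorff distance suggests a simple approximation for planar compact convex sets by sampling unit directions; a sketch under that sampling assumption, with illustrative names:

```python
import numpy as np

def hausdorff_2d(A, B, n_dirs=3600):
    """Hausdorff distance between conv(A) and conv(B) for finite 2-D point
    sets, via d_H = sup_{||x||=1} |h_A(x) - h_B(x)| sampled on the circle."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    dirs = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
    hA = np.max(A @ dirs.T, axis=0)   # h_A for each sampled direction
    hB = np.max(B @ dirs.T, axis=0)
    return np.max(np.abs(hA - hB))

square = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
print(hausdorff_2d(square, 0.5 * square))  # about sqrt(2)/2 = 0.707
```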

Variants

Infinite-Dimensional Spaces

In a Banach space X, the support function of a nonempty convex subset A \subseteq X is defined as h_A(x^*) = \sup_{x \in A} \langle x^*, x \rangle, \quad x^* \in X^*, where X^* denotes the continuous dual space of X and \langle \cdot, \cdot \rangle is the duality pairing between X and X^*. This extends the finite-dimensional notion, with A typically taken closed and convex so that the support function determines the set. If A is bounded, h_A is Lipschitz continuous on X^* with constant equal to the radius \sup_{x \in A} \|x\| of A, reflecting the equicontinuity of the family of linear functionals induced by elements of A.

In infinite-dimensional spaces, compactness presents significant challenges compared to the finite-dimensional case, where closed bounded convex sets are compact. Bounded closed convex subsets of X are not necessarily weakly compact, so the supremum defining h_A(x^*) may not be attained for some x^* \in X^*; that is, there may be no x \in A such that \langle x^*, x \rangle = h_A(x^*). To address this, one often considers the weak* closure of A in the bidual X^{**}, as the support function of A coincides with that of its weak* closed convex hull in X^{**}, ensuring the representation captures the "completion" of A under the canonical embedding X \hookrightarrow X^{**}.

A concrete example arises in the Hilbert space L^2([0,1]), where the closed unit ball B = \{ f \in L^2([0,1]) : \|f\|_{L^2} \leq 1 \} has support function h_B(g) = \|g\|_{L^2} for g \in (L^2)^* \cong L^2([0,1]), by the Riesz representation theorem and the Cauchy-Schwarz inequality. This illustrates how the support function recovers the dual norm in self-dual spaces like Hilbert spaces.

The role of reflexivity is pivotal: in a reflexive Banach space, where the canonical embedding X \hookrightarrow X^{**} is surjective, closed bounded convex sets are weakly compact, so the supremum defining h_A is attained for every x^* \in X^*. In non-reflexive spaces, such as c_0 or L^1([0,1]), attainment may fail, and one instead analyzes h_A through the weak* topology on X^{**}.

In functional analysis, support functions further characterize weak compactness: a weakly closed bounded convex set A \subseteq X is weakly compact if and only if every x^* \in X^* attains its supremum on A, i.e., \sup_{x \in A} \langle x^*, x \rangle is achieved, as established by generalizations of James' theorem.
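A standard worked example of non-attainment in the non-reflexive space c_0, sketched here for concreteness, takes A to be the closed unit ball and x^* a summable sequence:

```latex
\[
h_A(x^*) \;=\; \sup_{x \in c_0,\ \|x\|_\infty \le 1} \sum_{k=1}^{\infty} 2^{-k} x_k
\;=\; \sum_{k=1}^{\infty} 2^{-k} \;=\; 1,
\qquad x^* = (2^{-1}, 2^{-2}, \dots) \in \ell^1 \cong c_0^*.
\]
The supremum is approached by the truncations
$x^{(n)} = (\underbrace{1, \dots, 1}_{n}, 0, 0, \dots) \in c_0$,
but attainment would require $x_k = 1$ for all $k$, and
$(1, 1, 1, \dots) \notin c_0$; hence no element of the unit ball achieves
$h_A(x^*)$, consistent with James' theorem.
```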

Non-Convex Extensions

The support function can be formally defined for a non-convex set A in the same manner as for convex sets: h_A(x) = \sup_{y \in A} \langle x, y \rangle. However, unlike the convex case where the function fully characterizes the set, for non-convex A, h_A(x) equals the support function of the convex hull \operatorname{conv} A, i.e., h_A(x) = h_{\operatorname{conv} A}(x), thereby losing information about the non-convex structure. This equivalence arises because the suprema of a linear functional over a set and over its convex hull coincide, limiting the standard support function's utility in capturing irregularities in non-convex geometries.

To address non-convexity more directly, extensions incorporate generalized notions that preserve directional information without convexification. One such variant is the lower generalized support function for a non-convex constraint set S, defined as \hat{\sigma}_S(\lambda) = \liminf_{\tilde{\lambda} \to \lambda} \inf_u \{ \tilde{\lambda}^T u \mid \tilde{\lambda} \in N_S(u) \}, where N_S(u) denotes the limiting normal cone at u. This reduces to the standard support function \sigma_S(\lambda) when S is convex, but provides a tighter lower bound for non-convex S, enabling second-order optimality conditions in non-convex optimization problems under assumptions like directional metric subregularity. Similarly, for set-constrained problems \min f(x) subject to g(x) \in \Lambda with non-convex \Lambda, an adapted form \hat{\sigma}_{S,A}(\lambda) restricts attention to points in a subset A, facilitating analysis of local minima.

In nonsmooth non-convex optimization, the Clarke subdifferential \partial^C f(x) of a locally Lipschitz function f plays a key role, with its support function relating directly to directional behavior: \sigma_{\partial^C f(x)}(d) = f^o(x; d), where f^o(x; d) is the Clarke directional derivative, equal to \max_{\xi \in \partial^C f(x)} \langle \xi, d \rangle. This connection extends the support function concept to subdifferentials of non-convex functions, allowing separation theorems and stationarity conditions via convex approximations of the subdifferential set. For instance, in finite dimensions, the Clarke subdifferential at the origin of the difference of support functions can separate non-convex sets, generalizing hyperplane separation.

A complementary variant is the directed distance function to a non-convex set A, defined as d_A(x; u) = \inf \{ t > 0 \mid x + t u \in A \}, which replaces the supremum with an infimum to measure penetration in a specific direction u. This is particularly useful in variational analysis for handling proximal mappings and error bounds in non-convex settings, differing from the support function by focusing on minimal rather than maximal extent.

These non-convex extensions emerged in the framework of variational analysis during the late 1980s and 1990s, with foundational contributions from Rockafellar, Wets, Mordukhovich, and collaborators, building on Clarke's nonsmooth calculus to unify geometric and analytic tools for irregular sets.
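The convexification effect is easy to observe numerically: a two-point set and the segment joining the points (its convex hull) have identical support functions in every direction. A brief sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

# Non-convex set: the two-point set {(-1, 0), (1, 0)};
# its convex hull is the segment [-1, 1] x {0}.
two_points = np.array([[-1.0, 0.0], [1.0, 0.0]])
segment = np.array([[t, 0.0] for t in np.linspace(-1.0, 1.0, 201)])

h = lambda S, x: np.max(S @ x)

for _ in range(200):
    x = rng.normal(size=2)
    # Identical for every direction: h_A = h_{conv A}.
    assert np.isclose(h(two_points, x), h(segment, x), atol=1e-9)
```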
