Convex function

In mathematics, a convex function is a real-valued function f defined on a convex set C in a real vector space such that for all x, y \in C and \lambda \in [0, 1], the inequality f(\lambda x + (1 - \lambda) y) \leq \lambda f(x) + (1 - \lambda) f(y) holds. This condition means that the graph of f lies on or below the chord (secant line) connecting any two points on it, providing a geometric interpretation of convexity. Equivalently, a function is convex if its epigraph—the set of points on or above the graph—is a convex set. Convex functions play a central role in optimization, where minimizing a convex function over a convex set ensures that any local minimum is global, facilitating the development of efficient algorithms. A key property is Jensen's inequality, which states that for a convex function f and a probability measure \mu, f\left( \int x \, d\mu(x) \right) \leq \int f(x) \, d\mu(x), with applications in probability and statistics. Strict convexity, a stronger condition in which the inequality is strict for \lambda \in (0,1) and x \neq y, guarantees uniqueness of minimizers in optimization problems. Beyond optimization, convex functions arise in diverse fields, including machine learning, where they serve as loss functions in algorithms such as support vector machines; economics, where utility functions are typically modeled as concave (the negative of a convex function); and operations research, in resource allocation models. Examples include the quadratic function f(x) = x^2 on \mathbb{R}, which is strictly convex, and the absolute value f(x) = |x|, which is convex but not strictly so. Their study forms the foundation of convex analysis, a branch of mathematics bridging geometry, analysis, and applied sciences.

Fundamentals

Definition

A convex set C \subseteq \mathbb{R}^n is a subset such that for any two points x, y \in C and any \theta \in [0, 1], the convex combination \theta x + (1 - \theta) y also belongs to C; geometrically, this means that the line segment connecting any two points in C lies entirely within C. A function f: \mathbb{R}^n \to \mathbb{R} is convex if its domain \dom f is a convex set and, for all x, y \in \dom f and \theta \in [0, 1], f(\theta x + (1 - \theta) y) \leq \theta f(x) + (1 - \theta) f(y). This inequality states that the function value at any point on the line segment between x and y does not exceed the corresponding weighted average of the function values at the endpoints. The function f is strictly convex if \dom f is convex and the inequality above is strict whenever x \neq y and 0 < \theta < 1, that is, f(\theta x + (1 - \theta) y) < \theta f(x) + (1 - \theta) f(y). Strict convexity implies that the graph of f lies strictly below any chord connecting two distinct points on the graph, excluding the endpoints. Convex functions can be extended to take values in \mathbb{R} \cup \{+\infty\} by defining \tilde{f}(x) = f(x) for x \in \dom f and \tilde{f}(x) = +\infty otherwise; such an extended-real-valued function \tilde{f} is convex if its epigraph is a convex set. A proper convex function is an extended-real-valued convex function that attains a finite value somewhere (i.e., \tilde{f}(x) < +\infty for some x) and never takes the value -\infty (i.e., \tilde{f}(x) > -\infty for all x).
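
The defining inequality can be spot-checked numerically. The sketch below is a minimal illustration assuming NumPy is available; the helper name convexity_violations is hypothetical. It samples random point pairs and mixing weights and reports the worst violation of f(\theta x + (1 - \theta) y) \leq \theta f(x) + (1 - \theta) f(y); a nonpositive result is merely consistent with convexity, while a positive result certifies non-convexity.

```python
import numpy as np

def convexity_violations(f, dim=1, trials=10_000, seed=0):
    """Return the largest sampled value of
    f(t*x + (1-t)*y) - (t*f(x) + (1-t)*f(y)).
    <= 0 is consistent with convexity; > 0 disproves it."""
    rng = np.random.default_rng(seed)
    worst = -np.inf
    for _ in range(trials):
        x, y = rng.normal(size=dim), rng.normal(size=dim)
        t = rng.uniform()
        gap = f(t * x + (1 - t) * y) - (t * f(x) + (1 - t) * f(y))
        worst = max(worst, gap)
    return worst

print(convexity_violations(lambda x: float(np.sum(x ** 2))))           # <= 0: x^2 is convex
print(convexity_violations(lambda x: float(np.sum(x ** 4 - x ** 2))))  # > 0: x^4 - x^2 is not
```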

Alternative Terminology

The term "convex function" draws its nomenclature from the geometric concept of convex sets, where a set is convex if the line segment connecting any two points within it lies entirely in the set; this geometric intuition extends to functions via their epigraphs, the sets of points above the graph, which are convex precisely when the function is convex. Early in the 20th century, following Johan Jensen's 1906 formalization of the inequality now bearing his name, such functions were alternatively termed "Jensen-convex" or "convex in the Jensen sense," emphasizing the midpoint convexity condition that underpins the modern definition. A concave function is the negative of a convex function, meaning if f is convex, then -f is concave, and vice versa; this duality highlights their opposing curvatures in optimization and analysis. In contrast, a quasiconvex function has convex lower-level sets (sublevel sets) but need not satisfy the full convexity inequality, allowing for broader applications where only directional minimization properties are required. In optimization, "convex" typically refers to functions amenable to global minimization via efficient algorithms, as formalized in seminal works on convex programming. Conversely, in probability theory, "log-convex" describes positive functions whose logarithms are convex, a property that arises in analyses of densities, integrals, and inequalities, distinct from standard convexity due to the multiplicative structure of probabilities.

Properties

One-Variable Case

In the one-variable case, a convex function f: I \to \mathbb{R}, where I \subseteq \mathbb{R} is an interval, can be interpreted graphically through its epigraph, defined as the set \{(x, t) \mid x \in I, t \geq f(x)\}, which is a convex set if and only if f is convex. This equivalence highlights the geometric foundation of convexity: the epigraph's convexity ensures that the graph of f lies on or below any line segment (chord) connecting two points on the graph, reflecting the intuitive "bowl-shaped" curvature of such functions. For differentiable functions, a first-order condition characterizes convexity: f is convex on I if and only if its derivative f' is non-decreasing, meaning f'(x) \leq f'(y) for all x, y \in I with x < y. Equivalently, the function lies above its tangent lines, i.e., f(y) \geq f(x) + f'(x)(y - x) for all x, y \in I, with equality for all pairs exactly when f is affine. If f is twice differentiable, a second-order condition applies: f is convex if and only if f''(x) \geq 0 for all x \in I, so the derivative never decreases and the graph curves upward (or is flat) everywhere. A weaker form of convexity, known as midpoint convexity, requires only that f\left(\frac{x+y}{2}\right) \leq \frac{f(x) + f(y)}{2} for all x, y \in I; a midpoint-convex function that is also continuous is convex, so the two notions coincide for continuous functions. On the domain [0, \infty), convexity combines with positive homogeneity of degree one, i.e., f(tx) = t f(x) for all t \geq 0 and x \geq 0, to yield subadditivity: f(x + y) = 2 f\left(\tfrac{x+y}{2}\right) \leq f(x) + f(y) for all x, y \geq 0, where the equality uses homogeneity and the inequality uses convexity. Such functions, including norms restricted to one dimension, provide foundational examples in optimization where these properties ensure tractable analysis.
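
The tangent-line characterization lends itself to a direct check. The sketch below is a minimal illustration assuming NumPy; it evaluates f(y) - f(x) - f'(x)(y - x) for f(x) = e^x over a grid and confirms the result is nonnegative, i.e., every tangent line underestimates the function.

```python
import numpy as np

# First-order test for f(x) = exp(x): convexity means the graph lies
# above every tangent line, f(y) >= f(x) + f'(x) * (y - x).
f = fprime = np.exp          # exp is its own derivative
xs = np.linspace(-3.0, 3.0, 101)
gaps = np.array([[f(y) - (f(x) + fprime(x) * (y - x)) for y in xs]
                 for x in xs])
print(gaps.min())            # ~0 (attained at y = x), never negative
```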

Multivariate Case

In the multivariate case, a function f: \mathbb{R}^n \to \mathbb{R} defined on a convex set is convex if, for all x, y \in \dom f and \lambda \in [0,1], the inequality f(\lambda x + (1-\lambda) y) \leq \lambda f(x) + (1-\lambda) f(y) holds. This generalizes the one-variable condition to higher-dimensional space, where the graph of f lies on or below any chord connecting two points on it. A key geometric characterization is that f is convex if and only if its epigraph, defined as the set \epi f = \{(x, t) \in \mathbb{R}^n \times \mathbb{R} \mid t \geq f(x)\}, is a convex set in \mathbb{R}^{n+1}. The epigraph forms the "region above the graph," and requiring that line segments between any two of its points remain inside it enforces exactly the convexity of f. For differentiable functions, the first-order condition states that f is convex if and only if f(y) \geq f(x) + \nabla f(x)^T (y - x) for all x, y \in \dom f. This inequality says that the graph of f lies above its tangent hyperplane at every point x, providing a supporting hyperplane that bounds the function from below. If f is twice continuously differentiable, the second-order condition requires that the Hessian matrix H(x) = \nabla^2 f(x) is positive semidefinite at every x \in \dom f, meaning all eigenvalues of H(x) are nonnegative or, equivalently, z^T H(x) z \geq 0 for all z \in \mathbb{R}^n. This condition ensures the local curvature is nonnegative in all directions, generalizing the one-variable test f''(x) \geq 0. Jensen's inequality extends to the multivariate setting: for a convex f and weights \lambda_i \geq 0 with \sum_{i=1}^k \lambda_i = 1, it holds that f\left( \sum_{i=1}^k \lambda_i x_i \right) \leq \sum_{i=1}^k \lambda_i f(x_i) for any x_1, \dots, x_k \in \dom f; this follows from the two-point definition by induction on k. A significant optimization property is that any local minimum of a convex f is also a global minimum over \dom f: if some y satisfied f(y) < f(x^*), then by convexity the points on the segment from x^* to y arbitrarily close to x^* would have values below f(x^*), contradicting local minimality.
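
For a quadratic form, the second-order condition reduces to an eigenvalue test on the Hessian. The sketch below is a minimal illustration assuming NumPy; the matrix A is an arbitrary example chosen for the demonstration.

```python
import numpy as np

# f(x) = x^T A x has constant Hessian 2A; f is convex iff 2A is
# positive semidefinite, i.e. all eigenvalues are nonnegative.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # symmetric, eigenvalues 1 and 3
eigs = np.linalg.eigvalsh(2 * A)        # eigenvalues of the Hessian
print(eigs, bool(np.all(eigs >= 0)))    # [2. 6.] True -> convex
```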

Convexity Preservation

Algebraic Operations

Convex functions are closed under several pointwise algebraic operations, which allows new convex functions to be constructed from existing ones. Specifically, any non-negative linear combination of convex functions is convex: if f and g are convex functions defined on a convex set and \alpha, \beta \geq 0, then h(x) = \alpha f(x) + \beta g(x) is convex. This follows directly from the definition of convexity, since the weighted sum preserves the inequality \alpha f(\theta x + (1-\theta)y) + \beta g(\theta x + (1-\theta)y) \leq \theta (\alpha f(x) + \beta g(x)) + (1-\theta) (\alpha f(y) + \beta g(y)) for \theta \in [0,1], and it extends to any finite non-negative combination. The pointwise supremum of a collection of convex functions is also convex: for convex functions f_i: \mathbb{R}^n \to \mathbb{R}, i \in I (with I finite or infinite, provided the supremum is well-defined), the function h(x) = \sup_{i \in I} f_i(x) satisfies h(\theta x + (1-\theta)y) \leq \theta h(x) + (1-\theta) h(y) because each f_i does and the supremum preserves this inequality. For example, the maximum of the two quadratics f_1(x) = x^2 and f_2(x) = (x-1)^2 is a convex piecewise-quadratic function. Composition with affine functions preserves convexity: if f: \mathbb{R}^m \to \mathbb{R} is convex, A \in \mathbb{R}^{m \times n}, and b \in \mathbb{R}^m, then h(x) = f(Ax + b) is convex on \mathbb{R}^n. This holds because affine maps preserve convex combinations: A(\theta x + (1-\theta)y) + b = \theta (Ax + b) + (1-\theta) (Ay + b), so h(\theta x + (1-\theta)y) = f(\theta (Ax + b) + (1-\theta) (Ay + b)) \leq \theta f(Ax + b) + (1-\theta) f(Ay + b) = \theta h(x) + (1-\theta) h(y). A practical illustration is \|Ax + b\|_2, which is convex because the Euclidean norm is. The perspective function provides another convexity-preserving operation: for a convex function f: \mathbb{R}^n \to \mathbb{R}, the perspective is defined as g(x,t) = t f(x/t) for t > 0, x \in \mathbb{R}^n, and extended appropriately; this g is jointly convex in (x,t) over \mathbb{R}^n \times \mathbb{R}_{++}. For instance, the perspective of the squared Euclidean norm f(x) = \|x\|_2^2 gives g(x,t) = \|x\|_2^2 / t, which is convex and useful in optimization problems such as second-order cone programming; the proof proceeds via the definition or via preservation of the epigraph. However, convexity is not preserved under negative scaling or subtraction. If \alpha < 0, then \alpha f is concave whenever f is convex, as the inequality reverses; for example, g(x) = -x^2 is concave, since its second derivative is negative. Similarly, the difference of two convex functions, such as h(x) = x^2 - 2x^2 = -x^2, need not be convex; in this example it is concave. These counterexamples show that preservation requires non-negative coefficients in linear combinations.
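
These closure rules can be exercised numerically. The following sketch is a minimal illustration assuming NumPy; the sample sizes and the 1e-12 tolerance are arbitrary choices. It builds a non-negative combination, a pointwise maximum, and an affine precomposition from the quadratics above and checks the convexity inequality on random samples.

```python
import numpy as np

rng = np.random.default_rng(1)

f1 = lambda x: x ** 2                         # convex
f2 = lambda x: (x - 1.0) ** 2                 # convex
h_sum = lambda x: 3.0 * f1(x) + 2.0 * f2(x)   # nonnegative combination
h_max = lambda x: np.maximum(f1(x), f2(x))    # pointwise maximum
h_aff = lambda x: f1(2.0 * x - 5.0)           # affine precomposition

xs, ys, ts = (rng.normal(size=2000), rng.normal(size=2000),
              rng.uniform(size=2000))
for h in (h_sum, h_max, h_aff):
    worst = max(h(t * x + (1 - t) * y) - (t * h(x) + (1 - t) * h(y))
                for x, y, t in zip(xs, ys, ts))
    print(worst <= 1e-12)  # True: no sampled violation of convexity
```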

Set and Function Operations

The infimal convolution of two extended real-valued functions f and g on \mathbb{R}^n is defined by (f \oplus g)(x) = \inf_{y \in \mathbb{R}^n} \left\{ f(y) + g(x - y) \right\}, with the value +\infty where the infimand is identically +\infty. If f and g are convex, then f \oplus g is convex, as its strict epigraph is the Minkowski sum of the strict epigraphs of f and g, both of which are convex sets. The Legendre-Fenchel conjugate (or convex conjugate) of a function f: \mathbb{R}^n \to \overline{\mathbb{R}} is given by f^*(y) = \sup_{x \in \mathbb{R}^n} \left\{ y \cdot x - f(x) \right\}, taking the value +\infty where the supremum is unbounded above. The conjugate f^* is always a convex function (possibly improper), regardless of the properties of f, because it is the pointwise supremum of the affine functions y \mapsto y \cdot x - f(x). If f is convex on a set C \subseteq \mathbb{R}^n, then its restriction f|_D to any convex subset D \subseteq C is also convex, since the defining inequality holds on the smaller set by direct substitution. A proper convex function on \mathbb{R}^n is an extended real-valued function f that is convex, not identically +\infty, and satisfies f(x) > -\infty for all x. Such functions are obtained by extending a finite-valued convex function defined on a nonempty convex set to +\infty outside that set; this extension preserves convexity because the epigraph is unchanged: no point above an exterior point belongs to it. The convex hull (or convex envelope) of a function f: \mathbb{R}^n \to \overline{\mathbb{R}}, denoted \mathrm{conv}\, f, is the pointwise supremum of all convex functions g such that g \leq f everywhere, i.e., the greatest convex minorant of f. By construction, \mathrm{conv}\, f is convex, and it coincides with the biconjugate f^{**} whenever it is proper and lower semicontinuous.
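
A concrete instance of infimal convolution has a well-known closed form: convolving |x| with \tfrac{1}{2}x^2 yields the Huber function. The sketch below is a minimal illustration assuming NumPy; the grid and test points are arbitrary choices, and agreement holds up to grid resolution.

```python
import numpy as np

# (f (+) g)(x) = inf_y { f(y) + g(x - y) } with f = |.| and g = 0.5*(.)^2.
ys = np.linspace(-5.0, 5.0, 2001)       # discretized search grid for y

def infconv(x):
    return float(np.min(np.abs(ys) + 0.5 * (x - ys) ** 2))

def huber(x):
    # Known closed form of the infimal convolution above.
    return 0.5 * x ** 2 if abs(x) <= 1.0 else abs(x) - 0.5

for x in (-3.0, -0.5, 0.0, 0.7, 2.0):
    print(x, infconv(x), huber(x))      # grid values track the closed form
```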

Advanced Variants

Strongly Convex Functions

A strongly convex function represents a subclass of convex functions with enhanced curvature properties, parameterized by a positive constant \mu > 0. Formally, a function f: \mathbb{R}^n \to \mathbb{R} is \mu-strongly convex if, for all x, y \in \mathbb{R}^n and \lambda \in [0, 1], f(\lambda x + (1 - \lambda) y) \leq \lambda f(x) + (1 - \lambda) f(y) - \frac{\mu}{2} \lambda (1 - \lambda) \|x - y\|^2. This inequality strengthens the standard convexity condition by a quadratic penalty term that measures the deviation from linearity, ensuring the function grows at least quadratically away from any point. An equivalent characterization combines standard convexity with a quadratic lower bound on the function's growth: if f is differentiable, then f is \mu-strongly convex if and only if f(y) \geq f(x) + \nabla f(x)^\top (y - x) + \frac{\mu}{2} \|y - x\|^2 holds for all x, y \in \mathbb{R}^n, so the function is bounded below at every point by its tangent hyperplane plus a positive quadratic term. Under differentiability assumptions, strong convexity of f implies that its gradient \nabla f is strongly monotone, meaning (\nabla f(y) - \nabla f(x))^\top (y - x) \geq \mu \|y - x\|^2 for all x, y \in \mathbb{R}^n. This property ensures that the gradient provides a reliable and well-scaled direction of descent, distinguishing it from the mere monotonicity of general convex gradients. Strongly convex functions possess a unique global minimizer, as the quadratic growth term rules out flat regions and guarantees strict separation from the minimum value. In optimization algorithms, this leads to accelerated convergence rates; for instance, gradient descent on a \mu-strongly convex function with L-Lipschitz gradient converges linearly, with contraction factor 1 - \mu / L per iteration. Similarly, Newton's method exhibits quadratic convergence near the unique minimizer under standard additional smoothness assumptions.
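
The linear rate is easy to observe directly. The sketch below is a minimal illustration assuming NumPy; the diagonal matrix is an arbitrary choice with \mu = 1 and L = 10. It runs gradient descent with step size 1/L on the strongly convex quadratic f(x) = \tfrac{1}{2} x^\top A x and prints the iterate norm, which contracts by roughly the predicted factor 1 - \mu/L per step.

```python
import numpy as np

A = np.diag([1.0, 10.0])         # eigenvalues in [mu, L] with mu=1, L=10
mu, L = 1.0, 10.0
x = np.array([5.0, 5.0])
for k in range(31):
    x = x - (1.0 / L) * (A @ x)  # gradient of 0.5 * x^T A x is A x
    if k % 10 == 0:
        print(k, np.linalg.norm(x))   # geometric decay of the iterate norm

print("predicted contraction per step:", 1 - mu / L)   # 0.9
```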

Uniformly Convex Functions

A function f: X \to \mathbb{R} defined on a convex subset of a normed vector space X is uniformly convex with modulus \phi if there exists a convex, increasing function \phi: [0, \infty) \to [0, \infty) with \phi(0) = 0 such that for all x, y \in \dom f and \lambda \in [0, 1], f(\lambda x + (1 - \lambda) y) \leq \lambda f(x) + (1 - \lambda) f(y) - \lambda (1 - \lambda) \phi(\|x - y\|). This notion was introduced by Levitin and Polyak in the context of constrained optimization problems. The modulus \phi quantifies the degree of uniformity in the convexity inequality, providing a lower bound on the deviation from linearity that depends only on the distance between points. Uniform convexity for functions parallels the modulus of convexity in Banach spaces: a space is uniformly convex if its norm satisfies an analogous inequality governed by the modulus \delta(\epsilon) = \inf \{ 1 - \|\frac{x + y}{2}\| : \|x\| = \|y\| = 1, \|x - y\| \geq \epsilon \}. For functions, the modulus \phi plays the same role, ensuring that the function's "curvature" is uniformly bounded away from zero in a distance-dependent manner across the domain. Strong convexity corresponds to the special case \phi(t) = \frac{\mu}{2} t^2 for some \mu > 0; in general, the flexible choice of \phi allows uniform convexity to cover broader classes of functions than those with quadratic lower bounds. Uniformly convex functions have unique minimizers when minimizers exist, since the definition implies strict convexity. Moreover, minimizers are stable under perturbations: small changes in the function or domain lead to small shifts in the location of the minimizer, with the stability quantified by the modulus \phi. This property is particularly valuable in optimization, ensuring robustness of algorithms such as gradient descent. Examples of modulus functions include the quadratic \phi(t) = \frac{\mu}{2} t^2, which recovers strong convexity, and power-type moduli \phi(t) = c t^p for p \geq 2 and c > 0, applicable to functions such as f(x) = \|x\|^p on \mathbb{R}^n. These moduli capture varying degrees of uniformity, with larger p giving stronger control at large distances but weaker control near zero.
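
A non-quadratic modulus can be probed empirically. The sketch below is a minimal illustration assuming NumPy; the sampling ranges and sizes are arbitrary. It estimates the smallest sampled ratio between the convexity gap of f(x) = x^4 and \lambda(1-\lambda)|x - y|^4; a ratio bounded away from zero over the samples is consistent with a modulus of the form \phi(t) = c\,t^4 (this is empirical evidence, not a proof).

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x ** 4
ratios = []
for _ in range(100_000):
    x, y = rng.uniform(-3.0, 3.0, size=2)
    t = rng.uniform(0.01, 0.99)
    if abs(x - y) < 1e-6:
        continue                      # skip near-degenerate pairs
    gap = t * f(x) + (1 - t) * f(y) - f(t * x + (1 - t) * y)
    ratios.append(gap / (t * (1 - t) * abs(x - y) ** 4))
print(min(ratios))                    # empirically bounded away from zero
```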

Examples

One-Dimensional Examples

A classic example of a convex function in one dimension is the quadratic function f(x) = x^2, defined on \mathbb{R}. This function is strictly convex because its second derivative f''(x) = 2 > 0 everywhere, satisfying the second-order condition for convexity. Another fundamental example is the absolute value function f(x) = |x|, defined on \mathbb{R}. It is convex but non-differentiable at x = 0, where its subdifferential is the interval [-1, 1]; convexity follows directly from the triangle inequality, and more generally |x|^p is convex for every p \geq 1. The exponential function f(x) = e^x, defined on \mathbb{R}, provides another illustration: its second derivative f''(x) = e^x > 0 for all x confirms strict convexity via the second-order test. On the positive reals, the negative logarithm f(x) = -\log x (natural logarithm) is strictly convex, since f''(x) = 1/x^2 > 0 for x > 0. To contrast, consider f(x) = x^4 - 2x^2 on \mathbb{R}, which is non-convex: its second derivative f''(x) = 12x^2 - 4 is negative for |x| < 1/\sqrt{3}, violating the non-negativity required for convexity.
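
The sign checks above can be reproduced symbolically. The sketch below is a minimal illustration assuming SymPy is installed; it computes the second derivative of each example, whose sign on the stated domain decides convexity.

```python
import sympy as sp

x = sp.symbols('x', real=True)
examples = {
    "x**2": x ** 2,                        # f'' = 2 >= 0: strictly convex
    "exp(x)": sp.exp(x),                   # f'' = exp(x) > 0: strictly convex
    "-log(x)": -sp.log(x),                 # f'' = 1/x**2 > 0 on (0, oo)
    "x**4 - 2*x**2": x ** 4 - 2 * x ** 2,  # f'' = 12*x**2 - 4 changes sign
}
for name, f in examples.items():
    print(name, "f'' =", sp.simplify(sp.diff(f, x, 2)))
```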

Multidimensional Examples

In multivariable calculus, the Euclidean norm f(\mathbf{x}) = \|\mathbf{x}\|_2 = \sqrt{\sum_{i=1}^n x_i^2} for \mathbf{x} \in \mathbb{R}^n is a convex function. Its convexity follows from the triangle inequality \|\mathbf{x} + \mathbf{y}\|_2 \leq \|\mathbf{x}\|_2 + \|\mathbf{y}\|_2 together with absolute homogeneity, which jointly make the epigraph convex. Quadratic forms provide another fundamental class of convex functions in multiple dimensions. Consider f(\mathbf{x}) = \mathbf{x}^T A \mathbf{x}, where A is a symmetric positive semidefinite matrix (i.e., all eigenvalues of A are nonnegative). This function is convex because its Hessian matrix is 2A, which is positive semidefinite everywhere, satisfying the second-order condition for convexity. For instance, if A = I (the identity matrix), then f(\mathbf{x}) = \|\mathbf{x}\|_2^2, which is strictly convex. The function f(\mathbf{x}) = \sum_{i=1}^n x_i \log x_i, defined on the positive orthant or the probability simplex \{\mathbf{x} \in \mathbb{R}^n \mid x_i > 0, \sum x_i = 1\}, is convex (it is the negative entropy). Its Hessian is diagonal with entries H_{ii} = 1/x_i > 0 and zero off-diagonal elements, hence positive definite. This function arises in information theory and optimization, where its convexity aids problems such as maximum entropy estimation. The pointwise maximum of convex functions inherits convexity. For example, define f(x, y) = \max\{x^2 + y, y^2 + x\} for (x, y) \in \mathbb{R}^2. Both g_1(x, y) = x^2 + y and g_2(x, y) = y^2 + x are convex (sums of convex quadratics and linear terms), so their maximum f is convex: the epigraph of f is the intersection of the epigraphs of g_1 and g_2, which is convex, and the graph of f has kinks along the set where the two pieces cross. Not all multivariable functions are convex; counterexamples illustrate the distinction. The bilinear function f(x, y) = x y on \mathbb{R}^2 is neither convex nor concave, as its Hessian matrix \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} has eigenvalues +1 and -1 and is therefore indefinite. This saddle-like behavior violates the convexity condition: along the segment from (1,-1) to (-1,1), both endpoints give f = -1 while the midpoint (0,0) gives f = 0, so the function value at the midpoint exceeds the average of the endpoint values.
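
Both the eigenvalue test and the explicit violation are easy to verify. The sketch below is a minimal illustration assuming NumPy; it confirms that the Hessian of f(x, y) = xy is indefinite and exhibits the failing segment from (1, -1) to (-1, 1).

```python
import numpy as np

H = np.array([[0.0, 1.0],
              [1.0, 0.0]])            # Hessian of f(x, y) = x * y
print(np.linalg.eigvalsh(H))          # [-1.  1.]: indefinite, so not convex

f = lambda p: p[0] * p[1]
a, b = np.array([1.0, -1.0]), np.array([-1.0, 1.0])
mid = 0.5 * (a + b)
print(f(mid), 0.5 * (f(a) + f(b)))    # 0.0 > -1.0: convexity inequality fails
```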
