
Supporting hyperplane

In convex analysis, a supporting hyperplane of a convex set C \subseteq \mathbb{R}^n at a boundary point x_0 \in \partial C is defined as the affine hyperplane \{x \mid a^T x = a^T x_0\}, where a \neq 0 is a vector such that a^T x \leq a^T x_0 for all x \in C, ensuring the entire set lies in one of the closed half-spaces bounded by the hyperplane. This geometric concept captures the idea of a "tangent" hyperplane that touches the set without crossing into its interior, providing a linear boundary that respects the set's convexity. The existence of supporting hyperplanes is guaranteed by the supporting hyperplane theorem, which states that for any nonempty convex set C and any point x_0 \in \partial C, there exists a nonzero a such that the hyperplane \{x \mid a^T x = a^T x_0\} supports C at x_0. This result, a cornerstone of convex analysis, follows from the more general separating hyperplane theorem and the Hahn-Banach extension principle, and it allows convex sets to be characterized via their supporting hyperplanes. Supporting hyperplanes play a fundamental role in optimization, duality theory, and algorithms like the simplex method, where they define feasible regions and dual variables. In finite-dimensional spaces, the intersection of a supporting hyperplane with the set yields a face of the set, linking the concept to polyhedral approximations and to subdifferentials of convex functions, where a supporting hyperplane to the epigraph at a point corresponds to the existence of a subgradient. For sets with nonempty interior, separation results further ensure strict separation from exterior points, underscoring the separation properties central to convex optimization problems.

Definition and Fundamentals

Formal Definition

In finite-dimensional Euclidean space \mathbb{R}^n, consider a nonempty convex set C \subseteq \mathbb{R}^n. A hyperplane H is said to support C at a boundary point x_0 \in \partial C if H passes through x_0 and C lies entirely in one of the closed half-spaces bounded by H. Formally, the hyperplane H can be expressed as H = \{ x \in \mathbb{R}^n \mid \langle a, x \rangle = b \}, where \langle \cdot, \cdot \rangle denotes the standard Euclidean inner product, a \in \mathbb{R}^n is a nonzero normal vector (a \neq 0), and b \in \mathbb{R} is a scalar. The supporting condition requires that x_0 lies on H, so \langle a, x_0 \rangle = b, and \langle a, x \rangle \leq b for all x \in C. The inequality \langle a, x \rangle \leq b ensures that C is contained in the closed half-space \{ x \in \mathbb{R}^n \mid \langle a, x \rangle \leq b \}, with equality holding at least at x_0. The normal vector a defines the orientation of H, pointing toward the side opposite to where C lies, and the choice of a is not unique, as any positive scalar multiple yields an equivalent hyperplane. This algebraic formulation establishes the foundational role of supporting hyperplanes in convex geometry, relying on the prerequisite that C is convex to guarantee the set remains on one side of H.
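The algebraic condition above is easy to check numerically. The following sketch (with illustrative names, using the unit disk as C and the candidate a = (1, 0)) samples points of the set and verifies \langle a, x \rangle \leq b with equality at x_0:

```python
import numpy as np

# Minimal numeric check of the supporting condition (names are illustrative):
# C is the unit disk, x0 = (1, 0) a boundary point, and the candidate
# hyperplane is {x : <a, x> = b} with a = (1, 0), b = <a, x0>.
a = np.array([1.0, 0.0])
x0 = np.array([1.0, 0.0])
b = float(a @ x0)

# Sample points of C by projecting Gaussian draws into the disk, then verify
# <a, x> <= b for every sample, with equality at x0 itself.
rng = np.random.default_rng(0)
pts = rng.normal(size=(10000, 2))
pts = pts / np.maximum(np.linalg.norm(pts, axis=1, keepdims=True), 1.0)
supports = bool(np.all(pts @ a <= b + 1e-12))
touches = abs(float(a @ x0) - b) < 1e-12
```

Sampling can only falsify, not prove, the supporting property, but it illustrates how the single inequality encodes "the whole set on one side."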

Geometric Interpretation

A supporting hyperplane provides a geometric picture of how a convex set behaves near its boundary: the hyperplane touches the set at one or more points while the entire set remains contained in one of the closed half-spaces it defines. This "touching" configuration intuitively represents the hyperplane as a tangent-like surface that bounds the set without penetrating its interior, much like a flat board resting against a solid object from the outside. In this sense, the hyperplane acts as a barrier, with the convex set lying on one side, highlighting the set's convexity by preventing any points from crossing to the other half-space. To build intuition, consider the two-dimensional case of a supporting line for a convex region, such as a disk or polygon, where the line contacts the region at a point or along an edge and keeps the region entirely on one side. The concept extends naturally to higher dimensions: a supporting hyperplane in three or more dimensions serves the same role, separating the set from the opposite half-space while touching it at boundary points, thus generalizing the planar notion of "support" to volumetric or higher-dimensional boundaries. The extension preserves the geometric essence: the hyperplane defines a flat, linear interface that constrains the set's extent in a particular direction. Geometrically, "support" embodies the hyperplane as a linear bound or "floor" for the set, giving its maximal extent in the direction perpendicular to the hyperplane. This linear bounding illustrates how the hyperplane delineates the set's frontier, offering a foundational way to approximate or enclose the set with affine structures; for instance, multiple supporting hyperplanes can collectively outline the set's shape, akin to the facets of a polytope. A key distinction arises in the interaction between supporting hyperplanes and boundary points, particularly regarding exposed and non-exposed points. An exposed point is a boundary point at which some supporting hyperplane intersects the set solely at that single point, uniquely isolating it as a "protruding" vertex-like feature.
In contrast, non-exposed points, while still on the boundary, are touched only by supporting hyperplanes that intersect the set along a larger face or edge, meaning the hyperplane does not uniquely determine the point but supports a whole face. This difference underscores how supporting hyperplanes reveal the facial structure of convex sets, with exposed points corresponding to single-point contacts and non-exposed ones to broader supporting contacts.

Key Properties

Existence Conditions

In convex analysis, a fundamental existence condition for supporting hyperplanes is that every point of a nonempty convex set outside its relative interior admits at least one such hyperplane; that is, existence holds for points x \in C \setminus \operatorname{ri}(C), where \operatorname{ri}(C) denotes the relative interior of C. For a convex set C \subseteq \mathbb{R}^n and such a point x \in C, there exists a nonzero vector a \in \mathbb{R}^n and a scalar b \in \mathbb{R} such that \langle a, x \rangle = b and \langle a, y \rangle \leq b for all y \in C. This result relies on the separating hyperplane theorem applied to the point x and the relative interior of C, ensuring the hyperplane touches C at x while containing the set on one side. Supporting hyperplanes do not exist at interior points of a convex set, as any hyperplane passing through an interior point x \in \operatorname{int}(C) intersects the set on both sides due to the openness of the interior. Thus, existence is confined to the relative boundary C \setminus \operatorname{ri}(C), where the hyperplane touches the set without crossing it. For non-closed sets, existence holds for points in C that are not in the relative interior, though the topological boundary \partial C = \overline{C} \setminus \operatorname{int}(C) may include points outside C. In finite-dimensional spaces, existence at boundary points is guaranteed through compactness arguments, particularly the compactness of the unit sphere in \mathbb{R}^n. To construct the supporting hyperplane, consider a sequence of points approaching the boundary point from the exterior; separating hyperplanes for these points have unit normals on the compact unit sphere, allowing extraction of a convergent subsequence whose limit yields the supporting hyperplane via continuity of the inner product. This finite-dimensional compactness ensures the argument holds without additional topological assumptions, in contrast to infinite-dimensional settings where further conditions, such as nonempty interior or reflexivity, may be required.
Local supporting hyperplanes, defined in a neighborhood of the boundary point, can be extended to global ones by taking limits of separating hyperplanes from nearby exterior points, leveraging the same compactness on the unit sphere to form a hyperplane that supports the entire set. This limit process bridges local geometric properties to global existence, ensuring that the supporting hyperplane separates the convex set from points outside its closure.
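The limit construction can be seen concretely for the unit disk, where the separating normal from an exterior point y is y/\|y\| (a fact specific to the disk, used here as an assumed closed form):

```python
import numpy as np

# Sketch of the limit argument for C = unit disk and x0 = (1, 0): separating
# normals from exterior points y_k -> x0 (for the disk, the projection of y_k
# onto C is y_k / ||y_k||, so the separating normal is a_k = y_k / ||y_k||)
# converge to the supporting normal a = x0.
x0 = np.array([1.0, 0.0])
normals = []
for k in range(1, 200):
    y_k = x0 + np.array([1.0 / k, 1.0 / k])    # exterior points approaching x0
    normals.append(y_k / np.linalg.norm(y_k))  # unit separating normal

a = x0                                         # analytic limit of the a_k
converged = bool(np.linalg.norm(normals[-1] - a) < 1e-2)

# The limiting hyperplane {x : <a, x> = <a, x0>} supports the whole disk.
theta = np.linspace(0.0, 2.0 * np.pi, 1000)
boundary = np.c_[np.cos(theta), np.sin(theta)]
supported = bool(np.all(boundary @ a <= float(a @ x0) + 1e-9))
```

Compactness of the unit circle is what guarantees, in general, that such a sequence of unit normals has a convergent subsequence.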

Uniqueness and Multiplicity

For convex sets with differentiable boundary, the supporting hyperplane at a boundary point x \in \partial C is unique and coincides with the tangent hyperplane at that point. This occurs because differentiability of the boundary singles out a unique normal direction satisfying the supporting condition C \subseteq \{ z \mid \nabla f(x)^T (z - x) \leq 0 \}, where the boundary is locally represented by a differentiable function f. Such uniqueness characterizes smooth portions of the boundary, distinguishing them from non-smooth features where multiple supporting hyperplanes may exist. At non-smooth points, such as corners or edges of a polytope, multiple supporting hyperplanes can touch the set at the same point. For polyhedral sets, defined by finitely many linear inequalities C = \{ x \mid A x \leq b \}, a vertex x lies at the intersection of at least d facets in \mathbb{R}^d, and the outward normals of those facets generate a cone of directions. Any nonzero nonnegative combination of these normals provides a valid supporting hyperplane, resulting in a continuum of supporting hyperplanes. For example, consider the unit square C = [0,1]^2 at the corner x = (0,0); any nonzero normal in the cone \{ (a,b) \mid a \geq 0, b \geq 0 \} defines a supporting hyperplane \{ z \mid a z_1 + b z_2 = 0 \} containing x with C on the nonnegative side. A strictly supporting hyperplane at a point x \in C is defined by a normal a \neq 0 such that a^T z < a^T x for all z \in C with z \neq x, ensuring the hyperplane intersects C only at x. This strict inequality exposes x as an exposed point of C, a subset of the extreme points at which a supporting hyperplane identifies x as the sole contact point. Exposed points arise in sets with nonempty interior and are particularly relevant at the vertices of polyhedra, where strict support requires the normal to lie in the relative interior of the normal cone. The multiplicity or uniqueness of supporting hyperplanes at a point x \in C is captured by the normal cone N_C(x) = \{ g \mid g^T (y - x) \leq 0, \ \forall y \in C \}, which consists of all outward normals to supporting hyperplanes at x.
This cone is a single ray (up to positive scaling) at smooth points, yielding uniqueness, but is a polyhedral cone of positive dimension at corners, allowing multiplicity. For convex functions f, the subdifferential \partial f(x) relates analogously via the epigraph: \partial f(x) = \{ g \in \mathbb{R}^n \mid (g, -1) \in N_{\operatorname{epi} f}(x, f(x)) \}, giving the set of subgradients that define supporting hyperplanes to the epigraph.
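The normal-cone characterization can be checked by sampling. The sketch below (with illustrative names, and using the normal-cone orientation g^T (y - x_0) \leq 0) verifies that at the corner (0, 0) of the unit square every g in the nonpositive orthant supports the set, while a direction outside that cone does not:

```python
import numpy as np

# Sketch: for the unit square C = [0,1]^2 the normal cone at the corner
# x0 = (0, 0) is the nonpositive orthant {g : g <= 0}, so supporting
# hyperplanes at a corner are non-unique.
rng = np.random.default_rng(1)
square = rng.uniform(0.0, 1.0, size=(5000, 2))   # samples of C
x0 = np.zeros(2)

def in_normal_cone(g, pts, x0, tol=1e-12):
    """Check the normal-cone condition g^T (y - x0) <= 0 on sampled y."""
    return bool(np.all((pts - x0) @ g <= tol))

# Every g in the nonpositive orthant supports C at the corner ...
cone_ok = all(in_normal_cone(-rng.uniform(0.1, 1.0, 2), square, x0)
              for _ in range(20))
# ... while a direction outside the cone does not.
outside_fails = not in_normal_cone(np.array([1.0, -1.0]), square, x0)
```

The sign convention here matches the normal cone N_C(x); the example in the text uses the opposite orientation, which describes the same family of hyperplanes.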

Supporting Hyperplane Theorem

Theorem Statement

The supporting hyperplane theorem asserts that if C \subseteq \mathbb{R}^n is a nonempty convex set and x_0 is a boundary point of C, then there exists a nonzero vector a \in \mathbb{R}^n such that a^\top x \leq a^\top x_0 for all x \in C, with the hyperplane H = \{ x \in \mathbb{R}^n : a^\top x = a^\top x_0 \} supporting C at x_0. This result follows from the separating hyperplane theorem applied to separate x_0 from the interior of C. In more general settings, the theorem extends to locally convex topological vector spaces: if C is a convex set with nonempty interior and x_0 lies on the boundary of C, then there exists a continuous linear functional f on the space, not identically zero on the affine hull of C, such that f(x) \leq f(x_0) for all x \in C. The finite-dimensional version of the theorem originates from Minkowski's work on convex bodies in the early 1900s, with subsequent extensions via the Hahn-Banach theorem in the late 1920s and 1930s generalizing it to infinite dimensions.

Proof Outline

The proof of the supporting hyperplane theorem relies on the geometric form of the Hahn-Banach theorem, which guarantees the existence of a separating hyperplane between a convex set and a point not in its interior, thereby constructing a supporting hyperplane at a boundary point x_0 of a convex set C. Assume C \subseteq \mathbb{R}^n (or a normed space) is convex with nonempty interior (for simplicity; the general case uses the relative interior), and x_0 \in \partial C. Since x_0 lies on the boundary, it is not an interior point, which permits separating it from the interior of C. To initiate the construction, translate the set so that x_0 = 0 without loss of generality, and consider the convex set K = C - x_0 with 0 \in \partial K; select a point y \in \operatorname{int}(K) to define a base for separation. The normal vector a to the supporting hyperplane is constructed by maximizing a linear functional over K. Specifically, define a preliminary linear functional f on the one-dimensional subspace spanned by a direction vector (e.g., x_0 - y) such that f bounds the values on K from above, then apply the Hahn-Banach theorem to extend f to a continuous linear functional \alpha on the entire space, dominated by the Minkowski gauge p_K(z) = \inf\{\lambda > 0 \mid z \in \lambda K\} of K, ensuring \alpha(z) \leq p_K(z) for all z. Normalizing appropriately yields the nonzero vector a such that \sup_{z \in K} \langle a, z \rangle = \langle a, 0 \rangle = 0. To verify the supporting property, note that the extension guarantees \langle a, x - x_0 \rangle \leq 0 for all x \in C, so the inequality holds with equality at x_0 and the set C lies in the half-space \{x \mid \langle a, x \rangle \leq \langle a, x_0 \rangle \}.
For non-closed convex sets, apply the theorem to the closure \overline{C}, which is closed and convex containing C, to obtain a supporting hyperplane at x_0 \in \overline{C}; then, use continuity and limiting arguments on sequences in C approaching boundary points to confirm that the same hyperplane supports C, since C \subseteq \overline{C} implies the half-space inequality persists.
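The domination step can be made concrete in a case where the Minkowski gauge has a closed form. The sketch below specializes to K the unit ball of \mathbb{R}^3 (so p_K(z) = \|z\|, an assumption specific to this example) and checks that any unit-normal linear functional is dominated by the gauge, which is exactly the bound the Hahn-Banach extension must preserve:

```python
import numpy as np

# Sketch of the domination step, specialized to K = unit ball in R^3 so the
# Minkowski gauge is p_K(z) = ||z||: any linear functional alpha(z) = <a, z>
# with ||a|| = 1 satisfies alpha(z) <= p_K(z), by the Cauchy-Schwarz
# inequality.
rng = np.random.default_rng(2)
a = rng.normal(size=3)
a /= np.linalg.norm(a)                    # unit normal of the functional

zs = rng.normal(size=(10000, 3))
gauge = np.linalg.norm(zs, axis=1)        # p_K for the unit ball
dominated = bool(np.all(zs @ a <= gauge + 1e-12))
```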

Examples and Illustrations

Low-Dimensional Cases

In two dimensions, consider a closed disk of radius r centered at the origin, defined as D = \{ (x, y) \in \mathbb{R}^2 : x^2 + y^2 \leq r^2 \}. At any boundary point p = (r \cos \theta, r \sin \theta), there exists a unique supporting line, which is tangent to the disk at p. This line has the equation x \cos \theta + y \sin \theta = r, with outward normal (\cos \theta, \sin \theta), ensuring the entire disk lies on one side of the line. For a specific case, take the unit disk (r = 1) at the point (1, 0). The supporting line is x = 1, obtained from the general form with \theta = 0, and it touches the disk only at this point due to the strict curvature of the boundary. In contrast, for a convex polygon in two dimensions, such as a triangle with vertices at (0,0), (1,0), and (0,1), the supporting line at a vertex like (0,0) is not unique. Any line through (0,0) with outward normal in the cone of directions between the normals of the adjacent edges (specifically, normals (a, b) with a \leq 0, b \leq 0, normalized so that a + b = -1) serves as a supporting line, with the polygon lying entirely on one side. This multiplicity arises because the vertex is a corner where the boundary is not smooth. Extending to three dimensions, the closed unit ball B = \{ (x, y, z) \in \mathbb{R}^3 : x^2 + y^2 + z^2 \leq 1 \} has a unique supporting plane at any boundary point q with \|q\|_2 = 1, given by \langle q, (x, y, z) \rangle = 1. For instance, at (1, 0, 0), the plane is x = 1, touching the ball solely at that point. For a polyhedron, such as the cube C = \{ (x, y, z) \in \mathbb{R}^3 : |x| \leq 1, |y| \leq 1, |z| \leq 1 \}, the supporting plane at a point in the relative interior of a face, say (1, 0, 0) on the face x = 1, is uniquely the plane x = 1 itself. This plane supports the entire flat face, containing infinitely many points of the cube, in contrast to the single-point contact for the ball. At vertices or edges, multiple supporting planes are possible, reflecting the polyhedral structure.
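The two planar examples above can be verified numerically; this sketch checks the unique supporting line of the disk at (1, 0) and a few normals from the cone at the triangle's vertex (the specific normals chosen are illustrative):

```python
import numpy as np

# Numeric sketch of the 2-D examples: the unique supporting line x = 1 of the
# unit disk at (1, 0), and the cone of supporting-line normals at the vertex
# (0, 0) of the triangle with vertices (0,0), (1,0), (0,1).
theta = np.linspace(0.0, 2.0 * np.pi, 2000)
disk_boundary = np.c_[np.cos(theta), np.sin(theta)]
disk_ok = bool(np.all(disk_boundary[:, 0] <= 1.0 + 1e-12))  # disk in {x <= 1}

rng = np.random.default_rng(3)
weights = rng.dirichlet(np.ones(3), size=3000)   # random convex combinations
tri = weights @ np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

# Any normal (a, b) with a <= 0, b <= 0 gives a supporting line at (0, 0):
# <(a, b), y> <= 0 for every point y of the triangle.
normals = [np.array([-1.0, 0.0]), np.array([0.0, -1.0]), np.array([-1.0, -2.0])]
tri_ok = all(bool(np.all(tri @ n <= 1e-12)) for n in normals)
```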

Practical Illustrations in Optimization

In linear programming, the feasible region is a polytope formed by the intersection of half-spaces corresponding to linear inequality constraints, such as \{x \mid Ax \leq b, x \geq 0\}, whose boundaries are hyperplanes. At an optimal vertex of this polytope, the supporting hyperplanes associated with the active constraints (those inequalities that hold with equality) define the facets touching the optimum, ensuring that the objective function's level sets do not cross into the feasible region beyond that point. For instance, in standard form linear programs, these active constraints provide the geometric support that certifies optimality, as the objective gradient is a nonnegative combination of the normals to these hyperplanes. In quadratic programming, which minimizes a convex quadratic objective subject to linear constraints, the Karush-Kuhn-Tucker (KKT) conditions characterize the optimum as a point where the gradient of the objective is balanced by the gradients of the active constraints via nonnegative multipliers. Geometrically, this balance implies that the hyperplane defined by the objective gradient at the KKT point serves as a supporting hyperplane to the feasible set, with the multipliers indicating the "force" each active constraint exerts to keep the solution feasible. The supporting hyperplane thus delineates the optimum as the point where the objective's level curves are tangent to the constraint boundaries, ensuring no feasible direction improves the objective. A simple numerical illustration arises in the one-dimensional problem of minimizing x subject to x \geq 0; the optimum occurs at x = 0, where the hyperplane x = 0 supports the feasible set [0, \infty) at the boundary point, and the objective gradient (which is 1) points into the feasible region, satisfying the optimality condition \nabla f(0)^T (y - 0) \geq 0 for all y \geq 0. This basic case extends to higher dimensions, highlighting how supporting hyperplanes certify optimality in constrained settings.
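The active-constraint certificate can be worked out by hand for a tiny LP. The sketch below (problem data made up for illustration) maximizes c^\top x over the triangle \{x \geq 0,\ x_1 + x_2 \leq 1\} and expresses the objective as a nonnegative combination of the active-constraint normals at the optimal vertex:

```python
import numpy as np

# Sketch of the active-constraint certificate for a tiny LP: maximize c^T x
# with c = (2, 1) over {x >= 0, x1 + x2 <= 1}.  The optimum is the vertex
# x* = (1, 0); the active constraints there are x1 + x2 <= 1 (normal (1, 1))
# and -x2 <= 0 (normal (0, -1)).  Optimality is certified by writing c as a
# nonnegative combination of these normals.
c = np.array([2.0, 1.0])
N = np.array([[1.0, 0.0],
              [1.0, -1.0]])          # columns: normals (1, 1) and (0, -1)
lam = np.linalg.solve(N, c)          # multipliers of the two normals
certified = bool(np.all(lam >= 0))

# Cross-check: no feasible point beats x* on the objective.
x_star = np.array([1.0, 0.0])
rng = np.random.default_rng(4)
weights = rng.dirichlet(np.ones(3), size=2000)
samples = weights @ np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
optimal = bool(np.all(samples @ c <= float(c @ x_star) + 1e-9))
```

Here lam = (2, 1): the multipliers are exactly the "forces" the two active constraints exert at the optimum.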
Barrier methods in optimization approximate the boundaries of the feasible region by incorporating a barrier function, such as the logarithmic barrier -\sum_i \log(-g_i(x)) for inequalities g_i(x) \leq 0, which keeps iterates in the interior while allowing them to approach the boundary as the barrier parameter decreases. Near infeasible regions, linearized supporting hyperplanes derived from the constraint gradients can approximate these boundaries, aiding the generation of cutting planes that refine the feasible-set approximation in interior-point and cutting-plane algorithms. This connection supports feasibility checks by using hyperplanes to bound regions close to constraint violations, as seen along the central path of self-concordant barriers.
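For the one-dimensional problem min x subject to x \geq 0 discussed above, the central path has a closed form, which this sketch records (a derivation, not a library implementation):

```python
# Sketch of the central path for min x subject to x >= 0, using the
# logarithmic barrier: the subproblem min_x  x - t*log(x) has the closed-form
# minimizer x(t) = t (set the derivative 1 - t/x to zero), so the path stays
# interior and approaches the boundary point 0 as t -> 0.
def central_path_point(t: float) -> float:
    # argmin of x - t*log(x) over x > 0
    return t

path = [central_path_point(t) for t in (1.0, 0.1, 0.01, 0.001)]
interior = all(p > 0 for p in path)
approaches_boundary = path[-1] < 1e-2
```

As t shrinks, the path converges to the boundary point x = 0 where the supporting hyperplane of the feasible set meets the optimum.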

Applications

In Convex Optimization

In convex optimization, a point x^* in a convex feasible set C is optimal for minimizing a differentiable convex function f if there is a supporting hyperplane to C at x^* with normal -\nabla f(x^*), which ensures f(y) \geq f(x^*) for all y \in C. This geometric criterion follows from the first-order condition for convexity: the hyperplane \{ y \mid \nabla f(x^*)^T (y - x^*) = 0 \} supports C at x^*, with C contained in the half-space \nabla f(x^*)^T (y - x^*) \geq 0, so no feasible descent direction exists. Supporting hyperplanes also underlie the derivation of the Karush-Kuhn-Tucker (KKT) conditions for convex programs with inequality constraints. For a problem minimizing f_0(x) subject to f_i(x) \leq 0 (with f_0, f_i convex and differentiable), the Lagrangian L(x, \lambda) = f_0(x) + \sum_i \lambda_i f_i(x) yields multipliers \lambda \geq 0 via supporting hyperplanes to the set of achievable constraint-objective values at the optimal point, justifying the stationarity condition \nabla f_0(x^*) + \sum_i \lambda_i^* \nabla f_i(x^*) = 0. Under Slater's condition (strict feasibility), these hyperplanes ensure strong duality and that KKT points are global optima, as the supporting hyperplane at the optimal value aligns the primal and dual gradients. In nonsmooth convex optimization, subgradient methods rely on supporting hyperplanes to the epigraph of the objective function. For a convex function f, a subgradient g \in \partial f(x) corresponds to a supporting hyperplane with normal (g, -1) at (x, f(x)) in the epigraph \operatorname{epi} f = \{ (z, t) \mid t \geq f(z) \}, satisfying f(z) \geq f(x) + g^T (z - x) for all z. Algorithms such as subgradient descent iteratively use such hyperplanes to generate step directions, projecting onto the feasible set when constrained, with convergence rates of O(1/\sqrt{T}) after T iterations when minimizing f over a bounded set. The ellipsoid method for convex optimization uses separating hyperplanes (a special case of supporting hyperplanes at boundary points) to achieve polynomial-time bounds.
Given a separation oracle that returns a hyperplane with a^\top y_k > b when the current ellipsoid center y_k violates a constraint, the method updates the ellipsoid by cutting away the infeasible half, keeping the optimum inside a shrinking volume. For convex programs in n variables with bounded domain-size ratio R/r, it requires O(n^2 \log(R/r)) iterations, each costing O(n^2) arithmetic operations, establishing the first polynomial-time solvability result for linear programming.
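The subgradient inequality and the resulting method are easy to demonstrate for the nonsmooth function f(x) = |x| (a minimal sketch, not a production solver):

```python
import numpy as np

# Sketch for f(x) = |x|: a subgradient g at x gives a supporting hyperplane
# to the epigraph, i.e. f(z) >= f(x) + g*(z - x) for all z, and a basic
# subgradient method with diminishing steps 1/sqrt(k) drives the iterate
# toward the minimizer 0.
def f(x):
    return abs(x)

def subgrad(x):
    return float(np.sign(x))       # any g in [-1, 1] is valid at x = 0

# Supporting-hyperplane (subgradient) inequality, checked at x = 0.5:
x_ref, g = 0.5, subgrad(0.5)
zs = np.linspace(-2.0, 2.0, 401)
supports = bool(np.all(np.abs(zs) >= f(x_ref) + g * (zs - x_ref) - 1e-12))

# Subgradient method from x = 2.0 with diminishing steps:
x = 2.0
for k in range(1, 2001):
    x -= subgrad(x) / np.sqrt(k)
converged = abs(x) < 0.1
```

The iterate oscillates around 0 with amplitude bounded by the current step size, which is the typical behavior behind the O(1/\sqrt{T}) rate.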

In Separation Theorems and Duality

Supporting hyperplanes play a central role in separation theorems for convex sets, providing the geometric foundation for distinguishing disjoint convex regions in Euclidean space. The hyperplane separation theorem asserts that if two nonempty convex sets C and D are disjoint, then there exists a hyperplane such that C lies on one side and D on the other, with the hyperplane possibly touching the boundary of one or both sets. This extends the concept of supporting hyperplanes, since the separating hyperplane acts as a supporting one to at least one of the sets at a boundary point. For strict separation, where the sets lie in open half-spaces defined by the hyperplane, additional conditions are required, such as one set being compact and the other closed; under these assumptions, a strictly separating hyperplane exists, ensuring a positive distance between the sets. A key connection arises through Farkas' lemma, which links the solvability of linear inequality systems to the existence of supporting hyperplanes. The lemma states that for a matrix A \in \mathbb{R}^{m \times n} and vector b \in \mathbb{R}^m, either there exists x \geq 0 such that Ax = b, or there exists y \in \mathbb{R}^m such that y^\top A \geq 0 and y^\top b < 0, but not both. Geometrically, if b does not lie in the convex cone generated by the columns of A, then a hyperplane through the origin supports this cone and separates it from b, with normal vector given by y. This equivalence shows how supporting hyperplanes certify the infeasibility of linear systems, forming a cornerstone of separation proofs in finite dimensions. In linear programming, duality is intimately tied to supporting hyperplanes of the primal feasible set. Consider the primal problem \max \{ c^\top x \mid Ax \leq b, x \geq 0 \}; at an optimal point x^*, the objective direction admits a supporting hyperplane to the feasible set with normal c, touching at x^*.
The dual variables \lambda^* and \mu^* (for the inequality and nonnegativity constraints) satisfy \lambda^{*\top} A - \mu^{*\top} = c^\top and \lambda^{*\top} b = c^\top x^*, so a nonnegative combination of the constraints forms this supporting hyperplane. This geometric view explains strong duality: the optimal dual value equals the primal optimum precisely when such a hyperplane aligns the objective with the binding constraints at optimality. Lagrange duality further illustrates the role of supporting hyperplanes in achieving zero duality gap for convex problems. For a problem \min f(x) subject to g_i(x) \leq 0 and h_j(x) = 0, the Lagrangian is L(x, \lambda, \nu) = f(x) + \sum_i \lambda_i g_i(x) + \sum_j \nu_j h_j(x). Strong duality holds if the optimal value f^* admits a nonvertical supporting hyperplane to the set of achievable constraint-objective pairs, with the hyperplane's slope given by the optimal multipliers \lambda^*, \nu^* at a saddle point of the Lagrangian. This alignment ensures the supporting hyperplane touches that set exactly at the optimum, certifying that the dual function attains the primal value without gap.
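The geometric content of Farkas' lemma can be seen in a two-variable case. The sketch below uses the assumed instance A = I, where the column cone is the nonnegative orthant, so both alternatives can be checked directly:

```python
import numpy as np

# Numeric sketch of the Farkas alternative with A = I (column cone = the
# nonnegative orthant): b = (1, 1) is feasible, while for b = (1, -1) the
# vector y = (0, 1) is an infeasibility certificate, and {z : y^T z = 0} is a
# hyperplane through the origin supporting the cone and separating it from b.
A = np.eye(2)
b_in = np.array([1.0, 1.0])
feasible = bool(np.all(b_in >= 0))           # x = b_in solves Ax = b, x >= 0

b_out = np.array([1.0, -1.0])
y = np.array([0.0, 1.0])
certificate = bool(np.all(y @ A >= 0) and float(y @ b_out) < 0)

# The cone lies in the closed half-space {z : y^T z >= 0} ...
rng = np.random.default_rng(5)
cone_pts = rng.uniform(0.0, 1.0, size=(2000, 2))
supports_cone = bool(np.all(cone_pts @ y >= 0))
# ... while b_out lies strictly on the other side.
separated = float(y @ b_out) < 0
```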
