Conic optimization

Conic optimization is a class of problems that generalizes linear programming (LP) by incorporating constraints defined over convex cones, allowing the modeling of a broader range of nonlinear convex problems while maintaining computational tractability. Formally, it seeks to minimize (or maximize) a linear objective function c^\top x subject to the constraints Ax = b and x \in \mathcal{K}, where A is a matrix, b is a vector, and \mathcal{K} is a closed convex cone in a finite-dimensional real vector space. This framework encompasses several important subclasses, including linear programming (when \mathcal{K} is the nonnegative orthant \mathbb{R}^n_+), second-order cone programming (SOCP, using the second-order or Lorentz cone), and semidefinite programming (SDP, with the cone of positive semidefinite matrices).

The modern formulation of conic optimization emerged in the late 1980s as an extension of linear programming. In the 1990s, researchers such as Yu. Nesterov and A. Nemirovski developed the theoretical foundations, introducing self-dual or symmetric cones and demonstrating that certain conic problems could be solved in polynomial time using interior-point methods (IPMs). These advances built on earlier progress in linear and nonlinear programming, with key contributions from F. Alizadeh and D. Goldfarb on SDP and SOCP feasibility and efficiency. Duality plays a central role, with strong duality holding under mild conditions like Slater's, enabling the formulation of primal-dual pairs that facilitate numerical solution and sensitivity analysis.

Key cones in conic optimization include the nonnegative orthant \mathbb{R}^n_+, the second-order cone \mathcal{Q}^{n+1} = \{ (t, x) \in \mathbb{R} \times \mathbb{R}^n : \|x\|_2 \leq t \} for quadratic constraints, and the positive semidefinite cone \mathcal{S}^n_+ of n \times n symmetric matrices with nonnegative eigenvalues, which captures spectral properties and matrix inequalities. More general cones, such as copositive and completely positive cones, extend applicability to nonconvex problems via relaxations, though they often lack efficient polynomial-time solvers. Algorithms primarily rely on IPMs, which exploit the self-concordance of barrier functions for cones like those in SOCP and SDP, achieving high precision in practice; specialized software such as MOSEK, SDPT3, and CSDP implements these methods.

Applications of conic optimization span engineering, finance, and combinatorial optimization, including robust portfolio selection (via SOCP for risk measures like variance), sensor network localization (SDP for semidefinite relaxations), and combinatorial problems such as max-cut or stable set via SDP hierarchies. It also enables sum-of-squares hierarchies for polynomial optimization and global approximations of nonconvex sets, making it a versatile tool for approximation algorithms in theoretical computer science.

Introduction

Definition

Conic optimization is a framework within convex optimization that addresses problems involving linear objectives and constraints defined over convex cones, generalizing classical linear programming to handle a broader class of convex constraints. It formulates optimization tasks where the feasible set is the intersection of an affine subspace and a nonempty closed convex cone, enabling the modeling of nonlinear relationships in a structured, computationally tractable manner. The standard primal form of a conic optimization problem is to minimize the linear objective c^\top x subject to the linear constraints Ax = b and the membership constraint x \in K, where x \in \mathbb{R}^n is the decision variable, A \in \mathbb{R}^{m \times n}, b \in \mathbb{R}^m, c \in \mathbb{R}^n, and K \subseteq \mathbb{R}^n is a closed convex cone. This setup distinguishes conic optimization from linear programming, which emerges as a special case when K is the non-negative orthant \mathbb{R}^n_+. In contrast to quadratic programming, which typically features a quadratic objective or quadratic constraints, conic optimization preserves a linear objective while accommodating more general convex constraints through the cone K. A key motivation for conic optimization is its capacity to represent nonlinear constraints, such as those involving norms or positive semidefiniteness, in a convex form that supports efficient interior-point methods via self-concordant barrier functions. These barriers, introduced by Nesterov and Nemirovski, ensure polynomial-time solvability for problems over cones with suitable barrier parameters, facilitating applications in fields like engineering and finance.
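
The following minimal sketch shows this standard primal form in code, with K taken to be the nonnegative orthant; it assumes the cvxpy modeling library is available, and the problem data are illustrative placeholders rather than drawn from any application.

```python
import cvxpy as cp
import numpy as np

# Data for min c^T x  s.t.  Ax = b, x in K, with K = R^3_+ (illustrative values).
c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])

x = cp.Variable(3)
problem = cp.Problem(cp.Minimize(c @ x), [A @ x == b, x >= 0])
problem.solve()
print(problem.status, problem.value)  # optimal, 1.0 (all mass on the cheapest coordinate)
```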

Historical Background

Conic optimization developed as a generalization of linear programming, providing a framework for optimizing linear functions over convex cones rather than polyhedra. This extension built on the polynomial-time solvability of linear programming, first established by Leonid Khachiyan's 1979 ellipsoid method, which demonstrated that linear programs could be solved efficiently despite earlier pessimistic views on computational complexity.

The foundational advances in conic optimization occurred in the 1980s, with early contributions from Jonathan Borwein and Henry Wolkowicz on facial reduction techniques. These methods addressed infeasibility and degeneracy in convex programs by iteratively reducing the feasible set to its minimal face, enabling more stable and efficient numerical solutions for problems over general convex sets. Borwein and Wolkowicz's work, published in 1981, laid the groundwork for handling pathological cases in optimization, influencing later developments in conic settings.

The modern theory of conic optimization took shape in the late 1980s and early 1990s through the efforts of Yurii Nesterov and Arkadi Nemirovski, who introduced self-concordant barrier functions and interior-point methods adaptable to general convex cones. Their 1994 monograph formalized polynomial-time interior-point algorithms for a broad class of cone programs, shifting the focus from the ellipsoid method's theoretical efficiency to practical, high-performance solvers. Key milestones followed, including Farid Alizadeh's 1991 PhD work and subsequent 1995 paper, which positioned semidefinite programming as a powerful conic extension with applications to combinatorial optimization via interior-point techniques. In 2001, Aharon Ben-Tal and Arkadi Nemirovski's book further formalized conic optimization, emphasizing its role in robust optimization and engineering applications through detailed analysis and algorithms. These developments marked the transition to polynomial-time solvability for diverse cone classes, establishing conic optimization as a cornerstone of convex optimization.

Mathematical Foundations

Convex Cones

A convex cone is a set K \subseteq \mathbb{R}^n that is closed under nonnegative linear combinations, meaning for all x, y \in K and \theta, \lambda \geq 0, the point \theta x + \lambda y \in K. This structure combines the properties of a cone, which is closed under positive scaling, with convexity, ensuring that the set remains invariant under addition and scaling by nonnegative scalars. Common examples include the nonnegative orthant \mathbb{R}^n_+ = \{ x \in \mathbb{R}^n \mid x_i \geq 0, \, i=1,\dots,n \}, which captures componentwise nonnegativity, and the positive semidefinite cone S^n_+ = \{ X \in \mathbb{S}^n \mid X \succeq 0 \}, consisting of symmetric matrices with nonnegative eigenvalues.

Convex cones exhibit several important properties that underpin their utility in optimization. Convexity follows directly from the definition, guaranteeing that any convex combination of points in the cone lies within it. A cone is pointed if it contains no line through the origin, formally K \cap (-K) = \{0\}, which prevents the inclusion of both a vector and its negative except at zero. Solidity requires the cone to have a nonempty interior, \operatorname{int} K \neq \emptyset, enabling the distinction between weak and strict generalized inequalities. The facial structure refers to the exposed faces of the cone, which are subcones arising from intersections with supporting hyperplanes, providing a hierarchical decomposition that aids in analyzing boundaries. A proper cone integrates these attributes by being convex, closed, pointed, and solid.

The dual cone of K is defined as K^* = \{ y \in \mathbb{R}^n \mid y^T x \geq 0 \ \forall x \in K \}, which is itself a closed convex cone representing the set of linear functionals nonnegative on K. Self-dual cones satisfy K = K^*, a symmetry that simplifies duality mappings; notable instances are \mathbb{R}^n_+ and S^n_+, where the inner product aligns with the cone's ordering. Pointedness and solidity are dual properties: the dual of a pointed cone is solid, and vice versa.

In optimization, convex cones define feasible sets through generalized inequalities x \preceq_K y if y - x \in K, ensuring these sets are convex and thus permitting global optimization for convex objectives over them. This convexity facilitates efficient solvability via interior-point methods and guarantees strong duality under conditions like Slater's, where a strictly feasible point exists in the interior of the cone-constrained set.
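
The cone facts above can be checked numerically on small examples. The sketch below, a numerical illustration rather than a proof, verifies PSD-cone membership via eigenvalues and samples the self-duality of the nonnegative orthant; it assumes only numpy.

```python
import numpy as np

rng = np.random.default_rng(1)

# PSD cone membership: X in S^n_+ iff all eigenvalues of X are nonnegative.
M = rng.standard_normal((4, 4))
X = M @ M.T                                      # Gram matrices are always PSD
print(np.all(np.linalg.eigvalsh(X) >= -1e-10))   # True

# Self-duality of the nonnegative orthant: y^T x >= 0 for all x >= 0 iff y >= 0.
y = np.abs(rng.standard_normal(5))               # y in R^5_+
samples = np.abs(rng.standard_normal((1000, 5))) # sample points of R^5_+
print(np.all(samples @ y >= 0))                  # True: y lies in (R^5_+)* = R^5_+
```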

Standard Formulations

The standard formulation of a conic optimization problem in primal form is given by \begin{align*} \min &\quad \mathbf{c}^\top \mathbf{x} \\ \text{s.t.} &\quad A \mathbf{x} = \mathbf{b}, \\ &\quad \mathbf{x} \in K, \end{align*} where \mathbf{x} \in \mathbb{R}^n is the optimization variable, \mathbf{c} \in \mathbb{R}^n, A \in \mathbb{R}^{m \times n}, and \mathbf{b} \in \mathbb{R}^m are problem data, and K \subseteq \mathbb{R}^n is a proper cone (closed, convex, pointed, and with nonempty interior). The corresponding dual problem, derived via Lagrangian duality, takes the form \begin{align*} \max &\quad \mathbf{b}^\top \mathbf{y} \\ \text{s.t.} &\quad A^\top \mathbf{y} + \mathbf{s} = \mathbf{c}, \\ &\quad \mathbf{s} \in K^*, \end{align*} where \mathbf{y} \in \mathbb{R}^m and \mathbf{s} \in \mathbb{R}^n are the dual variables, and K^* = \{ \mathbf{z} \in \mathbb{R}^n \mid \mathbf{z}^\top \mathbf{x} \geq 0 \ \forall \mathbf{x} \in K \} denotes the dual cone of K. This duality links the primal minimization to a maximization over the dual cone, reflecting the conic structure's convexity. Under Slater's condition, which requires the existence of a strictly feasible point \mathbf{x}^\circ \in \mathrm{int}(K) satisfying A \mathbf{x}^\circ = \mathbf{b}, strong duality holds: the primal and dual optimal values are equal, provided the primal value is finite. Infeasibility and unboundedness of the primal can be detected using generalizations of Farkas' lemma to cones. Specifically, the primal is infeasible if there exists \mathbf{y} \in \mathbb{R}^m such that A^\top \mathbf{y} \in K^* and \mathbf{b}^\top \mathbf{y} < 0 (for general cones the converse requires a closedness condition, since weakly infeasible problems admit no such certificate); unboundedness occurs if there exists a recession direction \mathbf{d} \in K with A \mathbf{d} = \mathbf{0} and \mathbf{c}^\top \mathbf{d} < 0. These alternatives provide certificates for solver termination without solving the full problem.
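
As a concrete illustration of this primal-dual pair, the hedged sketch below builds both problems explicitly for K = \mathbb{R}^n_+ and checks that their optimal values agree; cvxpy is assumed, and the random data are constructed so that a strictly feasible primal point exists and the objective is bounded below.

```python
import cvxpy as cp
import numpy as np

np.random.seed(0)
m, n = 2, 4
A = np.random.randn(m, n)
b = A @ np.abs(np.random.randn(n))   # b = A x0 with x0 > 0: Slater's condition holds
c = np.random.rand(n) + 0.5          # positive costs keep the primal bounded below

# Primal: min c^T x  s.t.  Ax = b, x in R^n_+
x = cp.Variable(n)
primal = cp.Problem(cp.Minimize(c @ x), [A @ x == b, x >= 0])

# Dual: max b^T y  s.t.  A^T y + s = c, s in K* = R^n_+
y, s = cp.Variable(m), cp.Variable(n)
dual = cp.Problem(cp.Maximize(b @ y), [A.T @ y + s == c, s >= 0])

primal.solve()
dual.solve()
print(primal.value, dual.value)      # equal (up to tolerance): strong duality
```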

Problem Variants

Linear Conic Programs

Linear conic programs, often abbreviated as CLPs, generalize classical linear programming by incorporating convex cone constraints while maintaining a linear objective function. The standard formulation is to minimize \mathbf{c}^T \mathbf{x} subject to A\mathbf{x} = \mathbf{b} and \mathbf{x} \in K, where A is a linear map from a finite-dimensional space to \mathbb{R}^m, \mathbf{b} \in \mathbb{R}^m, and K is a closed convex cone. This form defines the feasible set as the intersection of an affine subspace with the cone K, capturing a broad class of convex constraints in a unified manner.

When the cone K = \mathbb{R}^n_+, the nonnegative orthant, the problem reduces to a classical linear program, which is solvable in polynomial time using methods like the ellipsoid method or interior-point methods. This reduction highlights CLPs as a natural extension of linear programming, preserving the linearity of the objective and constraints while expanding the feasible region's structure through general cones. Seminal work by Nesterov and Nemirovski established the framework for such generalizations.

CLPs are polynomial-time solvable provided the cone K admits a self-concordant barrier function, enabling efficient interior-point algorithms that achieve an \epsilon-optimal solution in O(\sqrt{\nu} \log(1/\epsilon)) iterations, where \nu is the barrier parameter measuring the cone's complexity (e.g., \nu = n for the nonnegative orthant in \mathbb{R}^n). This complexity bound, derived from path-following interior-point methods, ensures practical scalability for moderate-sized problems. A representative application arises in portfolio optimization, where conic constraints on returns can enforce robustness to uncertainty, such as minimizing risk subject to expected return thresholds defined over conic sets rather than simple linear bounds.
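
Since the K = \mathbb{R}^n_+ case is an ordinary linear program, it can be handed to any LP solver directly; the tiny sketch below uses scipy's linprog on illustrative data.

```python
import numpy as np
from scipy.optimize import linprog

# min 2*x1 + x2  s.t.  x1 + 2*x2 = 2, x in R^2_+  (optimum: x = (0, 1), value 1)
c = np.array([2.0, 1.0])
A_eq = np.array([[1.0, 2.0]])
b_eq = np.array([2.0])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None), (0, None)])
print(res.fun, res.x)  # 1.0, [0. 1.]
```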

Second-Order Cone Programs

Second-order cone programming (SOCP) encompasses convex optimization problems where the feasible region is defined by the intersection of affine subspaces and Cartesian products of second-order cones, also known as Lorentz cones. The Lorentz cone \mathcal{L}^{n} in \mathbb{R}^{n} is given by \mathcal{L}^{n} = \{ (x, t) \in \mathbb{R}^{n-1} \times \mathbb{R} \mid \|x\|_2 \leq t \}, where \| \cdot \|_2 denotes the Euclidean norm. This structure bridges linear programming, which uses simpler polyhedral cones, and semidefinite programming, which involves more general spectrahedral cones, by incorporating quadratic constraints via norms. A standard formulation of an SOCP minimizes a linear objective subject to second-order cone inequalities of the form \|A_i x + b_i\|_2 \leq c_i^\top x + d_i for i = 1, \dots, m, along with possible linear equality constraints Ax = b. Here, x \in \mathbb{R}^n is the decision variable, c \in \mathbb{R}^n defines the objective \min c^\top x, and the matrices A_i, vectors b_i, c_i, and scalars d_i parameterize the constraints. This form naturally arises in problems requiring bounds on Euclidean norms of affine expressions, ensuring convexity since the feasible set is the intersection of second-order cones with an affine space. To express this standard form in conic form, introduce auxiliary variables t_i = c_i^\top x + d_i for each i, transforming each inequality \|A_i x + b_i\|_2 \leq t_i into the membership constraint (t_i, A_i x + b_i) \in \mathcal{L}^{k_i + 1}, where k_i is the row dimension of A_i. The full problem then becomes \min c^\top x subject to Ax = b and x augmented with the t_i belonging to the product of second-order cones \mathcal{L}^{k_1 + 1} \times \cdots \times \mathcal{L}^{k_m + 1}. This mapping, sometimes involving a rotational transformation for equivalent hyperbolic representations, aligns SOCP directly with the general conic optimization framework over Cartesian products of Lorentz cones. SOCP formulations are particularly suited to robust optimization tasks, such as ensuring linear constraints hold under ellipsoidal uncertainties bounded by norms, and to control problems involving stability margins or norm-based performance guarantees. These applications leverage the efficiency of SOCP solvers for handling such constraints without resorting to higher-dimensional semidefinite relaxations.
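
The following sketch encodes one Lorentz-cone constraint with cvxpy's cp.SOC atom (assumed available), minimizing a linear objective over a unit ball written in conic form; the data are illustrative, and the closed-form optimum \mathbf{1}^\top p - \sqrt{n} is printed for comparison.

```python
import cvxpy as cp
import numpy as np

n = 3
p = np.array([1.0, 2.0, 3.0])
x = cp.Variable(n)
# cp.SOC(t, z) encodes ||z||_2 <= t, i.e. (t, z) in the Lorentz cone L^{n+1};
# here ||x - p||_2 <= 1, a unit ball written as a second-order cone constraint.
constraints = [cp.SOC(cp.Constant(1.0), x - p)]
prob = cp.Problem(cp.Minimize(cp.sum(x)), constraints)
prob.solve()
print(prob.value, np.sum(p) - np.sqrt(n))  # both ~ 4.268
```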

Advanced Forms

Semidefinite Programs

Semidefinite programming (SDP) is a subfield of conic optimization that involves optimizing a linear function over the cone of positive semidefinite matrices. In standard form, an SDP seeks to minimize the trace inner product \mathbf{C} \bullet \mathbf{X} subject to linear equality constraints \mathbf{A}_i \bullet \mathbf{X} = b_i for i = 1, \dots, m, and the semidefiniteness constraint \mathbf{X} \succeq 0, where \mathbf{X} is an n \times n symmetric matrix variable, \mathbf{C} and \mathbf{A}_i are given symmetric matrices, b_i are scalars, and the inner product is defined as \mathbf{A} \bullet \mathbf{B} = \operatorname{tr}(\mathbf{A}^\top \mathbf{B}). This formulation generalizes linear programming by replacing nonnegativity constraints on vector entries with positive semidefiniteness constraints on matrix variables, enabling the modeling of problems involving quadratic forms and eigenvalues. The feasible region of an SDP is a spectrahedron, defined by the intersection of the semidefinite cone \mathbb{S}^n_+ = \{ \mathbf{X} \in \mathbb{S}^n \mid \mathbf{X} \succeq 0 \} with affine subspaces, where \mathbb{S}^n denotes the space of n \times n symmetric matrices equipped with the trace inner product. The semidefinite cone \mathbb{S}^n_+ is convex, closed, pointed, full-dimensional in \mathbb{S}^n, and self-dual under this inner product, meaning its dual cone coincides with itself: (\mathbb{S}^n_+)^* = \mathbb{S}^n_+. Its facial structure is determined by linear subspaces L \subseteq \mathbb{R}^n, where the faces consist of matrices whose range is contained in L, providing a rich geometric framework that influences the structure of SDP feasible sets. In the standard conic form, the linear mappings \mathbf{A}_i \bullet \mathbf{X} are implemented via symmetric matrix coefficients, ensuring compatibility with the symmetric structure of \mathbb{S}^n. Semidefinite programs can be solved in polynomial time using interior-point methods, which exploit the self-duality and barrier functions of \mathbb{S}^n_+ to achieve convergence in a number of iterations bounded by O(\sqrt{n} \log(1/\epsilon)) for accuracy \epsilon, with a per-iteration arithmetic cost polynomial in n and the number m of equality constraints. Compared to linear programs, SDPs involve higher-dimensional variables since the semidefinite cone lives in a space of dimension n(n+1)/2, leading to increased computational demands for large n, despite the shared polynomial solvability.
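
A compact numerical illustration: minimizing \mathbf{C} \bullet \mathbf{X} over the spectrahedron \{\operatorname{tr}(\mathbf{X}) = 1, \mathbf{X} \succeq 0\} recovers the smallest eigenvalue of \mathbf{C}, a classical identity. The sketch assumes cvxpy with an SDP-capable solver such as SCS, and the cost matrix is random illustrative data.

```python
import cvxpy as cp
import numpy as np

np.random.seed(4)
n = 3
C = np.random.randn(n, n)
C = (C + C.T) / 2                       # symmetric cost matrix

X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)),
                  [cp.trace(X) == 1, X >> 0])   # X >> 0 is the PSD constraint
prob.solve(solver=cp.SCS)
# The optimum equals the smallest eigenvalue of C.
print(prob.value, np.linalg.eigvalsh(C).min())
```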

General Conic Programs

General conic programs generalize the conic optimization framework by allowing the constraint cone K to be any proper convex cone, beyond the standard positive orthant, second-order Lorentz, or positive semidefinite cones, thereby enabling the representation of a wide variety of nonlinear convex constraints through affine mappings. The canonical form is to minimize c^\top x subject to F(x) \in K, where F is affine and K is closed, convex, pointed, and solid (with nonempty interior). This formulation unifies linear, second-order cone, and semidefinite programs under a single structure when K is chosen accordingly, while extending to more complex cones for advanced modeling.

Non-standard cones, such as the exponential and power cones, extend the modeling capabilities of general conic programs to handle functions involving exponentials, logarithms, and powers that are not representable with simpler cones. The three-dimensional exponential cone is defined as the closure of \{(x, y, z) \in \mathbb{R}^3 \mid y > 0, \, y \exp(x/y) \leq z\}, which captures constraints like relative entropy terms of the form x \log(x/y). Similarly, the power cone, often formulated in three dimensions as \mathcal{P}_\alpha = \{(x, y, z) \in \mathbb{R}_+ \times \mathbb{R}_+ \times \mathbb{R} \mid x^\alpha y^{1-\alpha} \geq |z|\} for \alpha \in (0,1), facilitates the representation of p-norms for p \neq 2, such as constraints t \geq \|u\|_p for general p > 1.

A key challenge in solving general conic programs over non-standard cones lies in constructing self-concordant barrier functions, which are essential for the efficiency of interior-point methods, as they ensure polynomial-time solvability when the barrier parameter \nu is bounded. For the exponential cone, a 3-self-concordant barrier function is -\log(y \log(z/y) - x) - \log y - \log z, enabling robust numerical implementation in solvers like MOSEK. For the power cone, barriers with parameter \nu = 3 have been developed, improving upon earlier \nu = 4 constructions and supporting extensions to generalized power cones.

The modeling power of general conic programs is particularly evident in reformulating geometric programs and hyperbolic constraints into conic form, where exponential cones handle posynomial inequalities via logarithmic transformations, and power cones represent generalized means or hypograph constraints like \left( \sum_i x_i^p \right)^{1/p} \leq t. This allows solvers to tackle problems originally posed in non-convex form, such as geometric programs, by exploiting the rich structure of these cones.
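
As a minimal exponential-cone example, the sketch below computes \log 5 by maximizing u subject to e^u \leq 5, written directly as membership in the exponential cone; it assumes cvxpy's ExpCone atom and an exponential-cone-capable solver such as SCS.

```python
import cvxpy as cp

# cp.ExpCone(u, v, w) encodes v * exp(u/v) <= w with v > 0 (the exponential cone).
# With v = 1, maximizing u under exp(u) <= 5 computes log(5).
u = cp.Variable()
prob = cp.Problem(cp.Maximize(u),
                  [cp.ExpCone(u, cp.Constant(1.0), cp.Constant(5.0))])
prob.solve(solver=cp.SCS)
print(prob.value)  # ~ 1.609 = log(5)
```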

Duality Theory

Primal-Dual Framework

In conic optimization, the primal-dual framework establishes a symmetric relationship between a minimization problem and its maximization counterpart, enabling the certification of optimality and infeasibility through paired structures. The standard primal conic program is formulated as minimizing \mathbf{c}^\top \mathbf{x} subject to A \mathbf{x} = \mathbf{b} and \mathbf{x} \in \mathcal{K}, where A is a linear map, \mathbf{b} is a vector in the range space, \mathbf{c} is the objective coefficient vector, and \mathcal{K} is a closed convex cone. The associated dual program maximizes \mathbf{b}^\top \mathbf{y} subject to A^\top \mathbf{y} + \mathbf{s} = \mathbf{c} and \mathbf{s} \in \mathcal{K}^*, where \mathbf{y} is the dual variable and \mathcal{K}^* denotes the dual cone. This pair satisfies the relation that the primal optimal value is at least the dual optimal value under feasibility, forming the basis for duality theory in conic settings.

A key element of this framework is the central path, which parameterizes a trajectory of approximate solutions converging to the optimum as a barrier parameter approaches zero. For a given \mu > 0, the central path consists of points (\mathbf{x}(\mu), \mathbf{y}(\mu), \mathbf{s}(\mu)) satisfying primal feasibility A \mathbf{x}(\mu) = \mathbf{b}, dual feasibility A^\top \mathbf{y}(\mu) + \mathbf{s}(\mu) = \mathbf{c}, and the centering condition \mathbf{s}(\mu) = -\mu \nabla \psi_{\mathcal{K}}(\mathbf{x}(\mu)) for a \nu-self-concordant barrier \psi_{\mathcal{K}}, which yields the duality gap \mathbf{x}(\mu)^\top \mathbf{s}(\mu) = \nu \mu, with \mathbf{x}(\mu) \in \text{int}(\mathcal{K}) and \mathbf{s}(\mu) \in \text{int}(\mathcal{K}^*). As \mu \to 0^+, these points approach the primal-dual optimal solutions, providing a smooth path that underlies path-following algorithms for solving conic programs. This structure generalizes the central path from linear programming to broader conic forms, leveraging the self-concordance of barrier functions to ensure convergence properties.

The symmetry inherent in the primal-dual framework is particularly pronounced when embedding problems into self-dual cones, allowing unified treatment of diverse conic programs. Self-dual cones satisfy \mathcal{K} = \mathcal{K}^* under an appropriate inner product; examples include the nonnegative orthant, the second-order cone, and the semidefinite cone. The homogeneous self-dual embedding reformulates the primal-dual pair as a single convex feasibility problem in a product space: find a nonzero vector in the intersection of a subspace and a self-dual cone, which encodes both problems symmetrically. This approach facilitates solving via operator-splitting methods, providing certificates of optimality, infeasibility, or unboundedness without distinguishing primal from dual, and extends to nonsymmetric cones through homogenization.

Weak duality, a foundational result in this framework, asserts that for any primal feasible \mathbf{x} and dual feasible (\mathbf{y}, \mathbf{s}), the inequality \mathbf{c}^\top \mathbf{x} \geq \mathbf{b}^\top \mathbf{y} holds, establishing an upper bound on the dual objective by the primal. To prove this, note that dual feasibility implies \mathbf{s} = \mathbf{c} - A^\top \mathbf{y} \in \mathcal{K}^*, so \mathbf{x}^\top \mathbf{s} \geq 0 since \mathbf{x} \in \mathcal{K}. Substituting yields \mathbf{x}^\top (\mathbf{c} - A^\top \mathbf{y}) \geq 0, or \mathbf{c}^\top \mathbf{x} \geq \mathbf{x}^\top A^\top \mathbf{y} = (A \mathbf{x})^\top \mathbf{y} = \mathbf{b}^\top \mathbf{y}, relying solely on the nonnegativity of the duality pairing between \mathcal{K} and \mathcal{K}^*. This proof extends the linear programming case and holds without regularity assumptions, underscoring the robustness of conic duality.
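
The weak-duality chain above can be traced numerically: the sketch below constructs a primal-feasible x and a dual-feasible (y, s) by hand for \mathcal{K} = \mathbb{R}^n_+ and confirms that the gap c^\top x - b^\top y equals x^\top s \geq 0; only numpy is assumed, with random illustrative data.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 3, 6
A = rng.standard_normal((m, n))
x = np.abs(rng.standard_normal(n))       # x in K = R^n_+
b = A @ x                                 # makes x primal feasible by construction
y = rng.standard_normal(m)
s = np.abs(rng.standard_normal(n))        # s in K* = R^n_+
c = A.T @ y + s                           # makes (y, s) dual feasible by construction

gap = c @ x - b @ y
print(np.isclose(gap, x @ s), gap >= 0)   # True True: gap = x^T s >= 0
```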

Duality Conditions

In conic optimization, strong duality holds when the optimal values of the primal and dual problems coincide, ensuring zero duality gap and attainment of optima under appropriate conditions. A key sufficient condition for strong duality is Slater's condition, which requires the existence of a strictly feasible point, meaning there exists x such that Ax = b and x \in \mathrm{ri}(K), together with feasibility of the dual problem. This condition guarantees that both problems attain their optima if feasible, as established in the framework of general convex conic programs. For second-order cone and semidefinite programs, Slater's condition is particularly effective due to the rich interior structure of these cones, leading to reliable numerical duality in practice.

When strict feasibility fails, the feasible set may lie on lower-dimensional faces of the cone, potentially causing duality gaps or unattainability. Facial reduction addresses this by iteratively identifying exposed faces of the cone that contain the feasible set, reducing the problem dimension to restore strict feasibility and ensure strong duality. The process begins by finding a reducing certificate, a direction in the dual cone orthogonal to the affine constraints, which exposes a proper face; repeating this yields the minimal face containing the feasible set. The singularity degree of the problem is defined as the minimal number of such steps needed to reach this minimal face, quantifying the extent of irregularity. Facial reduction is crucial for ill-posed conic problems, such as degenerate semidefinite programs, where it can dramatically improve solver stability and convergence.

Zero duality gap is assured under suitable constraint qualifications, such as when the primal and dual feasible sets have nonempty relative interiors intersecting appropriately with the cone, but failures can occur when strict feasibility is absent. For instance, the exponential cone, a proper cone defined as the closure of \{(x,y,z) \in \mathbb{R}^3 \mid y e^{x/y} \leq z, y > 0\}, can give rise to problems with a positive duality gap when neither the primal nor the dual is strictly feasible, highlighting the need for cone-specific regularity checks even for proper cones, not only pathological ones.

At optimality, the Karush-Kuhn-Tucker (KKT) conditions for conic programs adapt the classical optimality conditions to include conic feasibility: primal feasibility Ax = b, x \in K; dual feasibility (stationarity) A^T y + s = c with s \in K^*; and cone complementarity x \perp s, meaning \langle x, s \rangle = 0 with both x \in K and s \in K^*. These conditions are necessary and sufficient for optimality under strict feasibility, providing a certificate of solution quality analogous to the linear programming case. In the primal-dual framework, the central path approaches points satisfying these KKT conditions, aiding theoretical analysis of convergence.
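
The sketch below checks the conic KKT conditions at a solver's output for K = \mathbb{R}^n_+, reading the multiplier for x \geq 0 as the dual slack s; cvxpy is assumed, the data are illustrative, and the tolerances reflect floating-point solves.

```python
import cvxpy as cp
import numpy as np

np.random.seed(6)
m, n = 2, 5
A = np.random.randn(m, n)
b = A @ np.abs(np.random.randn(n))   # primal strictly feasible by construction
c = np.random.rand(n) + 0.5          # positive costs keep the problem bounded

x = cp.Variable(n)
eq = A @ x == b
ineq = x >= 0
prob = cp.Problem(cp.Minimize(c @ x), [eq, ineq])
prob.solve()

x_star = x.value
s_star = ineq.dual_value                        # multiplier for x >= 0, in K* = R^n_+
print(np.linalg.norm(A @ x_star - b) < 1e-6)    # primal feasibility
print(np.min(s_star) > -1e-8)                   # dual cone membership
print(abs(x_star @ s_star) < 1e-6)              # complementarity <x, s> = 0
```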

Solution Approaches

Interior-Point Methods

Interior-point methods for conic optimization rely on barrier functions to navigate the interior of the feasible region while approaching the optimal solution. These methods transform the conic program, typically formulated as minimizing a linear objective subject to linear equality constraints and a conic membership constraint, by incorporating a logarithmic barrier term that penalizes proximity to the boundary of the cone K. Specifically, the barrier subproblem is to minimize c^\top x + \mu \, \psi_K(x), where \psi_K is a self-concordant barrier for the cone K, \mu > 0 is the barrier parameter that decreases over iterations, and \nu is the complexity parameter of the barrier measuring its self-concordance degree. For the nonnegative orthant in \mathbb{R}^n, \nu = n; for each second-order cone, \nu = 2; and for the semidefinite cone of m \times m matrices, \nu = m. The central path of solutions to these barrier subproblems, parameterized by \mu, converges to the optimal set as \mu \to 0.

Path-following algorithms approximate points on this central path using Newton steps to solve the optimality conditions of the barrier subproblem. The self-concordance property of \psi_K, which bounds the third directional derivative relative to the second, ensures that these Newton steps are affine-invariant and damped appropriately, maintaining progress toward the optimum without line searches in short-step variants. This framework, introduced for general cones with universal self-concordant barriers, guarantees polynomial-time complexity when \nu is moderate.

Primal-dual interior-point methods extend this approach by simultaneously optimizing over primal and dual variables, leveraging the conic duality framework to solve the linearized Karush-Kuhn-Tucker (KKT) conditions. These methods compute search directions (\Delta x, \Delta y, \Delta s) by solving a symmetric linear system derived from the Newton equations for the joint barrier on the primal cone K and dual cone K^*, often using Nesterov-Todd scaling for self-scaled (symmetric) cones to ensure centrality. This scaling preserves self-concordance and enables full-step or predictor-corrector variants that avoid adaptive damping.

The iteration complexity of these methods is polynomial, requiring O(\sqrt{\nu} \log(1/\epsilon)) iterations to achieve an \epsilon-optimal solution, where each iteration solves a linear system of size comparable to the problem dimension. This bound holds for path-following schemes with self-concordant barriers and extends to primal-dual methods under similar assumptions. For instance, in semidefinite programming over m \times m matrices, the complexity is O(\sqrt{m} \log(1/\epsilon)) iterations, establishing the practical efficiency of these algorithms for moderate-sized problems.
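
To make the barrier mechanism concrete, here is a deliberately simple path-following sketch for the K = \mathbb{R}^n_+ case, using the logarithmic barrier and feasible-start Newton steps; it is a teaching sketch under simplifying assumptions (fixed damping, a crude \mu schedule, dense linear algebra), not production code.

```python
import numpy as np

def barrier_lp(c, A, b, x0, mu=1.0, shrink=0.2, tol=1e-8):
    """Path-following for min c^T x s.t. Ax = b, x >= 0, via the log barrier."""
    x = x0.copy()                           # must satisfy A x0 = b, x0 > 0
    m, n = A.shape
    while mu * n > tol:                     # duality gap on central path ~ n * mu
        for _ in range(50):                 # centering by Newton's method
            g = c - mu / x                  # gradient of c^T x - mu * sum(log x)
            H = np.diag(mu / x**2)          # Hessian of the barrier term
            KKT = np.block([[H, A.T], [A, np.zeros((m, m))]])
            rhs = np.concatenate([-g, np.zeros(m)])
            dx = np.linalg.solve(KKT, rhs)[:n]   # equality-constrained Newton step
            t = 1.0
            while np.any(x + t * dx <= 0):  # keep iterates strictly feasible
                t *= 0.5
            x = x + 0.99 * t * dx
            if np.linalg.norm(dx) < 1e-10:
                break
        mu *= shrink                        # follow the central path toward mu -> 0
    return x

# Tiny instance: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0 (optimum at x = (1, 0)).
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
c = np.array([1.0, 2.0])
print(barrier_lp(c, A, b, x0=np.array([0.5, 0.5])))
```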

Cutting-Plane Methods

Cutting-plane methods provide an alternative to interior-point techniques for solving conic optimization problems, particularly when self-concordant barrier functions for the cone are unavailable or difficult to compute. These methods iteratively refine an outer approximation of the feasible set or the objective function by adding linear inequalities, known as cutting planes, generated via separation or subgradient oracles that certify infeasibility or provide information at trial points. By relying on such oracles rather than smooth barrier functions, cutting-plane approaches can handle a broad class of cones, including those arising in semidefinite and more general conic programs.

The ellipsoid method represents a foundational cutting-plane algorithm applicable to conic programs through the use of a separation oracle for the cone. It initializes a large ellipsoid containing the feasible region and, at each iteration, queries the oracle at the ellipsoid's center; if the point is infeasible, a separating hyperplane (cut) is added, and the ellipsoid is updated to a smaller one containing the intersection of the previous ellipsoid with the half-space defined by the cut. This process reduces the ellipsoid's volume by a factor of at most e^{-1/(2(n+1))} per step, where n is the dimension, ensuring polynomial-time convergence to an \epsilon-optimal solution in O(n^2 \log(1/\epsilon)) iterations. Despite its theoretical efficiency, the method is impractical for large-scale conic problems due to the high per-iteration cost of updates and the dependence on dimension.

Bundle methods extend cutting-plane ideas to nonsmooth convex minimization, approximating the objective (often the dual function) with a piecewise linear lower bound constructed from a bundle of subgradients obtained at previous points, then solving a stabilized subproblem to select the next point and add new cuts if needed. In the context of conic optimization, the spectral bundle method adapts this for semidefinite programs by incorporating eigenvalue-based approximations in the model, where cutting planes are derived from subproblems involving the maximum eigenvalue of affine matrix functions, enabling efficient handling of large-scale structured instances. This approach stabilizes the model with proximity terms to ensure stability and global convergence.

The analytic center cutting-plane method focuses on feasibility problems in conic programs by maintaining a polyhedral outer approximation of the feasible set and using its analytic center, the maximizer of the logarithmic barrier over the current approximation, as the trial point. For semidefinite programs, if the center violates feasibility, a semidefinite cut is generated via an eigenvalue computation and added to tighten the approximation, with all iterates maintained as positive definite matrices to facilitate numerical stability. The method requires O^*(m^3 / \epsilon^2) total cuts in the worst case to find a point within distance \epsilon of the feasible set, where m is the matrix dimension. This framework generalizes to arbitrary conic programs using conic cuts, bounding the number of Newton steps needed to recover the analytic center after each cut addition.

A key advantage of cutting-plane methods, including the ellipsoid, bundle, and analytic center variants, is their applicability to conic programs over cones lacking efficient self-concordant barriers, such as the copositive cone, where interior-point methods falter. For copositive optimization, these techniques approximate the cone with inner and outer polyhedral relaxations via simplicial partitions, iteratively adding cuts to refine the approximation until convergence, enabling solutions to problems with thousands of variables that would otherwise be intractable.
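
A compact ellipsoid-method sketch built on a separation oracle is shown below; for concreteness the oracle separates from a polyhedral set (x \geq 0, Ax \leq b), a stand-in for any cone equipped with a separation routine. Only numpy is assumed, and the update formulas are the standard central-cut ones.

```python
import numpy as np

def separation_oracle(x, A, b):
    """Return None if x is feasible, else g with g^T x > g^T z for all feasible z."""
    if np.any(x < 0):
        i = int(np.argmin(x))
        g = np.zeros_like(x); g[i] = -1.0        # violated bound x_i >= 0 as a cut
        return g
    viol = A @ x - b
    if np.any(viol > 0):
        return A[int(np.argmax(viol))]           # most violated row as a cut
    return None

def ellipsoid(A, b, n, R=10.0, max_iter=500):
    x = np.full(n, -5.0)                         # infeasible start inside the ball
    P = (R ** 2) * np.eye(n)                     # initial ball of radius R
    for _ in range(max_iter):
        g = separation_oracle(x, A, b)
        if g is None:
            return x                             # feasible point found
        Pg = P @ g
        denom = np.sqrt(g @ Pg)
        x = x - Pg / ((n + 1) * denom)           # standard central-cut center update
        P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg) / denom**2)
    return None

A = np.array([[1.0, 1.0]]); b = np.array([1.0])
print(ellipsoid(A, b, n=2))                      # a point with x >= 0, x1 + x2 <= 1
```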

Applications

Engineering Design

Conic optimization plays a pivotal role in engineering design, particularly in fields requiring robust performance under uncertainty, such as control systems and signal processing. By formulating design problems as semidefinite programs (SDPs) or second-order cone programs (SOCPs), engineers can optimize complex constraints that ensure stability, efficiency, and reliability in physical systems. These methods leverage the convex nature of conic forms to provide globally optimal solutions, often outperforming traditional approaches in handling nonlinearities and uncertainties.

In robust control design, SDPs are extensively used to guarantee system stability via Lyapunov functions, where linear matrix inequalities (LMIs) model the constraints. A key application involves minimizing the H-infinity norm of a transfer function to achieve robust performance against disturbances; this is formulated as an SDP by seeking a positive semidefinite matrix that satisfies LMI conditions derived from the system's state-space representation. For instance, in aerospace applications like aircraft autopilot design, this approach ensures the closed-loop system remains stable for all perturbations within a bounded uncertainty set, as demonstrated in early works on LMI-based control synthesis. The resulting controllers often yield significant improvements in disturbance rejection compared to classical methods in benchmark tests.

Sensor network localization problems, which determine the positions of nodes based on distance measurements, are effectively solved using SOCPs to minimize positioning errors under noisy constraints. The formulation treats distances as second-order cone constraints, where the objective is to minimize the sum of squared errors between measured and estimated distances, subject to inequalities ensuring geometric consistency. This approach is particularly valuable in wireless sensor networks for environmental monitoring or structural health assessment, where it enables accurate 3D localization. Seminal implementations have shown that SOCP-based methods converge faster and achieve lower localization errors than least-squares alternatives, especially in non-convex scenarios approximated via convex relaxations.

For truss design in structural engineering, SDP relaxations address stiffness and stability constraints by enforcing positive semidefiniteness on the stiffness matrix, allowing optimization of member cross-sections to minimize weight while satisfying load and displacement conditions. The problem is cast as an SDP whose dual form provides bounds on the feasible force distributions, ensuring the stiffness matrix remains positive semidefinite under load variations. This technique has been applied to bridge and tower designs, yielding lightweight configurations that meet safety margins, with advantages over linearized approximations, as validated in finite element simulations. By relaxing non-convex quadratic constraints into semidefinite ones, these methods provide tight approximations to the global optimum, facilitating practical implementations in software.

Antenna array design for beamforming utilizes SOCPs to optimize signal patterns under power and interference constraints, formulating the beamformer weights as variables in a second-order cone to maximize the signal-to-interference-plus-noise ratio (SINR). The constraints model the array's response as norms bounded by cones, enabling robust designs that maintain directivity in the presence of steering errors or jammers. In radar and communication systems, such as 5G base stations, this results in beam patterns with enhanced sidelobe suppression, improving coverage and reducing interference in multi-user scenarios.
SOCP solvers efficiently handle these problems for arrays with dozens of elements, outperforming SDP alternatives in computational speed for real-time applications.
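
A hedged beamforming sketch in this spirit: a minimum-power design with a unit-gain look-direction constraint, posed as a norm minimization (an SOCP); the uniform linear array, look angle, and absence of interference constraints are simplifying assumptions, and cvxpy with complex variables is assumed.

```python
import cvxpy as cp
import numpy as np

n_elem = 8
d, wavelength = 0.5, 1.0                             # half-wavelength spacing
theta_look = np.deg2rad(20.0)                        # illustrative look direction
k = 2 * np.pi / wavelength
pos = d * np.arange(n_elem)
a_look = np.exp(1j * k * pos * np.sin(theta_look))   # steering vector

w = cp.Variable(n_elem, complex=True)
prob = cp.Problem(cp.Minimize(cp.norm(w, 2)),        # minimize beamformer norm (power)
                  [cp.real(a_look.conj() @ w) == 1,  # distortionless look direction
                   cp.imag(a_look.conj() @ w) == 0])
prob.solve()
print(prob.value, 1 / np.sqrt(n_elem))  # matches the closed form 1/||a_look|| here
```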

Financial Modeling

Conic optimization plays a pivotal role in financial modeling, particularly in addressing uncertainties inherent in market dynamics and asset behaviors. In portfolio optimization, second-order cone programming (SOCP) extends the classical Markowitz mean-variance framework by incorporating transaction costs through norm-based approximations. Linear transaction costs can be modeled directly within the optimization, leading to tractable SOCP formulations that balance expected returns against risk while penalizing excessive trading. For instance, fixed and proportional costs are handled via additional linear constraints, ensuring the problem remains convex and solvable efficiently.

Semidefinite programming (SDP) finds application in option pricing through convex relaxations that provide tight bounds without relying on specific stochastic models. For American options, which allow early exercise and introduce non-convexity, SDP relaxations leverage moment problems to derive upper and lower price bounds by optimizing over moment matrices representing feasible distributions. These relaxations exploit the semidefinite structure to ensure no-arbitrage conditions, offering model-free guarantees on pricing accuracy. Similarly, SDP is used for volatility bounds, where it solves semidefinite programs to estimate volatility surfaces consistent with observed option prices, aiding in calibration and risk assessment. Seminal work demonstrates that cutting-plane algorithms enhance SDP efficiency for these bounds, achieving near-optimal solutions for multi-asset options.

Risk measures in portfolio management, such as conditional value-at-risk (CVaR), admit conic representations that facilitate robust decisions under tail risks. CVaR, which captures expected losses beyond a value-at-risk threshold, can be reformulated as an SOCP when integrated with mean-variance objectives, allowing optimization over second-order cones to minimize risk while maximizing returns. This conic form enables efficient handling of joint constraints on CVaR and variance. For robust hedging, SOCP models uncertainty in asset returns via ellipsoidal sets, formulating superhedging strategies that protect against worst-case scenarios; for example, hedging an index option involves solving an SOCP to minimize the maximum hedging error over uncertain return distributions. These approaches ensure portfolios are resilient to model misspecification, with duality providing interpretable certificates of robustness.

In risk management, SDP supports covariance matrix estimation under stress by enforcing positive semidefiniteness on estimated matrices, crucial for simulating correlated defaults in credit portfolios. Stress scenarios often distort historical covariances, leading to non-positive semidefinite estimates; SDP resolves this by solving trace-minimization problems over the semidefinite cone to project data onto feasible covariance matrices while preserving key statistical properties. This is particularly valuable in regulatory stress tests, where accurate covariance estimation informs capital requirements for portfolios exposed to economic shocks. Applications demonstrate that SDP-based estimators reduce estimation errors in high-dimensional settings, improving the reliability of value-at-risk projections for credit exposures.
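
A minimal portfolio sketch with a conic risk bound: maximizing expected return over the simplex subject to \|G^\top w\|_2 \leq \sigma_{\max}, where GG^\top = \Sigma, so the constraint bounds the portfolio standard deviation; cvxpy is assumed and all data are illustrative (transaction-cost terms would enter as additional linear pieces in the same framework).

```python
import cvxpy as cp
import numpy as np

np.random.seed(0)
n = 5
mu = 0.05 + 0.10 * np.random.rand(n)                # expected returns (illustrative)
F = np.random.randn(n, n)
Sigma = 0.05 * (F @ F.T) / n + 0.01 * np.eye(n)     # positive definite covariance
G = np.linalg.cholesky(Sigma)                       # Sigma = G G^T
sigma_max = 0.20                                    # risk budget (stdev bound)

w = cp.Variable(n)
prob = cp.Problem(cp.Maximize(mu @ w),
                  [cp.sum(w) == 1, w >= 0,
                   cp.norm(G.T @ w, 2) <= sigma_max])  # second-order cone constraint
prob.solve()
print(prob.value, np.round(w.value, 3))
```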
