
Barrier function

In mathematical optimization, a barrier function is a continuous function defined on the interior of a feasible set whose value tends to positive infinity as its argument approaches the boundary of the set, serving to enforce constraints implicitly in interior-point methods. These functions are integral to barrier methods, which transform constrained problems into a sequence of unconstrained ones by adding a barrier term to the objective function, with the barrier parameter adjusted so that the sequence approximates the original problem ever more closely as it increases. A standard example is the logarithmic barrier function for constraints h_i(x) \leq 0, given by \phi(x) = -\sum_{i=1}^m \log(-h_i(x)), which is smooth and convex and defined on the strictly feasible set where all h_i(x) < 0. Barrier functions play a pivotal role in efficient algorithms for solving convex optimization problems, particularly through self-concordance, a property that ensures the third derivative is bounded relative to the Hessian, enabling polynomial-time convergence via Newton's method despite the singularity at the boundary. For instance, the logarithmic barrier is self-concordant for linear inequalities and extends to more complex cones, such as the positive orthant with -\sum \log x_i or the semidefinite cone with -\log \det X. The central path, traced by the minimizers of t f(x) + \phi(x) as the barrier parameter t varies, guides the algorithm toward the optimal solution of the original constrained problem. Beyond optimization, the term "barrier function" also arises in control theory as control barrier functions (CBFs), which are used to certify and enforce forward invariance of safe sets for dynamical systems by ensuring that the Lie derivative along system trajectories satisfies a decay condition. In this context, a function h(x) \geq 0 defines a safe set, and the CBF condition \sup_u [L_f h(x) + L_g h(x) u] \geq -\alpha(h(x)) (with \alpha an extended class \mathcal{K} function) guarantees safety-critical behavior in applications like robotics and autonomous vehicles.
These concepts, while sharing the "barrier" motif of repelling trajectories from unsafe regions, differ fundamentally from their optimization counterparts in focusing on real-time control rather than static problem-solving.

Introduction and Motivation

Overview of Barrier Methods

Barrier methods serve as a class of interior-point techniques in constrained optimization, designed to convert problems with inequality constraints into a series of unconstrained minimization problems. By appending a barrier term to the objective function, these methods impose a penalty that increases sharply as the solution nears the boundaries of the feasible region, ensuring that all iterates remain strictly interior to the feasible set throughout the process. This approach avoids the need for explicit handling of constraints via projections or active-set strategies, allowing for efficient navigation through the interior of the feasible domain. Central to barrier methods is the barrier parameter, typically denoted μ, which governs the intensity of the penalty enforced by the barrier term. The algorithm proceeds by solving a sequence of unconstrained subproblems, where μ is progressively reduced—often in a controlled manner—to draw the solution closer to the optimal point on the boundary of the feasible region while preserving strict feasibility at each step. This parametric continuation strategy enables the method to approximate the solution of the original constrained problem with increasing accuracy as μ approaches zero. To illustrate, consider minimizing an objective function subject to a simple inequality constraint such as x ≥ 0. The barrier term discourages the optimizer from approaching x = 0 too aggressively, preventing constraint violations without requiring separate projection operations onto the feasible set after each update. In this way, the method maintains feasibility inherently through the penalized objective. 
A key aspect of barrier methods is the inherent trade-off between enforcing feasibility and minimizing the objective: higher values of μ prioritize staying well inside the feasible region, potentially yielding conservative objective values, while lower μ values permit closer approximation to the optimum but demand careful step control to avoid infeasibility. This balance is managed through the iterative decrease of μ, facilitating convergence to the constrained optimum. One widely used form of barrier term is the logarithmic barrier, which exemplifies these principles and is explored in greater detail later.
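The outer loop just described can be sketched concretely. The following minimal illustration (the objective f(x) = (x + 1)^2 with constraint x >= 0 is an invented example, not taken from any particular solver) minimizes a sequence of barrier subproblems while shrinking the barrier parameter:

```python
import math

# Illustrative barrier method: minimize f(x) = (x + 1)**2 subject to x >= 0
# (the constrained optimum x* = 0 lies on the boundary). Each outer iteration
# minimizes phi_mu(x) = (x + 1)**2 - mu*log(x) by Newton's method, then
# reduces mu, drawing the iterates toward the boundary while staying feasible.

def newton_min(mu, x, tol=1e-12):
    """Minimize phi_mu by Newton's method, keeping iterates strictly feasible."""
    for _ in range(100):
        grad = 2.0 * (x + 1.0) - mu / x
        hess = 2.0 + mu / x**2
        step = -grad / hess
        while x + step <= 0.0:      # damp so we never cross the barrier at x = 0
            step *= 0.5
        x += step
        if abs(grad) < tol:
            break
    return x

x, mu, path = 1.0, 1.0, []
for _ in range(8):                  # outer loop: mu -> 0
    x = newton_min(mu, x)           # warm-start at the previous minimizer
    path.append((mu, x))
    mu *= 0.1

print(path[-1])
```

Each subproblem is warm-started at the previous minimizer, which is how practical barrier codes keep the inner Newton solves cheap.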

Historical Context and Development

The concept of barrier functions in optimization originated in the 1960s as part of efforts to solve constrained nonlinear programming problems through sequential unconstrained minimization techniques (SUMT). Anthony V. Fiacco and Garth P. McCormick introduced the logarithmic barrier function within SUMT, transforming constrained problems into a sequence of unconstrained ones by adding a barrier term that penalizes proximity to constraint boundaries, as detailed in their seminal 1968 book. This approach built on earlier ideas like Ragnar Frisch's logarithmic penalty methods from the 1950s but formalized the barrier mechanism for practical computation in nonlinear contexts. The development gained renewed momentum in 1984 with Narendra Karmarkar's invention of a polynomial-time interior-point algorithm for linear programming, which leveraged a logarithmic potential function to navigate the interior of the feasible region efficiently, achieving complexity bounds previously unattainable by simplex methods. This breakthrough, published in Combinatorica, sparked widespread interest in barrier-based interior-point methods, shifting focus from theoretical constructs to algorithms capable of handling large-scale problems. In the 1990s, Arkadi Nemirovski and Yurii Nesterov extended barrier functions to general convex optimization through the theory of self-concordant barriers, introduced in their 1994 book, enabling polynomial-time solutions for broad classes of convex programs by ensuring desirable analytic properties like affine invariance. This framework influenced the integration of barrier methods into commercial solvers; for instance, IBM's CPLEX introduced its barrier solver in 1994, with version 3.0, marking a key milestone in practical adoption. Similarly, MATLAB's Optimization Toolbox incorporated interior-point algorithms using barriers by the early 2000s, facilitating their use in diverse applications.

Core Concepts and Formulations

General Definition of Barrier Functions

In convex optimization, the standard setup involves minimizing a convex objective function f: \mathbb{R}^n \to \mathbb{R} subject to convex inequality constraints g_i(x) \leq 0 for i = 1, \dots, m and linear equality constraints h_j(x) = 0 for j = 1, \dots, p, where the feasible set D = \{ x \mid g_i(x) \leq 0, \, i=1,\dots,m; \, h_j(x)=0, \, j=1,\dots,p \} is assumed nonempty with nonempty interior. A barrier function B: \mathrm{int}(D) \to \mathbb{R} provides a mathematical mechanism to enforce feasibility by penalizing proximity to the boundary of the feasible region; it is defined as a strictly convex function on the interior \mathrm{int}(D) = \{ x \mid g_i(x) < 0, \, i=1,\dots,m; \, h_j(x)=0, \, j=1,\dots,p \} that is coercive near the boundaries, meaning B(x) \to +\infty as x approaches any point on \partial D. For a barrier function to be valid in this context, it must be defined exclusively on \mathrm{int}(D), be continuously differentiable on its domain, and exhibit unbounded growth as any constraint boundary is approached (i.e., as g_i(x) \to 0^- for any i, while respecting the equalities). The core utility of a barrier function arises in the formulation of barrier subproblems, where the original objective is augmented to form \phi_\mu(x) = f(x) + \mu B(x) for a barrier parameter \mu > 0; these subproblems are minimized over \mathrm{int}(D), and the central path is defined as the continuous curve traced by the sequence of minimizers x^*(\mu) = \arg\min_{x \in \mathrm{int}(D)} \phi_\mu(x) as \mu \to 0^+. The logarithmic barrier exemplifies this general framework in practice.

Logarithmic Barrier Function in One Dimension

The logarithmic barrier function serves as a fundamental tool for handling inequality constraints in optimization problems by penalizing solutions that approach the boundary of the feasible region. For the simple one-dimensional constraint x > 0, the barrier is defined as B(x) = -\log(x). This function is strictly convex on (0, \infty), as its second derivative B''(x) = 1/x^2 > 0 for all x > 0, ensuring that minimizers remain in the strict interior of the feasible set. As x \to 0^+, B(x) \to +\infty, which enforces the constraint by making boundary violations infinitely costly, while the first derivative B'(x) = -1/x grows negatively without bound near the boundary, creating a steep repulsive force that drives solutions away from it. To incorporate this barrier into an optimization problem, consider minimizing an objective f(x) subject to x \geq a where a > 0 and x is strictly feasible (i.e., x > a). The augmented barrier objective is formed as \phi_\mu(x) = f(x) - \mu \log(x - a), where \mu > 0 is the barrier parameter that controls the strength of the penalty. This formulation transforms the constrained problem into a sequence of unconstrained minimizations over decreasing values of \mu, with the barrier term -\mu \log(x - a) (equivalent to \mu B(x - a)) ensuring interiority. The convexity of \phi_\mu(x) follows from the convexity of f(x) (assuming it is convex) and of the barrier term, since a sum of convex functions is convex. As \mu \to 0^+, the minimizer of \phi_\mu(x) approaches the solution of the original constrained problem. An illustrative example is the problem of minimizing the quadratic objective f(x) = x^2 subject to x \geq 1, whose exact solution is x^* = 1 with objective value 1. Using the logarithmic barrier, the augmented objective becomes \phi_\mu(x) = x^2 - \mu \log(x - 1), defined for x > 1. The minimizer x(\mu) satisfies the first-order condition 2x(\mu) - \mu / (x(\mu) - 1) = 0, or equivalently, x(\mu) = \frac{1 + \sqrt{1 + 2\mu}}{2}.
As \mu decreases, x(\mu) shifts toward the boundary: for \mu = 1, x(1) \approx 1.366 with \phi_1(x(1)) \approx 2.871; for \mu = 0.1, x(0.1) \approx 1.048 with \phi_{0.1}(x(0.1)) \approx 1.402; and for \mu = 0.01, x(0.01) \approx 1.005 with \phi_{0.01}(x(0.01)) \approx 1.063. These barrier iterations demonstrate how successively smaller \mu values yield approximations that converge interiorly to the optimal point, with the objective values approaching 1 from above. In practice, each \phi_\mu minimization can be solved using Newton's method, leveraging the positive definiteness of the Hessian \phi_\mu''(x) = 2 + \mu / (x - 1)^2 > 0.
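These figures can be reproduced directly from the closed-form minimizer above; a small sketch:

```python
import math

def x_of_mu(mu):
    # Closed-form minimizer of phi_mu(x) = x**2 - mu*log(x - 1), obtained
    # from the first-order condition 2x - mu/(x - 1) = 0.
    return (1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0

def phi(x, mu):
    return x**2 - mu * math.log(x - 1.0)

for mu in (1.0, 0.1, 0.01):
    x = x_of_mu(mu)
    print(f"mu={mu:<5} x(mu)={x:.4f} phi={phi(x, mu):.4f}")
```

Running this reproduces the three barrier iterations quoted in the text, with x(\mu) and \phi_\mu(x(\mu)) both converging toward 1 from above as \mu shrinks.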

Extensions and Properties

Higher-Dimensional Logarithmic Barriers

In higher-dimensional optimization problems involving polyhedral constraints defined by linear inequalities Ax \leq b, where A \in \mathbb{R}^{m \times n} and b \in \mathbb{R}^m, the logarithmic barrier function generalizes to B(x) = -\sum_{i=1}^m \log(b_i - a_i^T x), which is defined on the open interior feasible set \{ x \in \mathbb{R}^n \mid Ax < b \}. This formulation prevents solutions from reaching the boundary by assigning increasingly large values as any slack b_i - a_i^T x approaches zero, thereby enforcing strict feasibility. The construction aggregates individual logarithmic barriers for each constraint, extending the scalar case to multivariate domains. The convexity of B(x) follows from the properties of its components: each term -\log(b_i - a_i^T x) is convex as the composition of the convex and decreasing function -\log(t) (for t > 0) with the affine function b_i - a_i^T x, and the finite sum of convex functions preserves convexity. Thus, B(x) is convex on the interior, ensuring that barrier-augmented objectives remain convex when added to a convex objective function. This is essential for guaranteeing the existence of minimizers in interior-point methods. For more general convex constraints g(x) \leq 0, where each g_i: \mathbb{R}^n \to \mathbb{R} is convex, the logarithmic barrier extends to B(x) = -\sum_{i=1}^m \log(-g_i(x)), defined on the strict interior \{ x \mid g(x) < 0 \}. Here, convexity holds because each term -\log(-g_i(x)) is convex, as it is the composition of the convex nondecreasing function t \mapsto -\log(-t) (for t < 0) with the convex function g_i(x); the sum of convex functions is convex. This form accommodates nonlinear convex inequalities while maintaining the barrier's protective role against boundary violations.
In n-dimensional settings, the central path traces the solutions to barrier subproblems of the form x(\mu) = \arg\min_x \left\{ f(x) - \mu \sum_{j=1}^m \log(s_j) \right\}, where \mu > 0 is a barrier parameter and s_j denote slack variables associated with the constraints. As \mu decreases toward zero, these points approximate the optimal solution of the original constrained problem, forming a smooth trajectory through the interior. This path leverages the logarithmic terms to balance objective minimization with feasibility maintenance. A concrete example arises in inequality-form linear programming, \min c^T x subject to Ax \leq b, which can be recast with nonnegative slacks s via Ax + s = b, s \geq 0. The corresponding barrier function is then B(x, s) = -\sum_{j=1}^m \log(s_j), defined where s_j > 0 for all j, ensuring the equality constraints are satisfied while penalizing small slacks. This setup integrates the barrier directly into the equality-constrained subproblem, facilitating Newton-based solution methods along the central path.
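As a sketch of how these quantities are computed in practice, the polyhedral barrier's gradient A^T d and Hessian A^T \mathrm{diag}(d)^2 A, with d_i = 1/(b_i - a_i^T x), follow from direct differentiation; the random problem data below are purely illustrative:

```python
import numpy as np

# Log barrier B(x) = -sum_i log(b_i - a_i^T x) on the open set {x : Ax < b}.
# Gradient: A^T d with d_i = 1/(b_i - a_i^T x); Hessian: A^T diag(d)^2 A.

def log_barrier(A, b, x):
    s = b - A @ x                       # slacks, must be strictly positive
    if np.any(s <= 0):
        return np.inf, None, None       # outside the barrier's domain
    d = 1.0 / s
    val = -np.sum(np.log(s))
    grad = A.T @ d
    hess = A.T @ np.diag(d**2) @ A
    return val, grad, hess

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))
x = np.zeros(2)
b = A @ x + 1.0                         # choose b so that x has unit slacks

val, grad, hess = log_barrier(A, b, x)
eigs = np.linalg.eigvalsh(hess)
print(val, eigs.min() >= 0)             # convexity: Hessian is positive semidefinite
```

With all slacks equal to 1 the barrier value is 0 and the Hessian reduces to A^T A, whose eigenvalues are nonnegative, confirming the convexity argument above numerically.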

Key Properties and Convergence

Barrier functions in convex optimization possess key mathematical properties that enable efficient interior-point methods, with self-concordance being paramount for ensuring the effectiveness of Newton-based iterations. A thrice continuously differentiable convex function F defined on the interior of a closed convex set Q is self-concordant if it satisfies the inequality |D^3 F(x)[h, h, h]| \leq 2 (D^2 F(x)[h, h])^{3/2} for all x \in \operatorname{int} Q and directions h, where D^2 F(x)[h, h] and D^3 F(x)[h, h, h] denote the second and third directional derivatives, respectively. This condition bounds the third derivative by the 3/2 power of the second, and a barrier's "strength" is quantified by an additional parameter \nu, often equal to the number of constraints for standard barriers. Self-concordance guarantees that Newton steps for minimizing barrier-augmented objectives are affine-invariant, meaning the steps and the associated Newton decrement remain unchanged under nonsingular affine transformations of the variables, which enhances the method's robustness across problem formulations. In the context of barrier methods, self-concordance facilitates a unified analysis of Newton iterations for solving the barrier subproblem \min_x \phi_\mu(x) = f(x) + \mu B(x), where B is the barrier function and \mu > 0 is the barrier parameter. The Newton step \Delta x is computed by solving the linear system \nabla^2 \phi_\mu(x) \Delta x = -\nabla \phi_\mu(x), and the positive definiteness and controlled variation of the Hessian guaranteed by self-concordance ensure quadratic local convergence near the minimizer. For global convergence, short-step methods incorporate damping, taking partial steps along the Newton direction (e.g., \alpha \Delta x with \alpha \leq 1) to maintain proximity to the central path, while path-following methods adjust \mu sequentially to trace the central path defined by the minimizers of \phi_\mu.
Logarithmic barriers, such as those for simplices or positive orthants, are prototypical examples of self-concordant functions, with \nu growing linearly in the problem dimension. Barrier methods exhibit strong connections to duality through the structure of the barrier subproblems. Specifically, the dual function associated with the barrier-augmented primal problem is g_\mu(y) = \sup_x \{ y^T Ax - b^T y - f(x) - \mu B(x) \}, which itself admits a self-concordant barrier via the Legendre transform of the primal barrier, preserving the parameter \nu and enabling symmetric primal-dual algorithms that exploit complementary slackness at optimality. The self-concordance property underpins polynomial-time convergence guarantees for interior-point methods. For a \nu-self-concordant barrier, path-following algorithms require O(\sqrt{\nu} \log(1/\epsilon)) Newton iterations to achieve an \epsilon-accurate solution, where the logarithm accounts for the initial distance to the optimum and the barrier parameter \nu measures the complexity of the feasible set. This bound holds uniformly across affine-equivalent problems due to the affine invariance of self-concordance, establishing that barrier methods solve broad classes of convex programs in time polynomial in the input size and the logarithm of the reciprocal accuracy.
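The damped (short-step) Newton iteration can be illustrated in one dimension. The function \phi(x) = 3x - \log x - \log(1 - x) below is an invented example, a linear objective plus the logarithmic barrier of the interval (0, 1); for self-concordant functions, the step length 1/(1 + \lambda), with \lambda the Newton decrement, keeps the iterate inside the domain without a line search:

```python
import math

# Damped Newton for the self-concordant phi(x) = 3x - log(x) - log(1 - x).
# Newton decrement (1-D): lambda = |phi'(x)| / sqrt(phi''(x)).
# Damped step: x <- x - (1/(1 + lambda)) * phi'(x)/phi''(x), stays in (0, 1).

def grad(x):  return 3.0 - 1.0 / x + 1.0 / (1.0 - x)
def hess(x):  return 1.0 / x**2 + 1.0 / (1.0 - x)**2

x = 0.99                                 # start very close to the boundary
for _ in range(100):
    g, h = grad(x), hess(x)
    lam = abs(g) / math.sqrt(h)          # Newton decrement
    if lam < 1e-10:
        break                            # quadratic phase has converged
    x -= (1.0 / (1.0 + lam)) * g / h     # damped step

print(x)
```

The first-order condition 3 - 1/x + 1/(1-x) = 0 has its root in (0, 1) at x^* = (5 - \sqrt{13})/6 \approx 0.2324; the damped phase makes guaranteed constant progress per step, after which convergence is quadratic, matching the global-plus-local picture described above.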

Applications in Optimization

Use in Linear Programming

In linear programming, barrier functions, particularly the logarithmic barrier, are employed in primal-dual interior-point methods to solve problems of the form \min \{ c^T x \mid Ax = b, x \geq 0 \}, where A \in \mathbb{R}^{m \times n}, b \in \mathbb{R}^m, and c \in \mathbb{R}^n. The associated dual problem is \max \{ b^T y \mid A^T y + s = c, s \geq 0 \}, with y \in \mathbb{R}^m and s \in \mathbb{R}^n. To handle the non-negativity constraints, the logarithmic barrier function -\sum_{i=1}^n \log x_i - \sum_{i=1}^n \log s_i is incorporated into the objective, leading to the perturbed primal-dual problem \min_x \max_y \{ c^T x - b^T y - \mu (\sum_{i=1}^n \log x_i + \sum_{i=1}^n \log s_i) \mid Ax = b, A^T y + s = c \}, where \mu > 0 is a barrier parameter. The central path consists of solutions (x(\mu), y(\mu), s(\mu)) to this problem for decreasing \mu, satisfying the centrality condition X S e = \mu e, where X = \operatorname{diag}(x), S = \operatorname{diag}(s), and e is the vector of ones; as \mu \to 0, these points approach the optimal primal-dual solution. The predictor-corrector algorithm, a key variant of these methods, computes directions to follow the central path efficiently. In the predictor step, an affine-scaling direction is calculated by solving the Newton system for the KKT conditions while ignoring the barrier term (setting \mu = 0), predicting progress toward the optimum. Maximum step lengths are then determined to maintain feasibility, and an affine centrality measure \mu_{\text{aff}} is computed. The corrector step adjusts by solving a modified Newton system with centering parameter \sigma \in (0,1], often adaptively set as \sigma = (\mu_{\text{aff}}/\mu)^3, to recenter the iterate toward the central path. The full step combines these directions, with step-length damping ensuring positivity of x and s, and \mu is updated (typically reduced by a factor) after each iteration until the duality gap satisfies c^T x - b^T y \leq \epsilon. This approach leverages the self-concordance of the logarithmic barrier to justify the damped steps.
Theoretically, these methods achieve polynomial-time complexity, requiring O(\sqrt{n} L) iterations to reach an \epsilon-optimal solution, where L is the bit length of the input data; each iteration solves a linear system of size O(n), costing O(n^3) arithmetic operations via Cholesky factorization, yielding overall complexity O(n^{3.5} L). This bound relies on the self-concordance property of the logarithmic barrier, ensuring quadratic convergence near the optimum and controlled progress along the central path. A simple illustrative example is the linear program \max \{ 3x_1 + 3x_2 \mid x_1 + x_2 \leq 4, x_1 \geq 0, x_2 \geq 0 \}, whose optimal solution is x_1 = x_2 = 2 with objective value 12. Applying the primal logarithmic barrier P(x, \mu) = 3x_1 + 3x_2 + \mu [\log(4 - x_1 - x_2) + \log x_1 + \log x_2], each step solves for central points by setting the partial derivatives to zero, yielding symmetric solutions x_1 = x_2. Starting from a feasible interior point and decreasing \mu (e.g., from 1000 to 0.01), the iterates trace the central path: for \mu = 1000, x_1 = x_2 \approx 1.335 (objective \approx 8.01); for \mu = 1, x_1 = x_2 \approx 1.859 (objective \approx 11.15); for \mu = 0.01, x_1 = x_2 \approx 1.998 (objective \approx 11.99), converging to the optimum as \mu \to 0. In primal-dual extensions, the dual slacks follow similarly, tightening complementarity. Compared to the simplex method, interior-point methods using barriers run in polynomial time in the worst case, whereas simplex can require exponentially many pivots; additionally, they support warm starts from previous solutions, enabling efficient handling of parametric or sequential LPs.
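The central-path values quoted above can be reproduced by exploiting the symmetry x_1 = x_2 = x, under which the stationarity condition of P(x, \mu) reduces to a quadratic in x (a small algebra step not spelled out in the text):

```python
import math

# Central path for max 3x1 + 3x2 s.t. x1 + x2 <= 4, x >= 0, via the primal
# log barrier P(x, mu) = 3x1 + 3x2 + mu*[log(4 - x1 - x2) + log x1 + log x2].
# By symmetry x1 = x2 = x, and dP/dx = 0 reduces to the quadratic
#     6x^2 + (3mu - 12)x - 4mu = 0,
# whose positive root is the central point for that mu.

def central_point(mu):
    a, b, c = 6.0, 3.0 * mu - 12.0, -4.0 * mu
    return (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

for mu in (1000.0, 1.0, 0.01):
    x = central_point(mu)
    print(f"mu={mu:<8} x1=x2={x:.4f} objective={6.0 * x:.2f}")
```

This reproduces the three central-path points in the text and, as \mu \to 0, the root approaches the optimum x_1 = x_2 = 2.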

Role in Convex Optimization

Barrier functions extend the interior-point framework from linear programming to general convex optimization, enabling efficient solutions for conic, semidefinite, and nonlinear problems by incorporating self-concordant barriers that enforce feasibility while allowing polynomial-time convergence. These methods construct a sequence of barrier subproblems whose solutions approximate the original optimum as the barrier parameter decreases, leveraging the analytic properties of self-concordant functions to obtain theoretical performance bounds. In conic programming, universal self-concordant barrier functions provide a unified approach for proper convex cones, including the self-dual cones arising in second-order cone programs (SOCPs). Introduced by Nesterov and Nemirovski, these barriers exist for any proper convex cone and have a self-concordance parameter proportional to the cone's dimension, facilitating path-following algorithms with iteration complexity O(\sqrt{\nu} \log(1/\epsilon)), where \nu is the parameter and \epsilon is the desired accuracy. For the second-order cone \mathcal{Q}^{m} = \{(x, t) \in \mathbb{R}^{m-1} \times \mathbb{R} \mid \|x\|_2 \leq t \}, the standard barrier is B(x, t) = -\log(t^2 - \|x\|_2^2), which is 2-self-concordant and repels iterates from the boundary. Semidefinite programming (SDP), a key subclass of conic programming, employs the barrier B(X) = -\log \det(X) for positive definite matrices X \succ 0 in the semidefinite cone \mathcal{S}_{+}^n. This n-self-concordant function, where n is the matrix dimension, underpins primal-dual interior-point methods that solve SDPs in polynomial time, with each Newton step requiring the solution of a linear system derived from the barrier's Hessian. The barrier ensures strict feasibility and drives iterates along the central path. For nonlinear convex problems of the form \min f(x) subject to g(x) \in \mathcal{K}, where f is convex and \mathcal{K} is a proper cone, composite barriers combine the objective with a barrier: B_{\mu}(x) = f(x) + \mu B_{\mathcal{K}}(g(x)).
Under self-concordance of f and B_{\mathcal{K}}, this yields a sequence of smooth approximations solvable via Newton iterations, generalizing barrier methods to broad nonlinear settings. These techniques are implemented in optimization software such as CVX and MOSEK, which model and solve conic and semidefinite programs using barrier-based interior-point solvers. For example, in portfolio optimization, constraints on risk or variance can be cast as SOCPs, with the second-order cone barrier enabling efficient risk-return trade-off computations in large-scale instances. Limitations arise in high-dimensional problems, where the self-concordance parameter \nu scales with dimension, potentially requiring more iterations, and evaluating the barrier's Hessian (often O(n^3) for SDPs) becomes prohibitive without sparsity exploitation. Problem-specific barriers are thus preferred over universal ones to reduce \nu and computational cost.
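A brief numerical sketch of the semidefinite-cone barrier's behavior near the boundary, using diagonal matrices for simplicity (the specific matrices are illustrative):

```python
import numpy as np

# Semidefinite-cone barrier B(X) = -log det(X), with gradient -X^{-1}.
# As the smallest eigenvalue of X approaches 0 (the boundary of the
# positive semidefinite cone), B(X) diverges to +infinity.

def logdet_barrier(X):
    sign, logabsdet = np.linalg.slogdet(X)
    if sign <= 0:
        return np.inf, None                 # outside the cone's interior
    return -logabsdet, -np.linalg.inv(X)    # value and gradient

# Shrink the smallest eigenvalue toward zero and watch the barrier grow.
vals = []
for eps in (1.0, 1e-2, 1e-4, 1e-6):
    X = np.diag([2.0, 1.0, eps])
    b, _ = logdet_barrier(X)
    vals.append(b)

print(vals)   # strictly increasing as eps -> 0
```

The blow-up as the smallest eigenvalue shrinks is exactly what keeps interior-point iterates strictly inside the cone \mathcal{S}_{+}^n.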
