
Quadratic programming

Quadratic programming (QP) is a type of mathematical optimization problem in which the objective function is quadratic and the constraints are linear, making it a special case of nonlinear programming that generalizes linear programming by incorporating quadratic terms in the objective. The standard formulation of a quadratic program seeks to minimize \frac{1}{2} x^T Q x + c^T x, where x is the vector of decision variables, Q is a symmetric n \times n matrix, and c is a constant vector, subject to linear constraints such as Ax \leq b, A_{eq}x = b_{eq}, and bounds l \leq x \leq u. If the matrix Q is positive semidefinite, the problem is convex, guaranteeing a global optimum and enabling efficient solution methods; otherwise, it may be nonconvex with multiple local minima. Quadratic programming has broad applications across fields, including finance for optimal portfolio selection as pioneered by Harry Markowitz, who received the 1990 Nobel Memorial Prize in Economic Sciences for this work; machine learning for training support vector machines; logistics in solving quadratic knapsack problems; and engineering for power management in plug-in hybrid electric vehicles. Common algorithms for solving quadratic programs include the active-set method, which iteratively adjusts the set of active constraints; interior-point methods, such as predictor-corrector approaches that navigate the interior of the feasible region; and trust-region-reflective algorithms that solve subproblems within a trust region using preconditioned conjugate gradients. The origins of quadratic programming trace back to 1943 with H.B. Mann's work, with significant advancements in the 1950s by researchers including Barankin, Dorfman, Frank, and Wolfe, establishing it as a foundational tool in optimization theory.

Problem

Standard Form

Quadratic programming (QP) in its standard form involves minimizing a quadratic objective subject to linear equality constraints and simple bound constraints on the variables. The problem is formulated as \min_x \ \frac{1}{2} x^T Q x + c^T x subject to A x = b, \quad l \leq x \leq u, where x \in \mathbb{R}^n is the vector of decision variables, Q \in \mathbb{R}^{n \times n} is a symmetric matrix, c \in \mathbb{R}^n is a constant vector, A \in \mathbb{R}^{m \times n} (with m \leq n) is the constraint matrix of full row rank, b \in \mathbb{R}^m is a constant vector, and l, u \in \mathbb{R}^n define the lower and upper bounds on x (which may be -\infty or \infty for unbounded variables). The matrix Q must be symmetric to ensure the quadratic term \frac{1}{2} x^T Q x defines a valid quadratic form, as symmetry guarantees that the off-diagonal elements are consistently accounted for without duplication in the expansion. The factor of \frac{1}{2} in the objective is included for computational and analytical convenience: the gradient of the objective with respect to x is then Q x + c, simplifying derivatives and Newton updates in solution algorithms compared to the form without \frac{1}{2}, where the gradient would be 2 Q x + c. This scaling also aligns the objective with the canonical representation of second-order models in optimization, where the Hessian approximation (here exactly Q) appears without a factor of 2 in the first-order conditions.

For the problem to be convex, Q must be positive semidefinite (PSD), meaning that x^T Q x \geq 0 for all x. To see why this ensures convexity, note that the objective function f(x) = \frac{1}{2} x^T Q x + c^T x is twice continuously differentiable, with Hessian matrix \nabla^2 f(x) = Q. A twice-differentiable function is convex if and only if its Hessian is PSD everywhere, which holds here since Q is constant. Thus, if Q \succeq 0, f(x) is convex, and the QP admits a global minimum over a convex feasible set. If Q is not PSD, the problem may be nonconvex, potentially having multiple local minima. This form is canonical because it generalizes linear programming (where Q = 0) while preserving structure for efficient solution methods, and it directly arises in subproblems of nonlinear optimization.

The dimensions reflect a problem with n variables and m equality constraints, assuming A has full row rank to avoid redundancy. Feasibility requires that the set \{x \in \mathbb{R}^n \mid A x = b, \, l \leq x \leq u\} is nonempty, which can be checked via linear programming or phase-one methods. For the minimum to exist and be finite, the objective must be bounded below on the feasible set; this holds if Q \succeq 0 and, in addition, either the objective is coercive (as when Q is positive definite) or the feasible set is bounded, ensuring no recession directions along which f(x) decreases unboundedly. If the feasible set is unbounded and Q has directions of negative curvature, the problem may be unbounded below. This standard form originated in the 1950s as an extension of linear programming to handle quadratic objectives, with seminal work by Marguerite Frank and Philip Wolfe introducing an algorithm for solving such problems in 1956. A special case arises in least squares, where Q = A^T A (up to scaling) for a design matrix A.
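As a minimal illustration of these conventions (not taken from any particular source), the following Python sketch builds a small standard-form QP with invented data, checks positive semidefiniteness of Q via its eigenvalues, and confirms numerically that the \frac{1}{2} scaling makes the gradient equal Q x + c.

```python
import numpy as np

# A small standard-form QP objective: 0.5 x^T Q x + c^T x  (data chosen arbitrarily)
Q = np.array([[4.0, 1.0],
              [1.0, 2.0]])   # symmetric
c = np.array([-1.0, -3.0])

# Convexity check: Q is PSD iff all eigenvalues are >= 0
eigenvalues = np.linalg.eigvalsh(Q)
print("eigenvalues of Q:", eigenvalues,
      "-> convex QP" if eigenvalues.min() >= 0 else "-> nonconvex QP")

f = lambda x: 0.5 * x @ Q @ x + c @ x

# The 1/2 factor makes the analytic gradient Q x + c; verify by finite differences
x0 = np.array([0.7, -0.2])
grad_analytic = Q @ x0 + c
eps = 1e-6
grad_fd = np.array([(f(x0 + eps * e) - f(x0 - eps * e)) / (2 * eps)
                    for e in np.eye(2)])
print("analytic gradient:          ", grad_analytic)
print("finite-difference gradient: ", grad_fd)   # should agree to ~1e-6
```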

Constrained Least Squares

Constrained least squares arises in data-fitting scenarios where a model must satisfy additional linear equality constraints, such as fixed parameters or interpolation conditions. The problem is to minimize the squared Euclidean norm of the residual subject to these constraints: \min_x \| A x - b \|_2^2 \quad \text{subject to} \quad C x = d, where A \in \mathbb{R}^{m \times n} is the design matrix, b \in \mathbb{R}^m is the observation vector, C \in \mathbb{R}^{p \times n} specifies the constraints with right-hand side d \in \mathbb{R}^p, and typically m \geq n > p to ensure feasibility and relevance in overdetermined systems. This formulation is directly equivalent to an equality-constrained quadratic program, as the objective expands algebraically into a quadratic form. Expanding the norm gives \| A x - b \|_2^2 = x^T A^T A x - 2 b^T A x + b^T b. The constant b^T b does not affect the minimizer, so the problem reduces to minimizing the quadratic \frac{1}{2} x^T Q x + c^T x subject to C x = d, where Q = 2 A^T A (positive semidefinite) and c = -2 A^T b. This matches the standard quadratic programming form with a PSD Hessian, ensuring the problem is convex and amenable to QP solvers.

A representative application is fitting a straight line y = m x + k to data points while fixing the y-intercept k to a known value, such as k = 0 for models through the origin. Consider the data points (x_i, y_i) = (1, 2), (2, 3), (3, 5). With k = 0, the model simplifies to y = m x, and the constraint fixes the second component of x = [m, k]^T at 0. Reformulating for the free parameter m, we have A = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, b = \begin{bmatrix} 2 \\ 3 \\ 5 \end{bmatrix}, yielding the least-squares solution m = (A^T A)^{-1} A^T b = 23/14 \approx 1.64. The residual sum of squares is \| A (23/14) - b \|_2^2 \approx 0.21. For comparison, the unconstrained line fit (allowing free k) gives m = 1.5, k \approx 0.33, with a lower residual of approximately 0.17, illustrating the trade-off imposed by the constraint. This equivalence allows solving constrained least-squares problems via QP methods like active-set or interior-point algorithms.

The solution to the problem is unique provided the constraints are consistent (i.e., there exists some x satisfying C x = d) and the stacked matrix \begin{bmatrix} A \\ C \end{bmatrix} has full column rank, ensuring the quadratic objective is strictly convex over the affine feasible set. In such cases, the Karush-Kuhn-Tucker (KKT) conditions yield a unique primal-dual pair via the nonsingular linear system \begin{bmatrix} 2 A^T A & C^T \\ C & 0 \end{bmatrix} \begin{bmatrix} x \\ \lambda \end{bmatrix} = \begin{bmatrix} 2 A^T b \\ d \end{bmatrix}, where \lambda are the Lagrange multipliers.
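A short numpy sketch reproducing the worked line-fit example by solving the KKT system from this section directly; the data points are the ones given in the text.

```python
import numpy as np

# Fit y = m*x + k to (1,2), (2,3), (3,5) with the intercept fixed: k = 0
A = np.array([[1.0, 1.0],
              [2.0, 1.0],
              [3.0, 1.0]])          # columns correspond to [m, k]
b = np.array([2.0, 3.0, 5.0])
C = np.array([[0.0, 1.0]])          # constraint: k = 0
d = np.array([0.0])

# KKT system:  [2 A^T A  C^T] [x]   [2 A^T b]
#              [   C      0 ] [l] = [   d   ]
n, p = A.shape[1], C.shape[0]
K = np.block([[2 * A.T @ A, C.T],
              [C, np.zeros((p, p))]])
rhs = np.concatenate([2 * A.T @ b, d])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:n], sol[n:]

print("constrained fit: m = %.4f (expected 23/14 = 1.6429), k = %.4f" % (x[0], x[1]))
print("residual sum of squares: %.4f (expected ~0.21)" % np.sum((A @ x - b) ** 2))

# Unconstrained comparison (free intercept)
x_free, *_ = np.linalg.lstsq(A, b, rcond=None)
print("unconstrained fit: m = %.4f, k = %.4f, RSS = %.4f"
      % (x_free[0], x_free[1], np.sum((A @ x_free - b) ** 2)))
```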

Generalizations to Inequalities

The generalization of quadratic programming to inequality constraints allows for modeling real-world scenarios involving bounds on variables or one-sided restrictions, building directly on the equality-constrained case by incorporating linear inequalities alongside equalities. The full problem formulation is to minimize \frac{1}{2} x^T Q x + c^T x subject to A x \leq b, G x = h, and l \leq x \leq u, where x \in \mathbb{R}^n is the decision variable, Q \in \mathbb{R}^{n \times n} is symmetric, c \in \mathbb{R}^n, A \in \mathbb{R}^{m_A \times n}, b \in \mathbb{R}^{m_A}, G \in \mathbb{R}^{m_G \times n}, h \in \mathbb{R}^{m_G}, and l, u \in \mathbb{R}^n define componentwise bounds. A standard technique to handle these inequalities is the introduction of nonnegative slack variables, which transform the problem into an equivalent equality-constrained form. For the inequality A x \leq b, slack variables s \geq 0 are added such that A x + s = b; similarly, the bounds l \leq x \leq u can be expressed using additional slacks. This results in an augmented equality-constrained QP in the extended variables (x, s), preserving the original structure while enforcing feasibility through nonnegativity. The feasible region for this generalized problem, defined by the linear equalities and inequalities, forms a polyhedron as the intersection of half-spaces and hyperplanes in \mathbb{R}^n. This polyhedral structure ensures the feasible set is convex, regardless of the objective function. When Q is positive semidefinite, the quadratic objective is convex, rendering the entire problem convex and guaranteeing that any local minimizer is global. This extension to inequalities was pioneered in the late 1950s by E. M. L. Beale to address practical optimization challenges with linear constraints. Solutions to these problems satisfy the Karush-Kuhn-Tucker optimality conditions, which extend the equality case to account for active inequality constraints.
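A minimal sketch of the slack-variable transformation, using arbitrary example data: it rewrites A x \leq b as A x + s = b with s \geq 0, producing the augmented equality-constrained QP in (x, s) described above.

```python
import numpy as np

# Original inequality-constrained QP data (made up for illustration):
# minimize 0.5 x^T Q x + c^T x  subject to  A x <= b
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 2.0],
              [-1.0, 0.0]])
b = np.array([4.0, 0.0])

m, n = A.shape

# Augmented problem in z = (x, s): minimize 0.5 z^T Q' z + c'^T z
# subject to [A  I] z = b  and  s >= 0  (x unrestricted here)
Q_aug = np.block([[Q, np.zeros((n, m))],
                  [np.zeros((m, n)), np.zeros((m, m))]])  # slacks do not enter the objective
c_aug = np.concatenate([c, np.zeros(m)])
A_eq = np.hstack([A, np.eye(m)])                           # encodes A x + s = b
lower = np.concatenate([np.full(n, -np.inf), np.zeros(m)]) # s >= 0

print("augmented equality constraint matrix [A | I]:\n", A_eq)
print("lower bounds on (x, s):", lower)
# Any equality-constrained QP solver with simple bounds can now be
# applied to the data (Q_aug, c_aug, A_eq, b, lower).
```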

Theoretical Foundations

Optimality Conditions

In quadratic programming (QP), the first-order optimality conditions for a local minimum are given by the Karush-Kuhn-Tucker (KKT) conditions, which extend the method of Lagrange multipliers to problems with inequality constraints. These conditions are necessary under appropriate constraint qualifications and sufficient for global optimality when the QP is convex. Consider the standard QP formulation: minimize \frac{1}{2} x^T Q x + c^T x subject to equality constraints A x = b and inequality constraints G x \leq h, where Q \in \mathbb{R}^{n \times n}, c \in \mathbb{R}^n, A \in \mathbb{R}^{m \times n}, b \in \mathbb{R}^m, G \in \mathbb{R}^{p \times n}, and h \in \mathbb{R}^p. The KKT conditions for a point x^* to be optimal require the existence of multipliers \lambda^* \in \mathbb{R}^m and \mu^* \in \mathbb{R}^p such that:
  • Stationarity: Q x^* + c + A^T \lambda^* + G^T \mu^* = 0. This states that the negative gradient of the objective is a linear combination of the constraint gradients, with nonnegative weights \mu^* on the inequality constraints.
  • Primal feasibility: A x^* = b and G x^* \leq h. These ensure x^* lies in the feasible set.
  • Dual feasibility: \mu^* \geq 0. This requires nonnegative multipliers for inequalities.
  • Complementary slackness: \mu^{*T} (G x^* - h) = 0, or componentwise \mu_i^* (G_i x^* - h_i) = 0 for i = 1, \dots, p. This implies that if an inequality is inactive (G_i x^* < h_i), then \mu_i^* = 0.
For convex QPs, where Q is positive semidefinite, the KKT conditions are sufficient for global optimality: any feasible point satisfying the KKT conditions achieves the minimum value of the objective. This follows because the objective is convex, the feasible set is convex, and the stationarity condition, combined with the other KKT requirements, implies no descent direction exists from x^*. The KKT conditions are derived from the Lagrangian \mathcal{L}(x, \lambda, \mu) = \frac{1}{2} x^T Q x + c^T x + \lambda^T (A x - b) + \mu^T (G x - h). For equality-constrained problems, optimality requires \nabla_x \mathcal{L} = 0 and primal feasibility. To handle inequalities, the conditions introduce dual feasibility and complementary slackness, ensuring the Lagrangian's saddle-point property holds at the optimum. Necessity relies on a constraint qualification, such as the linear independence constraint qualification (LICQ), which states that the gradients \{ \nabla (A_i x - b_i) \}_{i=1}^m \cup \{ \nabla (G_j x - h_j) \}_{j \in \mathcal{A}(x^*)} are linearly independent at x^*, where \mathcal{A}(x^*) = \{ j \mid G_j x^* = h_j \} is the active set. Under LICQ, any local minimum satisfies the KKT conditions; in fact, because all constraints in a QP are affine, a constraint qualification holds automatically, so the KKT conditions are necessary at any local minimum even without LICQ.
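The following sketch checks the four KKT conditions numerically on a tiny convex QP whose solution is known in closed form; the problem data are invented for illustration.

```python
import numpy as np

# Tiny convex QP: minimize 0.5||x||^2 - x_1  subject to  x_1 <= 0.5
# The unconstrained minimizer is (1, 0); the constraint is active at x* = (0.5, 0).
Q = np.eye(2)
c = np.array([-1.0, 0.0])
G = np.array([[1.0, 0.0]])
h = np.array([0.5])

x_star = np.array([0.5, 0.0])
mu_star = np.array([0.5])   # from stationarity: (x1 - 1) + mu = 0 at x1 = 0.5

tol = 1e-10
stationarity = Q @ x_star + c + G.T @ mu_star          # should be the zero vector
primal_feas = np.all(G @ x_star <= h + tol)
dual_feas = np.all(mu_star >= -tol)
comp_slack = abs(mu_star @ (G @ x_star - h)) <= tol

print("stationarity residual:", stationarity)
print("primal feasible:", primal_feas, "| dual feasible:", dual_feas,
      "| complementary slackness:", comp_slack)
```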

Lagrangian Duality

In quadratic programming, the Lagrangian duality framework provides a means to derive a dual optimization problem from the primal formulation, offering lower bounds on the primal objective and insights into optimality. Consider the standard primal quadratic program: \min_x \ \frac{1}{2} x^T Q x + c^T x subject to A x = b (equality constraints) and G x \leq h (inequality constraints), where Q is symmetric positive semidefinite to ensure convexity. The Lagrangian is formed by incorporating the constraints via multipliers \lambda (unrestricted for equalities) and \mu \geq 0 (for inequalities): L(x, \lambda, \mu) = \frac{1}{2} x^T Q x + c^T x + \lambda^T (A x - b) + \mu^T (G x - h). This associates dual variables with each constraint, transforming the constrained problem into an unconstrained one over x for fixed multipliers. The dual function is defined as the pointwise infimum of the Lagrangian over the primal variable: g(\lambda, \mu) = \inf_x L(x, \lambda, \mu). Assuming Q \succ 0, the infimum is finite and attained at x^*(\lambda, \mu) = -Q^{-1} (c + A^T \lambda + G^T \mu), yielding a concave quadratic function for g. The dual problem then maximizes this function subject to dual feasibility (\mu \geq 0, \lambda unrestricted), resulting in another quadratic program: \max_{\lambda, \mu \geq 0} \ g(\lambda, \mu). This dual QP provides a tractable lower bound on the primal optimum and links to the Karush-Kuhn-Tucker (KKT) conditions, which equate primal and dual solutions under appropriate qualifications. Weak duality holds universally: for any primal feasible x and dual feasible (\lambda, \mu), the primal objective satisfies \frac{1}{2} x^T Q x + c^T x \geq g(\lambda, \mu), implying the primal optimum p^* \geq d^* (dual optimum). Strong duality, where p^* = d^* and the duality gap is zero, holds for convex QPs under Slater's condition: there exists a strictly feasible point x such that A x = b and G x < h; because the constraints are affine, the refined form of Slater's condition requires only feasibility, so strong duality holds for any feasible, bounded convex QP. This ensures the existence of optimal dual multipliers and simplifies sensitivity analysis. For the equality-constrained case (no inequalities, so \mu = 0 and G x \leq h is absent), the dual simplifies explicitly. The dual function becomes g(\lambda) = \inf_x \left[ \frac{1}{2} x^T Q x + c^T x + \lambda^T (A x - b) \right], and assuming Q \succ 0, the dual problem is \max_\lambda \ -\frac{1}{2} \lambda^T A Q^{-1} A^T \lambda - (c^T Q^{-1} A^T + b^T) \lambda, with the constant term -\frac{1}{2} c^T Q^{-1} c omitted as it does not affect the maximizer (strong duality holds here whenever A x = b is feasible). This form highlights the quadratic structure of the dual, mirroring the primal.
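A numerical check of strong duality for the equality-constrained case, using invented problem data with Q \succ 0: the primal optimum from the KKT system matches the value of the closed-form dual at the KKT multiplier.

```python
import numpy as np

# Equality-constrained QP: minimize 0.5 x^T Q x + c^T x  subject to  A x = b
Q = np.array([[3.0, 0.5], [0.5, 1.0]])   # positive definite
c = np.array([1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# Primal solution from the KKT system [Q A^T; A 0] [x; lam] = [-c; b]
n, m = Q.shape[0], A.shape[0]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([-c, b]))
x, lam = sol[:n], sol[n:]
p_star = 0.5 * x @ Q @ x + c @ x

# Dual: g(lam) = -0.5 lam^T A Q^{-1} A^T lam - (c^T Q^{-1} A^T + b^T) lam - 0.5 c^T Q^{-1} c
Qinv = np.linalg.inv(Q)
g = lambda lm: (-0.5 * lm @ (A @ Qinv @ A.T) @ lm
                - (c @ Qinv @ A.T + b) @ lm
                - 0.5 * c @ Qinv @ c)
d_star = g(lam)   # the KKT multiplier maximizes the concave dual

print("primal optimum p* = %.6f" % p_star)
print("dual value at lam* = %.6f" % d_star)   # equal: zero duality gap
```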

Solution Methods

Methods for Equality-Constrained QP

Equality-constrained quadratic programming (QP) involves minimizing a quadratic objective function subject to linear equality constraints, formulated as \min_x \frac{1}{2} x^T Q x + c^T x subject to A x = b, where Q \in \mathbb{R}^{n \times n} is symmetric, c \in \mathbb{R}^n, A \in \mathbb{R}^{m \times n} with m < n, and b \in \mathbb{R}^m. Under suitable assumptions, this problem admits a closed-form solution derived from the Karush-Kuhn-Tucker (KKT) conditions, which form a linear system solvable analytically. The analytical solution proceeds by introducing Lagrange multipliers \lambda \in \mathbb{R}^m for the constraints, leading to the KKT system: \begin{bmatrix} Q & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} x \\ \lambda \end{bmatrix} = \begin{bmatrix} -c \\ b \end{bmatrix}. Assuming Q is invertible, the solution for x can be expressed in terms of the unconstrained minimizer x_{\text{free}} = -Q^{-1} c and a projection correction onto the constraint manifold: x = x_{\text{free}} - Q^{-1} A^T (A Q^{-1} A^T)^{-1} (A x_{\text{free}} - b). This formula adjusts the unconstrained solution to satisfy the equalities by solving for \lambda = (A Q^{-1} A^T)^{-1} (A x_{\text{free}} - b) and substituting back. The matrix A Q^{-1} A^T must be invertible, which holds if A has full row rank. This approach is computationally direct for small to moderate dimensions but requires matrix inversions, which can be expensive for large n.

An alternative elimination method reduces the problem to an unconstrained QP in fewer variables by parameterizing the feasible set. First, find a particular solution x_0 to A x = b, for example, x_0 = A^T (A A^T)^{-1} b assuming A has full row rank. Then, let Z \in \mathbb{R}^{n \times (n-m)} be a basis for the null space of A (i.e., A Z = 0), so the general solution is x = x_0 + Z z for z \in \mathbb{R}^{n-m}. Substituting into the objective yields the reduced unconstrained QP: \min_z \frac{1}{2} z^T (Z^T Q Z) z + (Q x_0 + c)^T Z z + \frac{1}{2} x_0^T Q x_0 + c^T x_0, which has the closed-form solution z = -(Z^T Q Z)^{-1} Z^T (Q x_0 + c), assuming Z^T Q Z is invertible (positive definite if Q is). The original solution is then x = x_0 + Z z. This method avoids explicit Lagrange multipliers and is useful when a null-space basis is readily available, though computing Z can be costly for large sparse problems.

These methods rely on key assumptions: Q must be positive definite (or at least the reduced Hessian Z^T Q Z \succ 0) to ensure a unique global minimum, and A must have full row rank to make the constraints consistent and nonredundant. If Q is singular, the solution may not be unique, but a minimum exists if the objective is bounded below on the feasible set; pseudoinverses can be used, such as replacing Q^{-1} with the Moore-Penrose pseudoinverse Q^\dagger in the projection formula, yielding x = x_{\text{free}} - Q^\dagger A^T (A Q^\dagger A^T)^\dagger (A x_{\text{free}} - b) with x_{\text{free}} = -Q^\dagger c, provided compatibility conditions hold. Singularity in A Q^{-1} A^T (or A A^T) indicates linearly dependent constraints, resolvable by rank-revealing decompositions like QR factorization. Early analytical and elimination-based methods for equality-constrained QP emerged in the 1950s, building on linear programming techniques; for instance, Philip Wolfe extended the simplex method to quadratic objectives in 1959, laying groundwork for handling constraints through elimination and reduced systems. These direct approaches remain foundational, though modern implementations often incorporate iterative refinements for numerical stability in large-scale settings.
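A compact numpy sketch of the null-space (elimination) method described above, using the SVD to obtain a null-space basis Z; the data are arbitrary.

```python
import numpy as np

def eqp_nullspace(Q, c, A, b):
    """Solve min 0.5 x^T Q x + c^T x s.t. A x = b by the null-space method.
    Assumes A has full row rank and the reduced Hessian Z^T Q Z is positive definite."""
    # Particular solution of A x = b (minimum-norm choice)
    x0 = A.T @ np.linalg.solve(A @ A.T, b)
    # Null-space basis Z of A from the SVD: rows of Vt beyond rank(A) span ker(A)
    _, _, Vt = np.linalg.svd(A)
    Z = Vt[A.shape[0]:].T                       # n x (n - m), satisfies A Z = 0
    # Reduced unconstrained QP in z: min 0.5 z^T (Z^T Q Z) z + (Q x0 + c)^T Z z
    z = np.linalg.solve(Z.T @ Q @ Z, -Z.T @ (Q @ x0 + c))
    return x0 + Z @ z

# Example data (invented): 3 variables, 1 equality constraint
Q = np.diag([2.0, 4.0, 2.0])
c = np.array([-1.0, 0.0, 1.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])

x = eqp_nullspace(Q, c, A, b)
print("x =", x, "| constraint residual:", A @ x - b)   # expected x = (1.7, 0.6, 0.7)
```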

Interior-Point Methods

Interior-point methods for quadratic programming (QP) address convex QP problems by incorporating barrier functions to handle inequality constraints while staying within the interior of the feasible region. These methods transform the constrained optimization into a sequence of unconstrained subproblems by adding a barrier term that penalizes approaches to the boundary, ensuring iterates remain strictly feasible. For a standard convex QP of the form minimize \frac{1}{2} x^T Q x + c^T x subject to A x \leq b, where Q is positive semidefinite, a logarithmic barrier is introduced to manage the inequalities. The barrier is constructed by introducing slack variables s = b - A x > 0 and adding the term -\mu \sum_i \log(s_i) to the objective, where \mu > 0 is a barrier parameter that decreases over iterations. This yields the barrier subproblem: minimize \frac{1}{2} x^T Q x + c^T x - \mu \sum_{i=1}^m \log(s_i), subject to s = b - A x. The barrier term grows to positive infinity as any s_i approaches zero, effectively enforcing the constraints. Each subproblem is solved approximately using Newton's method, which computes search directions by solving a linear system derived from the gradient and Hessian of the barrier objective. The solution trajectory for decreasing \mu traces the central path, a curve of optimal points for the barrier subproblems that converges to the optimal solution of the original QP as \mu \to 0. Primal-dual interior-point methods enhance efficiency by simultaneously optimizing primal and dual variables, solving the Karush-Kuhn-Tucker (KKT) system augmented with centering directions to follow the central path. These methods compute a predictor step to approximate the path and a corrector step to recenter, using dual slack variables to maintain complementarity. The predictor-corrector variant, in particular, adaptively estimates the centering parameter for faster convergence. Under suitable assumptions, such as strict feasibility and bounded solution sets, primal-dual interior-point methods achieve polynomial-time convergence, requiring O(\sqrt{n} \log(1/\varepsilon)) iterations to reach an \varepsilon-optimal solution, where n denotes the problem dimension. Each iteration involves solving a linear system of size comparable to the number of variables and constraints, making the methods practical for large-scale problems. Interior-point methods were popularized in the 1980s by Karmarkar's projective algorithm for linear programming, with theoretical foundations for convex programs, including quadratic programming, developed by Nesterov and Nemirovski using self-concordant barriers; adaptations such as Mehrotra's predictor-corrector approach were advanced in the early 1990s.
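A minimal log-barrier sketch for an inequality-constrained convex QP, assuming invented data and a strictly feasible starting point; each outer iteration Newton-solves the barrier subproblem, then shrinks \mu. This is the basic barrier scheme, not the primal-dual predictor-corrector variant used in production solvers.

```python
import numpy as np

def barrier_qp(Q, c, A, b, x, mu=1.0, shrink=0.2, tol=1e-8):
    """Minimize 0.5 x^T Q x + c^T x  s.t.  A x <= b  via a logarithmic barrier.
    `x` must be strictly feasible (A x < b); Q is assumed positive semidefinite."""
    m = A.shape[0]

    def phi(y):   # barrier objective for the current mu; +inf outside the interior
        s = b - A @ y
        return np.inf if s.min() <= 0 else 0.5 * y @ Q @ y + c @ y - mu * np.log(s).sum()

    while m * mu > tol:                          # m*mu bounds the suboptimality
        for _ in range(100):                     # Newton's method on the subproblem
            s = b - A @ x
            grad = Q @ x + c + mu * A.T @ (1.0 / s)
            hess = Q + mu * A.T @ ((1.0 / s**2)[:, None] * A)
            dx = np.linalg.solve(hess, -grad)
            if grad @ dx > -1e-12:               # Newton decrement tiny: subproblem solved
                break
            t = 1.0                              # backtracking keeps strict feasibility
            while phi(x + t * dx) > phi(x) + 0.25 * t * (grad @ dx):
                t *= 0.5
            x = x + t * dx
        mu *= shrink
    return x

# Invented example: minimize 0.5||x||^2 - [1,2].x  s.t.  x1 + x2 <= 1, x >= 0
Q, c = np.eye(2), np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])
x = barrier_qp(Q, c, A, b, x=np.array([0.25, 0.25]))
print("solution ~", x)   # expected near (0, 1): the unconstrained optimum (1, 2) is cut off
```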

Active-Set Methods

Active-set methods solve quadratic programming problems by iteratively refining an estimate of the active constraints—those inequalities that bind at the optimal solution—while maintaining feasibility with respect to the current working set of constraints. The approach transforms the original problem into a sequence of equality-constrained subproblems by treating the working set as equalities and ignoring the remaining inequalities temporarily. This strategy leverages the structure of the Karush-Kuhn-Tucker (KKT) conditions to guide updates, ensuring progress toward optimality by adjusting the working set based on diagnostic information from each subproblem solution.

The core algorithm initializes with a feasible point and an initial working set, often empty or based on a simple feasible solution. It then solves the equality-constrained QP subproblem on the current working set to obtain a search direction and associated Lagrange multipliers. Specifically, the subproblem minimizes the objective subject to the working-set constraints treated as equalities, yielding a direction d and multipliers \lambda. If the multipliers for all active inequalities are non-negative and the direction is zero, optimality is achieved. Otherwise, if a multiplier \lambda_j < 0 for an active constraint, that constraint is removed from the working set (dropped), as it indicates the solution would improve by relaxing it. To compute the step, a line search along d is performed, limited by the smallest positive step length that keeps inactive constraints feasible; if the step is less than one, the blocking constraint is added to the working set. This process repeats, solving reduced equality-constrained QPs as a subroutine until convergence.

For computational efficiency, particularly in updating solutions between iterations, block pivoting techniques maintain a factorization of the reduced Hessian or the KKT matrix without full recomputation. When adding or removing constraints, block elimination or low-rank updates adjust the basis, allowing reuse of prior factorizations to avoid the O(n^3) cost of solving from scratch in each iteration; this is especially beneficial for medium-scale problems where the active set changes gradually. In convex quadratic programming, active-set methods terminate in a finite number of steps at the global optimum under nondegeneracy assumptions, as each iteration strictly decreases the objective or detects infeasibility. For non-convex cases, convergence is to a local optimum satisfying second-order necessary conditions, though global guarantees do not hold. These methods were developed in the 1970s, with foundational numerically stable algorithms introduced by Gill and Murray, and have been implemented in influential software such as QPOPT, which applies them to dense linear and quadratic programs.
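A didactic primal active-set solver following the loop just described, assuming Q is positive definite, constraints A x \leq b, and a feasible start; the example data are invented. This sketch omits the factorization updates and degeneracy safeguards of production codes.

```python
import numpy as np

def active_set_qp(Q, c, A, b, x, max_iter=100):
    """Minimize 0.5 x^T Q x + c^T x  s.t.  A x <= b  (Q positive definite, x feasible)."""
    W = [i for i in range(len(b)) if abs(A[i] @ x - b[i]) < 1e-10]  # initial working set
    for _ in range(max_iter):
        g = Q @ x + c
        Aw = A[W] if W else np.zeros((0, len(x)))
        # Equality-constrained subproblem: min 0.5 d^T Q d + g^T d  s.t.  Aw d = 0
        k = len(W)
        K = np.block([[Q, Aw.T], [Aw, np.zeros((k, k))]])
        sol = np.linalg.solve(K, np.concatenate([-g, np.zeros(k)]))
        d, lam = sol[:len(x)], sol[len(x):]
        if np.linalg.norm(d) < 1e-10:
            if k == 0 or lam.min() >= -1e-10:
                return x, W                       # KKT satisfied: optimal
            W.pop(int(np.argmin(lam)))            # drop constraint with most negative multiplier
        else:
            # Largest step along d that keeps the inactive constraints feasible
            alpha, blocking = 1.0, None
            for i in range(len(b)):
                ai_d = A[i] @ d
                if i not in W and ai_d > 1e-12:
                    step = (b[i] - A[i] @ x) / ai_d
                    if step < alpha:
                        alpha, blocking = step, i
            x = x + alpha * d
            if blocking is not None:
                W.append(blocking)                # blocking constraint becomes active
    return x, W

# Invented example: min x1^2 + x2^2 - 2 x1 - 6 x2  s.t.  x1+x2 <= 2, -x1+2x2 <= 2, x >= 0
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -6.0])
A = np.array([[1.0, 1.0], [-1.0, 2.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([2.0, 2.0, 0.0, 0.0])
x_opt, W = active_set_qp(Q, c, A, b, x=np.zeros(2))
print("optimum:", x_opt, "active set:", W)   # expected x = (2/3, 4/3)
```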

Trust-Region-Reflective Methods

Trust-region-reflective methods solve quadratic programs, particularly those with bound constraints or simple inequalities, by iteratively minimizing a quadratic model of the objective (for a QP, the objective itself) within a trust region around the current iterate. The algorithm reflects the search direction across constraint boundaries to maintain feasibility, using a trust-region approach to compute steps. For bound-constrained QPs, it applies reflections to handle violations and solves subproblems via preconditioned conjugate gradient methods, which exploit the positive semidefiniteness of Q for efficiency. This method is effective for large-scale problems with sparse structure, as implemented in software like MATLAB's quadprog, and converges globally for convex cases under suitable conditions.
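A brief example using SciPy's trust-region-reflective implementation ('trf' in scipy.optimize.lsq_linear) on a bound-constrained least-squares problem, which is a QP with Q = A^T A; the data are invented.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Bound-constrained least squares: min ||A x - b||^2  s.t.  0 <= x <= 1,
# a QP with Q = A^T A, solved here by the trust-region-reflective method.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

result = lsq_linear(A, b, bounds=(0.0, 1.0), method="trf")
print("solution:", result.x)
print("cost:", result.cost, "| active bounds:", result.active_mask)
```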

Computational Complexity

Convex Quadratic Programming

Convex quadratic programming (QP) instances, characterized by a convex quadratic objective function subject to linear constraints, admit polynomial-time algorithms for finding the global optimum. This establishes that convex QP belongs to the complexity class P. Like linear programming, convex QP lacks a known strongly polynomial-time algorithm as of 2025, meaning the running time depends on the input's bit length rather than solely on the problem dimensions. The ellipsoid method and interior-point methods provide polynomial-time solvability for convex QP, the latter requiring O(n^{3.5} L) arithmetic operations, where n denotes the number of variables and L the total bit length of the input data. These approaches ensure theoretical guarantees for exact solutions within this bound, leveraging the self-concordant barrier functions inherent to the quadratic structure. Interior-point methods, in particular, demonstrate near-optimal efficiency by performing each iteration in O(n^3) arithmetic operations, typically requiring O(\sqrt{n} L) iterations overall to achieve the stated complexity.

Non-Convex Quadratic Programming

Non-convex quadratic programming refers to the case where the objective function's matrix Q is indefinite, meaning it has both positive and negative eigenvalues. This indefiniteness introduces the possibility of multiple local minima and saddle points, making the problem significantly more challenging than in the convex case. Unlike convex quadratic programming, where a global minimum is guaranteed under suitable constraints, non-convex instances do not admit polynomial-time algorithms for exact global solutions unless P = NP. Specifically, even when Q has only one negative eigenvalue, the problem of minimizing the quadratic function subject to linear constraints is NP-hard.

Local minima in non-convex quadratic programming are characterized by second-order optimality conditions that account for the indefiniteness of Q. At a candidate point, the first-order condition requires the negative gradient -(Q x + c) to lie within the normal cone of the feasible set, while the second-order necessary condition demands that Q be positive semidefinite on the critical cone (the cone of tangent directions to the feasible set along which the objective does not increase to first order). However, since Q is not positive semidefinite globally, these conditions can hold at multiple points, leading to several local minima separated by regions of descent. Sufficient second-order conditions, such as strict positive definiteness of Q on the critical cone, ensure a strict local minimum with quadratic growth away from the point, but verifying them requires solving challenging eigenvalue problems. Active-set methods can identify such local solutions efficiently in practice, though they do not guarantee global optimality.

Approximation strategies for non-convex quadratic programming often rely on convex relaxations, particularly semidefinite programming (SDP) lifts, to obtain bounded suboptimal solutions. For certain combinatorial cases, such as the maximum cut problem—which can be formulated as maximizing an indefinite quadratic over binary variables—SDP relaxations provide strong approximation guarantees. The seminal Goemans-Williamson algorithm uses an SDP relaxation of the max-cut problem, followed by randomized hyperplane rounding, to achieve an expected approximation ratio of approximately 0.878 relative to the optimal value, improving upon the 0.5 ratio from simpler methods like random assignment. This approach exploits the structure of the binary quadratic objective to yield near-optimal cuts in polynomial time, though extensions to general continuous non-convex quadratics remain less tight, often providing only constant-factor approximations in specific structured instances.

In practice, global solvers for non-convex quadratic programming frequently employ branch-and-bound frameworks, which recursively partition the feasible region and solve convex relaxations at each node to prune branches based on bound tightening. These methods, enhanced by techniques like spatial branching and reformulation-linearization, can handle moderate-sized problems effectively but exhibit worst-case exponential running time due to the need to explore a potentially vast number of branches in highly non-convex landscapes. As of 2025, quantum-inspired classical solvers, drawing on annealing principles and sampling-based methods, have emerged to accelerate these processes for large-scale instances, offering heuristic improvements in solution quality and speed for problems like unconstrained binary quadratics; however, they retain exponential scaling in the worst case and do not resolve the fundamental NP-hardness.
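A sketch of the Goemans-Williamson pipeline on a tiny graph, assuming CVXPY with its bundled SCS solver is available; the graph and seed are invented, and the rounding step is repeated a few times to keep the best cut.

```python
import numpy as np
import cvxpy as cp

# Tiny unweighted graph: a 5-cycle (edges invented for illustration)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
n = 5

# SDP relaxation of max-cut: maximize sum over edges of (1 - X_ij)/2
# subject to X PSD with unit diagonal (X relaxes the outer product x x^T, x in {-1,1}^n)
X = cp.Variable((n, n), PSD=True)
objective = cp.Maximize(sum((1 - X[i, j]) / 2 for i, j in edges))
prob = cp.Problem(objective, [cp.diag(X) == 1])
prob.solve(solver=cp.SCS)

# Factor X ~ V^T V and round with random hyperplanes
w, U = np.linalg.eigh(X.value)
V = (U * np.sqrt(np.clip(w, 0, None))).T      # column i is the unit vector for vertex i

rng = np.random.default_rng(1)
best = -1
for _ in range(20):
    r = rng.standard_normal(n)
    signs = np.sign(V.T @ r)                   # side of the random hyperplane
    cut = sum(1 for i, j in edges if signs[i] != signs[j])
    best = max(best, cut)

print("SDP upper bound: %.3f | best rounded cut: %d (optimum for C5 is 4)"
      % (prob.value, best))
```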

Variants and Extensions

Mixed-Integer Quadratic Programming

Mixed-integer quadratic programming (MIQP) is a generalization of quadratic programming where a subset of the decision variables are constrained to be integers, introducing discreteness that significantly increases problem complexity. This formulation arises naturally in applications requiring indivisible choices, such as selecting discrete assets in portfolio optimization problems. The standard MIQP problem seeks to minimize the objective function \frac{1}{2} \mathbf{x}^T Q \mathbf{x} + \mathbf{c}^T \mathbf{x}, where \mathbf{x} \in \mathbb{R}^n includes both continuous and integer components, subject to linear constraints A\mathbf{x} = \mathbf{b}, G\mathbf{x} \leq \mathbf{h}, and bounds \mathbf{l} \leq \mathbf{x} \leq \mathbf{u}, with integrality enforced on selected variables (e.g., \mathbf{x}_I \in \mathbb{Z}^{|I|}) for some I \subseteq \{1, \dots, n\}. The matrix Q may be positive semidefinite (convex case) or indefinite (nonconvex case), but the presence of integrality constraints renders the problem NP-hard even when Q is positive semidefinite, due to the combinatorial explosion from enumerating feasible integer points. Solving MIQPs typically relies on branch-and-bound algorithms, which recursively partition the variable space and solve continuous quadratic programming relaxations at each node to obtain lower bounds, pruning branches where these bounds exceed known upper bounds from feasible solutions. These relaxations treat integer variables as continuous, reducing to standard QP subproblems solvable via interior-point or active-set methods, though nonconvexity in Q requires techniques like spatial branching or reformulation-linearization to ensure valid bounds. A special case is quadratic 0-1 programming, where all variables are binary (\mathbf{x} \in \{0,1\}^n), which models numerous combinatorial problems such as the maximum cut problem in graphs, where the objective encodes pairwise interactions via the quadratic terms.

MIQP emerged in the mid-1960s as a tool for capital budgeting and project selection problems involving indivisible projects, building on Markowitz's mean-variance framework by incorporating integrality constraints to handle realistic asset allocations. Early formulations appeared in Weingartner's work on interrelated project selection, where quadratic objectives captured risk-variance trade-offs under budget limits. By the 1970s, these models gained traction in finance for optimizing portfolios with transaction lot sizes or cardinality constraints. Modern commercial solvers like CPLEX and Gurobi can handle instances with thousands of variables and constraints, leveraging techniques such as Benders decomposition or cutting-plane methods to exploit problem structure and parallelize the branch-and-bound tree for scalability. The continuous QP relaxation provides a foundational bound, but integrality gaps often necessitate advanced cutting planes and heuristics to close them efficiently.
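A toy branch-and-bound sketch for a convex 0-1 MIQP, using SciPy's bounded L-BFGS-B to solve each continuous relaxation; the problem data and tolerances are invented, and real solvers add presolve, cuts, and heuristics on top of this skeleton.

```python
import numpy as np
from scipy.optimize import minimize

# Convex 0-1 MIQP: minimize 0.5 x^T Q x + c^T x  with  x in {0,1}^n  (invented data)
Q = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
c = np.array([-3.0, -2.0, -4.0])
n = len(c)
f = lambda x: 0.5 * x @ Q @ x + c @ x
grad = lambda x: Q @ x + c

best = {"val": np.inf, "x": None}

def branch_and_bound(lo, hi):
    # Continuous relaxation over the box [lo, hi] gives a valid lower bound
    res = minimize(f, 0.5 * (lo + hi), jac=grad, method="L-BFGS-B",
                   bounds=list(zip(lo, hi)))
    if res.fun >= best["val"] - 1e-9:
        return                                    # prune: bound no better than incumbent
    frac = np.abs(res.x - np.round(res.x))
    if frac.max() < 1e-6:                         # integral relaxation: new incumbent
        best["val"], best["x"] = res.fun, np.round(res.x)
        return
    j = int(np.argmax(frac))                      # branch on the most fractional variable
    for v in (0.0, 1.0):                          # fix x_j = 0, then x_j = 1
        lo2, hi2 = lo.copy(), hi.copy()
        lo2[j] = hi2[j] = v
        branch_and_bound(lo2, hi2)

branch_and_bound(np.zeros(n), np.ones(n))
print("optimal binary x:", best["x"], "objective:", best["val"])

# Brute-force check over all 2^n points (expected minimum: x = (1,0,1), value -4)
bits_list = [(i, j, k) for i in (0, 1) for j in (0, 1) for k in (0, 1)]
print("brute force:", min((f(np.array(bits, float)), bits) for bits in bits_list))
```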

Quadratically Constrained Quadratic Programming

Quadratically constrained quadratic programming (QCQP) generalizes the quadratic programming problem by incorporating quadratic functions not only in the objective but also in the constraints. The general form of a QCQP is to minimize \frac{1}{2} x^T Q_0 x + c_0^T x over x \in \mathbb{R}^n, subject to the quadratic inequalities \frac{1}{2} x^T Q_i x + c_i^T x \leq d_i for i = 1, \dots, m, where Q_0, Q_i \in \mathbb{R}^{n \times n} are symmetric matrices, c_0, c_i \in \mathbb{R}^n, and d_i \in \mathbb{R}. This formulation, which emerged as a natural extension of quadratic programs with linear constraints, captures a broad class of nonconvex optimization problems arising in engineering and combinatorial optimization. Unlike convex QPs with linear constraints, general QCQPs are NP-hard due to the potential nonconvexity of both the objective and the feasible region.

The nonconvexity in QCQPs stems primarily from quadratic constraints where one or more Q_i are indefinite, leading to nonconvex feasible sets that can include disconnected components or local minima. To address this, a seminal convex relaxation transforms the QCQP into a semidefinite program (SDP) by introducing a matrix variable X = x x^T and relaxing the nonconvex rank-one constraint \operatorname{rank}(X) = 1 to the convex condition X \succeq 0. This SDP relaxation, originally proposed by Shor, introduces the lifted variable X (relaxing X = x x^T to X \succeq x x^T) and linearizes the quadratic constraints as \frac{1}{2} \operatorname{Tr}(Q_i X) + c_i^T x \leq d_i, with the full relaxation often formulated using the homogeneous matrix \begin{bmatrix} 1 & x^T \\ x & X \end{bmatrix} \succeq 0. This yields a semidefinite program solvable by interior-point methods and often produces near-optimal solutions, with the Lagrangian dual providing certificates of optimality in convex cases.

A prominent special case of QCQP is the trust-region subproblem, which minimizes a quadratic objective subject to a single spherical constraint \|x\|^2 \leq \Delta^2, arising frequently in nonlinear optimization algorithms like trust-region methods. This one-constraint instance, while still nonconvex in general, admits efficient exact solutions through eigenvalue decompositions or relaxations that achieve rank-one optimality, with the global minimizer lying on the boundary except in trivial cases. Trust-region subproblems highlight how structured QCQPs can be solved globally despite nonconvexity. QCQPs play a key role in robust optimization, where quadratic constraints model uncertainty sets such as ellipsoids, ensuring solutions remain feasible under parameter perturbations; for instance, robust counterparts of linear programs with ellipsoidal uncertainty often reduce to QCQPs. Exact solvability via relaxation holds under specific conditions, such as when the constraint matrices Q_i exhibit low-rank structure (e.g., rank at most two), allowing reformulation into equivalent problems with aggregate constraints where the relaxation attains rank-one optimality and recovers the global solution. These properties make low-rank QCQPs particularly amenable to exact methods in applications like sensor network localization. Lagrangian duality further aids in analyzing these relaxations by providing tight bounds for non-exact cases.
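A sketch of an exact trust-region-subproblem solver via the eigendecomposition of Q and bisection on the secular equation \|x(\lambda)\| = \Delta, assuming the generic (non-degenerate) case; the data are invented.

```python
import numpy as np

def trust_region_subproblem(Q, c, delta, tol=1e-10):
    """Globally minimize 0.5 x^T Q x + c^T x  s.t.  ||x|| <= delta.
    Handles the generic case; the degenerate 'hard case' (c orthogonal to the
    eigenspace of the smallest eigenvalue) is not treated in this sketch."""
    w, V = np.linalg.eigh(Q)          # Q = V diag(w) V^T
    g = V.T @ c

    def x_of(lmbda):                  # x(lambda) = -(Q + lambda I)^{-1} c in the eigenbasis
        return V @ (-g / (w + lmbda))

    # Interior solution if Q is positive definite and the Newton point fits
    if w.min() > 0 and np.linalg.norm(x_of(0.0)) <= delta:
        return x_of(0.0)

    # Otherwise the minimizer is on the boundary: bisect lambda > max(0, -lambda_min)
    lo = max(0.0, -w.min()) + 1e-12
    hi = lo + 1.0
    while np.linalg.norm(x_of(hi)) > delta:   # grow until the norm drops below delta
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(x_of(mid)) > delta:
            lo = mid                  # norm too large: increase lambda
        else:
            hi = mid
    return x_of(0.5 * (lo + hi))

# Invented indefinite example
Q = np.array([[1.0, 0.0], [0.0, -2.0]])   # one negative eigenvalue: nonconvex objective
c = np.array([1.0, 0.5])
x = trust_region_subproblem(Q, c, delta=1.0)
print("minimizer:", x, "| norm:", np.linalg.norm(x))   # lies on the boundary
```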

Applications

In Optimization and Control

Quadratic programming (QP) plays a pivotal role in portfolio optimization, particularly through the mean-variance framework introduced by Harry Markowitz in 1952. In this model, investors seek to allocate assets to minimize portfolio variance (risk) for a given expected return, subject to linear constraints such as budget limits and non-negativity. The problem is formulated as a convex QP, where the objective function is a quadratic form representing variance, and the expected return acts as a linear constraint, enabling efficient computation of the efficient frontier for asset allocation. In control, QP is essential for model predictive control (MPC), a receding-horizon strategy used in engineering systems to optimize future trajectories while respecting constraints on states and inputs. At each time step, MPC solves a finite-horizon optimal control problem, typically formulated as a QP when the system dynamics are linear and the cost quadratic, such as in tracking or regulation tasks for processes like chemical plants or automotive systems. This approach ensures stability and feasibility by incorporating constraints directly into the optimization, with convex QP guaranteeing global optima and real-time solvability.

Resource allocation problems, such as economic dispatch in power systems, also leverage QP to minimize quadratic cost functions under linear constraints like power balance and generation limits. In economic dispatch, the goal is to allocate output among generators to meet demand at minimum cost, where fuel costs are often modeled quadratically, allowing QP solvers to handle transmission losses and ramp rates efficiently. This application has been standard since the 1970s, providing scalable solutions for grid operations. In logistics, QP is used to solve quadratic knapsack problems, where the objective includes quadratic terms for interactions between items, subject to linear constraints, aiding in efficient loading or resource assignment. QP also finds application in energy management for power-split control in plug-in hybrid electric vehicles (PHEVs), optimizing battery charge/discharge and engine operation to minimize fuel consumption and emissions under linear constraints on power demand and battery state of charge. QP solvers have been integrated into MATLAB's Model Predictive Control Toolbox since its release in 1995, facilitating real-time implementation of constrained MPC in industrial applications by leveraging built-in active-set and interior-point algorithms for QP resolution.
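A small mean-variance sketch in CVXPY, with an invented covariance matrix and return vector: minimize portfolio variance subject to a target expected return, a budget constraint, and no short selling.

```python
import numpy as np
import cvxpy as cp

# Invented data for three assets
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.08, 0.03],
                  [0.01, 0.03, 0.12]])   # covariance (risk) matrix
mu = np.array([0.06, 0.09, 0.12])        # expected returns
target = 0.08                            # required portfolio return

w = cp.Variable(3)
risk = cp.quad_form(w, Sigma)            # portfolio variance: w^T Sigma w
problem = cp.Problem(cp.Minimize(risk),
                     [mu @ w >= target,   # expected return constraint
                      cp.sum(w) == 1,     # budget: weights sum to one
                      w >= 0])            # no short selling
problem.solve()
print("weights:", np.round(w.value, 4), "| variance:", problem.value)
```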

In Machine Learning and Statistics

Quadratic programming plays a central role in support vector machines (SVMs), where the dual formulation of the maximum-margin classifier seeks to maximize the margin between classes by solving a quadratic program. The primal SVM problem minimizes the hinge loss with an L2 regularization term on the weights, but the dual form, derived via Lagrange multipliers, becomes \max_{\boldsymbol{\alpha}} \sum_{i=1}^n \alpha_i - \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j y_i y_j K(\mathbf{x}_i, \mathbf{x}_j) subject to 0 \leq \alpha_i \leq C for all i and \sum_{i=1}^n \alpha_i y_i = 0, where \boldsymbol{\alpha} are the Lagrange multipliers, y_i are the labels, C is the regularization parameter, and K is a kernel function. This QP structure, popularized by Vapnik and colleagues in the 1990s, enables efficient classification and regression by identifying support vectors that define the decision boundary. The kernel trick extends SVMs to non-linear problems by replacing inner products with kernels like the Gaussian radial basis function, K(\mathbf{x}_i, \mathbf{x}_j) = \exp(-\gamma \|\mathbf{x}_i - \mathbf{x}_j\|^2), implicitly mapping data to higher dimensions without explicit computation, while preserving the QP solvability in the dual. To address scalability, the sequential minimal optimization (SMO) algorithm decomposes the QP into subproblems involving only two variables at a time, analytically solving them to update multipliers iteratively. Modern implementations of linear SVMs leveraging decomposition methods like SMO scale to datasets with millions of samples, making them viable for large-scale classification tasks as of 2025.

In sparse regression, the lasso (least absolute shrinkage and selection operator) can be reformulated as a quadratic program to enforce L1 penalties for variable selection. The standard lasso minimizes \|\mathbf{y} - X\boldsymbol{\beta}\|^2 + \lambda \|\boldsymbol{\beta}\|_1, but by introducing auxiliary variables u_i \geq |\beta_i| for each coefficient, it becomes the QP \min_{\boldsymbol{\beta}, \mathbf{u}} \|\mathbf{y} - X\boldsymbol{\beta}\|^2 \quad \text{subject to} \quad \sum_{i=1}^p u_i \leq t, \quad u_i \geq \beta_i, \quad u_i \geq -\beta_i \quad \forall i, where t controls sparsity, transforming the non-smooth L1 term into linear constraints. This QP equivalence facilitates the use of standard solvers for sparse estimation, promoting interpretable models in high-dimensional settings by shrinking irrelevant coefficients to zero. Kriging regression, equivalent to Gaussian process regression in spatial statistics, often involves quadratic programming when predictions must satisfy positivity or inequality constraints, such as non-negativity or order relations in spatial data. In constrained kriging, the optimal weights for interpolating unobserved points minimize the kriging variance subject to unbiasedness and additional linear constraints, formulated as a QP that ensures physically meaningful estimates under noisy observations. For instance, sequential quadratic programming can enforce multi-point order constraints across cutoffs in indicator kriging, reducing estimation errors in resource modeling while accounting for noise in the covariance structure. Constrained least squares serves as a foundational building block in these statistical methods, where quadratic programming enforces bounds or equality constraints on coefficients to align with domain knowledge.
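A sketch of the lasso-to-QP reformulation above in CVXPY, using invented random data; the constraint form with auxiliary variables u mirrors the text, and the sparsity budget t is a hypothetical choice.

```python
import numpy as np
import cvxpy as cp

# Invented regression data with a sparse ground truth
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(50)

# Lasso as a QP: min ||y - X b||^2  s.t.  sum(u) <= t,  -u <= b <= u
t = 3.0                        # sparsity budget (hypothetical choice)
b = cp.Variable(8)
u = cp.Variable(8)
problem = cp.Problem(cp.Minimize(cp.sum_squares(y - X @ b)),
                     [cp.sum(u) <= t, b <= u, -u <= b])
problem.solve()
print("estimated coefficients:", np.round(b.value, 3))   # near-sparse, close to beta_true
```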

Software Tools

Commercial and Open-Source Solvers

Several commercial solvers are available for quadratic programming (QP), offering robust support for both convex and non-convex cases, often with extensions to mixed-integer formulations. Gurobi, founded in 2008, provides comprehensive capabilities for solving QP and mixed-integer quadratic programming (MIQP) problems using an interior-point (barrier) method as its core algorithm for continuous QPs. It includes free academic licenses for students, faculty, and researchers at accredited institutions, enabling unrestricted use for teaching and non-commercial research. The MOSEK Optimization Suite is another prominent commercial solver that handles convex QP and quadratically constrained QP (QCQP) problems efficiently, leveraging an interior-point conic solver that supports parallelization across multiple CPU cores to accelerate computations on large-scale instances. Among open-source alternatives, the OSQP (Operator Splitting Quadratic Program) solver, introduced in 2018, specializes in large-scale convex QP using the alternating direction method of multipliers (ADMM), an operator-splitting technique that enables warm-starting and a fixed per-iteration cost, making it particularly suitable for applications in embedded systems. Another recent open-source solver is PROXQP, which employs a primal-dual proximal augmented Lagrangian algorithm for efficient solving of QPs, showing strong performance in modern benchmarks. For dense QP problems, the CVXOPT library provides an open-source implementation based on a primal-dual interior-point method, integrated within Python ecosystems for smaller to medium-sized instances. Performance comparisons of these solvers often utilize standard test sets such as the qpbenchmark collection, where Gurobi demonstrates superior speed as of 2025; for example, version 13.0 solves many convex QPs with up to 1000 variables in under a second on modern hardware. OSQP excels in sparse, large-scale scenarios but may require more iterations for dense cases compared to commercial options like Gurobi and MOSEK.
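A minimal OSQP usage sketch with invented data, following the solver's standard form: minimize \frac{1}{2} x^T P x + q^T x subject to l \leq A x \leq u, with sparse matrices.

```python
import numpy as np
import scipy.sparse as sp
import osqp

# OSQP standard form: min 0.5 x^T P x + q^T x  s.t.  l <= A x <= u  (invented data)
P = sp.csc_matrix([[4.0, 1.0],
                   [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sp.csc_matrix([[1.0, 1.0],
                   [1.0, 0.0],
                   [0.0, 1.0]])
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

solver = osqp.OSQP()
solver.setup(P, q, A, l, u, verbose=False)
result = solver.solve()
print("status:", result.info.status, "| x =", result.x)
```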

Integration with Programming Languages

Quadratic programming (QP) has been integrated into numerous programming languages through specialized libraries and solver interfaces, allowing practitioners to model and solve QP problems ranging from convex to non-convex formulations. These integrations often leverage high-level abstractions for ease of use in modeling and prototyping, while also providing low-level access for optimization in embedded or real-time systems. Commercial solvers like Gurobi and CPLEX offer multi-language APIs that support quadratic objectives and constraints, enabling seamless incorporation into workflows across Python, C++, Java, and more.

In Python, CVXPY serves as a Python-embedded modeling language for convex optimization, including QP, where users define problems using natural mathematical syntax and interface with open-source solvers such as OSQP (an operator-splitting method for sparse convex QPs) or SCS (a first-order solver for conic problems). CVXOPT provides a lower-level interface for convex QP solving via interior-point methods, supporting dense and sparse matrices directly in Python code. The qpsolvers library unifies access to multiple solvers, including wrappers for qpOASES (an online active-set solver optimized for sequences of QPs in real-time applications) and quadprog (a MATLAB-inspired solver), facilitating solver benchmarking and selection without changing code. MATLAB's Optimization Toolbox includes the quadprog function, which solves general QPs with linear constraints using active-set or interior-point algorithms, and supports large-scale problems through sparse matrix handling. This built-in capability allows direct formulation in MATLAB scripts, with options for warm starts and custom preconditioners to enhance efficiency.

For C++, where performance is critical, QuadProgpp implements the Goldfarb-Idnani dual active-set method for strictly convex QPs with bound and linear constraints, offering a lightweight, header-only solution. OOQP delivers an object-oriented framework for QPs based on primal-dual interior-point methods, designed for reuse and extension in larger software systems. The qpOASES library, implemented in C++, employs an online active-set strategy tailored for sequences of related QPs, achieving real-time performance in model predictive control with homotopy-based warm starts. ALGLIB provides a cross-platform QP solver in C++ (with C# and Java ports) that handles both convex and non-convex cases with dense or sparse data. Julia's JuMP package acts as a modeling layer for optimization, supporting formulation with quadratic objectives and interfacing to solvers like Gurobi for non-convex QPs or OSQP for sparse convex problems, leveraging Julia's just-in-time compilation for speed. In R, the quadprog package implements the Goldfarb-Idnani method for strictly convex QPs, enabling statistical applications like portfolio optimization with simple function calls. These language-specific tools emphasize interoperability, with many open-source options licensed under permissive terms (e.g., Apache 2.0 for OSQP) to support academic and industrial use.
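A short qpsolvers sketch illustrating the unified interface described above, with invented data; the solver argument selects any installed backend (here OSQP) without changing the modeling code.

```python
import numpy as np
from qpsolvers import solve_qp

# min 0.5 x^T P x + q^T x  s.t.  G x <= h,  A x = b  (invented data)
P = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([1.0, 1.0])
G = np.array([[-1.0, 0.0], [0.0, -1.0]])   # x >= 0 written as -x <= 0
h = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x = solve_qp(P, q, G, h, A, b, solver="osqp")
print("solution:", x)
```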
