Quadratically constrained quadratic program

A quadratically constrained quadratic program (QCQP) is an optimization problem in which both the objective function and the constraints are quadratic functions of the decision variables, generalizing quadratic programming by incorporating nonlinear quadratic constraints. The standard mathematical formulation of a QCQP is to minimize \frac{1}{2} \mathbf{x}^T P_0 \mathbf{x} + \mathbf{q}_0^T \mathbf{x} + r_0 subject to \frac{1}{2} \mathbf{x}^T P_i \mathbf{x} + \mathbf{q}_i^T \mathbf{x} + r_i \leq 0 for i = 1, \dots, m, and possibly linear equality constraints A\mathbf{x} = \mathbf{b}, where P_i are symmetric matrices, \mathbf{q}_i are vectors, and r_i are scalars. A QCQP is convex—and thus efficiently solvable in polynomial time—if the objective matrix P_0 and all constraint matrices P_i are positive semidefinite; otherwise, it is generally nonconvex and NP-hard. QCQPs arise in diverse applications, including portfolio optimization in finance (e.g., maximizing returns subject to quadratic risk constraints), optimal power flow in electrical engineering, sensor network localization, and process design in chemical engineering such as pooling and blending problems. They also model combinatorial problems like the max-cut in graph theory and tasks in machine learning and bioinformatics, such as gene classification. Solving nonconvex QCQPs often relies on relaxations, particularly semidefinite programming (SDP) approaches like the Shor relaxation or doubly nonnegative variants, which lift the problem into a higher-dimensional convex form and provide tight bounds, though at increased computational cost. Other methods include Lagrangian relaxations, convex underestimations (e.g., αBB), and global solvers like Gurobi for mixed-integer variants.

Introduction and Formulation

Definition

A quadratically constrained quadratic program (QCQP) is an optimization problem that seeks to minimize (or maximize) a quadratic objective function subject to one or more quadratic constraints. This formulation arises naturally in applications requiring the modeling of nonlinear relationships, such as in control systems, signal processing, and resource allocation, where both the goal and limitations involve second-degree polynomial expressions in the decision variables. QCQPs originated in the mid-20th century as extensions of quadratic programming (QP), which itself gained prominence through early applications in finance and operations research. Building on foundational QP work, such as Harry Markowitz's 1952 mean-variance portfolio optimization model that used quadratic objectives with linear constraints, QCQPs emerged in the 1960s and 1970s to handle more complex scenarios with nonlinear constraints, as optimization theory advanced to address realistic problems in economics, engineering, and agriculture. At its core, a QCQP relies on quadratic forms, which express second-order dependencies through symmetric matrices; for instance, a typical objective involves a term like \mathbf{x}^T Q \mathbf{x} + \mathbf{c}^T \mathbf{x}, where \mathbf{x} is the vector of variables, Q captures quadratic interactions, and \mathbf{c} represents linear coefficients, with similar structures in the constraints. This distinguishes QCQPs from standard QPs, which limit constraints to linear inequalities, and from quadratically constrained linear programs (QCLPs), which pair quadratic constraints with linear objectives, often appearing in problems like norm-bounded approximations.

Mathematical Formulation

The quadratically constrained quadratic program (QCQP) is mathematically formulated as an optimization problem that minimizes a quadratic function in the decision variable subject to an arbitrary number of quadratic inequality constraints, along with possible linear equality constraints. The standard form of the QCQP is given by \begin{align*} \minimize_{x \in \mathbb{R}^n} \quad & \frac{1}{2} x^\top P_0 x + q_0^\top x + r_0 \\ \text{subject to} \quad & \frac{1}{2} x^\top P_i x + q_i^\top x + r_i \leq 0, \quad i = 1, \dots, m, \\ & A x = b, \end{align*} where P_i \in \mathbb{S}^n (the set of n \times n symmetric real matrices) for i = 0, \dots, m, q_i \in \mathbb{R}^n, r_i \in \mathbb{R}, A \in \mathbb{R}^{p \times n}, and b \in \mathbb{R}^p. The factor of \frac{1}{2} on the quadratic terms is a conventional normalization that simplifies derivative computations and matrix inversions in analytical treatments. In the general nonconvex case, the matrices P_i may be indefinite, leading to potentially nonconvex quadratic forms in both the objective and constraints. Common variations of the standard QCQP include the homogeneous QCQP, which omits linear and constant terms to yield the form \minimize x^\top P x subject to x^\top A_i x \leq 0 for i = 1, \dots, m, often arising in applications requiring scale invariance. This homogeneous structure can be derived from the standard form by the homogenization trick, introducing an auxiliary variable z_{n+1} = 1 and lifting to z = [x; 1] \in \mathbb{R}^{n+1}, resulting in \minimize z^\top \tilde{P}_0 z subject to z^\top \tilde{P}_i z \leq 0 for i = 1, \dots, m and z_{n+1}^2 = 1, where \tilde{P}_i = \begin{bmatrix} P_i & \frac{1}{2} q_i \\ \frac{1}{2} q_i^\top & r_i \end{bmatrix}. The standard formulation already accommodates multiple quadratic inequality constraints through the index i = 1, \dots, m. Quadratic equality constraints, such as x^\top A_i x + b_i^\top x + c_i = 0, can be incorporated by expressing each equality as a pair of opposing non-strict inequalities. The inequalities in the standard form are non-strict (\leq); strict inequalities (<) occasionally appear in theoretical contexts but are typically reformulated as non-strict for computational purposes. A QCQP instance is feasible if its constraint set is nonempty, meaning there exists at least one x \in \mathbb{R}^n satisfying all quadratic inequalities and linear equalities, which can be assessed using phase-I optimization techniques that introduce slack variables to minimize constraint violations. The problem admits an optimal solution if the feasible set is nonempty and the objective attains its infimum over this set; for example, positive definiteness of P_0 (i.e., P_0 \succ 0) ensures the objective grows quadratically with \|x\|, providing coercivity, which together with closedness of the feasible set guarantees attainment. Boundedness of the feasible set itself depends on the constraint matrices P_i, such as when they are positive definite and define compact ellipsoidal regions whose intersection is bounded. When P_0 \succeq 0 and all P_i \succeq 0, the QCQP is convex.
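A minimal sketch of this standard form in CVXPY, using randomly generated positive semidefinite data so that the instance is convex; the dimensions, data, and solver choice here are illustrative assumptions rather than part of the formulation:

```python
import cvxpy as cp
import numpy as np

# Sketch of the standard-form convex QCQP
#   minimize    (1/2) x^T P0 x + q0^T x + r0
#   subject to  (1/2) x^T Pi x + qi^T x + ri <= 0,  i = 1..m
#               A x = b
# All data below is randomly generated and purely illustrative.
rng = np.random.default_rng(0)
n, m, p = 5, 3, 2

def random_psd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T  # PSD by construction, so the instance is convex

P0, q0, r0 = random_psd(n), rng.standard_normal(n), 0.0
Ps = [random_psd(n) for _ in range(m)]
qs = [rng.standard_normal(n) for _ in range(m)]
rs = [-1.0] * m                      # negative offsets keep x = 0 strictly feasible (Slater)
A, b = rng.standard_normal((p, n)), np.zeros(p)

x = cp.Variable(n)
objective = 0.5 * cp.quad_form(x, P0) + q0 @ x + r0
constraints = [0.5 * cp.quad_form(x, Pi) + qi @ x + ri <= 0
               for Pi, qi, ri in zip(Ps, qs, rs)]
constraints.append(A @ x == b)

prob = cp.Problem(cp.Minimize(objective), constraints)
prob.solve()                         # CVXPY dispatches to an installed conic solver
print(prob.status, prob.value)
```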

Properties and Classifications

Convexity and Non-convexity

A quadratically constrained quadratic program (QCQP) is classified as convex when the objective function matrix P is positive semidefinite (P \succeq 0) and all constraint matrices A_i for inequality constraints x^T A_i x + 2 b_i^T x + c_i \leq 0 are also positive semidefinite (A_i \succeq 0). Under these conditions, the feasible set is convex, and the problem can be solved efficiently using interior-point methods. Additionally, strong duality holds if Slater's condition is satisfied, meaning there exists a strictly feasible point where all inequality constraints are strictly satisfied. In contrast, non-convex QCQPs arise when at least one of the matrices P or A_i is not positive semidefinite, leading to non-convex feasible regions and potentially multiple local minima. Such instances are computationally challenging, with the problem being NP-hard in general, even when there is only one non-convex quadratic constraint. Mixed cases occur when some constraints are convex (with A_i \succeq 0) while others are non-convex (with A_j not positive semidefinite), resulting in partially convex feasible sets that may still exhibit duality gaps between the primal and dual problems. These gaps imply that Lagrangian duality may not recover the global optimum, complicating certification of solution quality. A key result for QCQPs with a single quadratic constraint is the S-lemma, which provides conditions under which the non-convex problem admits an exact convex semidefinite reformulation. Specifically, for quadratic forms q_a(x) = x^T A x + 2 b^T x + c and q_b(x) = x^T P x + 2 q^T x + r, if there exists \bar{x} such that q_a(\bar{x}) > 0, and if q_b(x) \geq 0 holds whenever q_a(x) \geq 0, then there is \lambda \geq 0 such that q_b(x) - \lambda q_a(x) \geq 0 for all x. This lemma enables exact reformulation and solution via convex optimization when the regularity condition holds. Semidefinite programming relaxations can further handle such non-convex cases by providing tight bounds.
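The positive semidefiniteness test underlying this classification can be carried out numerically on the problem data; the following sketch (the classify_qcqp helper and example matrices are hypothetical, introduced only for illustration) checks eigenvalues up to a tolerance:

```python
import numpy as np

def is_psd(M, tol=1e-9):
    """Check positive semidefiniteness of a symmetric matrix via its spectrum."""
    M = 0.5 * (M + M.T)              # symmetrize to guard against round-off
    return np.linalg.eigvalsh(M).min() >= -tol

def classify_qcqp(P0, Ps, tol=1e-9):
    """Label a QCQP as 'convex' or 'non-convex' from its matrices alone:
    P0 is the objective matrix, Ps the constraint matrices."""
    if is_psd(P0, tol) and all(is_psd(Pi, tol) for Pi in Ps):
        return "convex"
    return "non-convex"

# Example: one indefinite constraint matrix makes the instance non-convex.
P0 = np.eye(2)
Ps = [np.eye(2), np.diag([1.0, -1.0])]   # second matrix has a negative eigenvalue
print(classify_qcqp(P0, Ps))             # -> non-convex
```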

Computational Hardness

The non-convex quadratically constrained quadratic program (QCQP) is NP-hard in general. This hardness follows from the special case of quadratic programming (QP), which minimizes a quadratic objective subject to linear constraints and thus constitutes a QCQP with no quadratic constraints; even when the objective Hessian matrix has exactly one negative eigenvalue, QP remains NP-hard. The proof proceeds via a polynomial-time reduction from the NP-complete exact cover by 3-sets problem, where the quadratic objective encodes the covering condition such that the minimum value is zero if and only if an exact cover exists. QCQPs also subsume other NP-hard problems, such as 0-1 integer quadratic programming, where binary variables x_i \in \{0,1\} are enforced via n quadratic equality constraints x_i^2 = x_i. However, when the QCQP is convex—meaning the objective Hessian is positive semidefinite and all quadratic constraint Hessians are positive semidefinite—the problem reduces to convex optimization and is solvable in polynomial time using interior-point methods. These methods achieve convergence in O(\sqrt{m} \log(1/\epsilon)) iterations, where m is the number of constraints and \epsilon is the desired accuracy, leveraging self-concordant barrier functions for quadratic objectives and constraints. For approximation, certain non-convex QCQP variants, such as maximizing a quadratic form over the boolean hypercube \{-1,1\}^n, admit no constant-factor approximation algorithm unless P=NP. This inapproximability arises from a gap-preserving reduction from the NP-hard MAX-3SAT problem, for which it is NP-hard to distinguish satisfiable instances from those in which at most a 7/8 + \delta fraction of clauses can be satisfied, for any constant \delta > 0.
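To make the 0-1 reduction concrete, each binary requirement can be written as a pair of opposing quadratic inequalities, turning a 0-1 integer quadratic program into a continuous QCQP:

```latex
% A 0-1 integer quadratic program written as a continuous QCQP:
% each binary requirement x_i \in \{0,1\} becomes the quadratic equality
% x_i^2 = x_i, expressed as two opposing non-strict inequalities.
\begin{align*}
\min_{x \in \mathbb{R}^n} \quad & x^\top Q x + c^\top x \\
\text{subject to} \quad & x_i^2 - x_i \leq 0, \quad i = 1, \dots, n, \\
                        & x_i - x_i^2 \leq 0, \quad i = 1, \dots, n.
\end{align*}
% The only real solutions of x_i^2 = x_i are x_i = 0 and x_i = 1, so the
% continuous QCQP is exactly the original 0-1 program.
```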

Solution Approaches

Exact Methods for Convex Cases

Convex quadratically constrained quadratic programs (QCQPs) are tractable because they are convex optimization problems solvable to global optimality using established methods from convex programming. These approaches exploit the positive semidefiniteness of the associated matrices to guarantee convergence to exact solutions, in contrast to non-convex cases where local optima may trap algorithms.

Lagrangian duality plays a central role in solving convex QCQPs exactly. For a convex QCQP of the form \min_x \frac{1}{2} x^T Q_0 x + q_0^T x subject to \frac{1}{2} x^T Q_i x + q_i^T x + r_i \leq 0 for i=1,\dots,m, where each Q_i \succeq 0, the Lagrangian is L(x,\lambda) = \frac{1}{2} x^T Q_0 x + q_0^T x + \sum_{i=1}^m \lambda_i \left( \frac{1}{2} x^T Q_i x + q_i^T x + r_i \right), with \lambda \geq 0. Under Slater's condition—that there exists a strictly feasible point x satisfying all constraints with strict inequality—strong duality holds, meaning the primal and dual optimal values coincide. Optimality is characterized by the Karush-Kuhn-Tucker (KKT) conditions: stationarity \nabla_x L(x^*,\lambda^*) = (Q_0 + \sum_i \lambda_i^* Q_i) x^* + q_0 + \sum_i \lambda_i^* q_i = 0, primal feasibility, dual feasibility \lambda^* \geq 0, and complementarity \lambda_i^* \left( \frac{1}{2} (x^*)^T Q_i x^* + q_i^T x^* + r_i \right) = 0 for each i. These conditions can be solved directly or used to certify solutions in dual-based algorithms.

Interior-point methods extend barrier-based solvers for quadratic programs to handle the quadratic constraints in convex QCQPs. By incorporating logarithmic barrier functions for each inequality, such as \phi(x) = -\sum_i \log \left( -\frac{1}{2} x^T Q_i x - q_i^T x - r_i \right), the problem is converted to an unconstrained minimization over the self-concordant barrier, solvable via Newton steps. The resulting path-following algorithm requires O(\sqrt{m} \log(1/\epsilon)) iterations to achieve \epsilon-accuracy, where m is the number of quadratic constraints, since the logarithmic barrier is self-concordant with parameter proportional to m. Each iteration solves a linear system derived from the Hessian of the barrier-augmented objective, leveraging the positive semidefiniteness for efficient computation. This approach is particularly effective for medium-scale problems with structured sparsity in the Q_i matrices.

Active-set methods address convex QCQPs by iteratively identifying the binding constraints and solving reduced equality-constrained subproblems. These are often implemented via sequential quadratic programming (SQP), where at each step the nonlinear constraints are linearized around the current iterate, yielding a quadratic programming subproblem, typically safeguarded by a trust region. For convex cases, the SQP method converges globally to the optimum, with the active set updated based on the signs of the Lagrange multipliers and constraint violations. Handling quadratic constraints involves solving the QP subproblem exactly using dual active-set strategies, which exploit the structure to avoid full matrix factorizations. The method's efficiency stems from warm-starting the active set across iterations, reducing the number of subproblem solves.

A prominent special case is the trust-region subproblem, a single-constraint QCQP of the form \min_x \frac{1}{2} x^T Q x + q^T x subject to \|x - x_0\|_2 \leq \Delta, which arises in nonlinear optimization. When Q \succeq 0, it is convex and solvable essentially in closed form. Writing g = Q x_0 + q for the gradient at the center, the unconstrained minimizer x_u satisfies Q(x_u - x_0) = -g; if \|x_u - x_0\|_2 \leq \Delta, then x_u is optimal (\lambda = 0). Otherwise, the optimal solution lies on the boundary and satisfies (Q + \lambda I)(x^* - x_0) = -g for some \lambda > 0 such that \|x^* - x_0\|_2 = \Delta. The multiplier \lambda is found by solving a one-dimensional secular equation after an eigendecomposition of Q, a cheap root-finding problem that delivers the solution to the accuracy of the one-dimensional solve.
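A sketch of this procedure for the convex case, assuming Q is symmetric positive semidefinite; the function name, tolerances, and bracketing strategy are illustrative choices, not a reference implementation:

```python
import numpy as np
from scipy.optimize import brentq

def trust_region_subproblem(Q, q, x0, Delta):
    """Solve min 0.5 x^T Q x + q^T x s.t. ||x - x0|| <= Delta for symmetric
    PSD Q, via one eigendecomposition plus a one-dimensional secular equation.
    Illustrative sketch of the convex case only."""
    g = Q @ x0 + q                     # gradient of the objective in d = x - x0
    w, V = np.linalg.eigh(Q)           # Q = V diag(w) V^T
    gt = V.T @ g                       # g expressed in the eigenbasis

    # Interior case: unconstrained minimizer Q d = -g (multiplier lambda = 0).
    if w.min() > 1e-12:
        d = V @ (-gt / w)
        if np.linalg.norm(d) <= Delta:
            return x0 + d

    # Boundary case: find lambda > 0 with ||d(lambda)|| = Delta, where
    # d(lambda) = -(Q + lambda I)^{-1} g; this norm is decreasing in lambda.
    def norm_gap(lam):
        return np.linalg.norm(gt / (w + lam)) - Delta

    lo, hi = 1e-12, 1.0
    if norm_gap(lo) <= 0:              # degenerate case: boundary never exceeded
        return x0 + V @ (-gt / (w + lo))
    while norm_gap(hi) > 0:            # double until the root is bracketed
        hi *= 2.0
    lam = brentq(norm_gap, lo, hi)     # solve the secular equation in 1-D
    return x0 + V @ (-gt / (w + lam))

# Example: the unconstrained minimizer (4, 0) lies outside the unit radius,
# so the solution sits on the boundary at approximately (1, 0).
Q, q, x0 = np.eye(2), np.array([-4.0, 0.0]), np.zeros(2)
print(trust_region_subproblem(Q, q, x0, 1.0))
```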

Relaxation Techniques

Relaxation techniques for non-convex quadratically constrained quadratic programs (QCQPs) aim to approximate the feasible set or objective function with convex surrogates, providing lower bounds on the optimal value or feasible solutions close to the original problem. These methods are particularly useful since non-convex QCQPs are NP-hard in general, and relaxations enable tractable computations via convex optimization solvers. Common strategies include dualizing constraints, linearizing non-convex terms, and lifting the problem to higher dimensions.

Lagrangian relaxation addresses non-convex QCQPs by incorporating quadratic constraints into the objective function using non-negative Lagrange multipliers \lambda \in \mathbb{R}^m_+, forming the Lagrangian L(x, \lambda) = f_0(x) + \sum_{i=1}^m \lambda_i f_i(x), where f_i(x) = x^T P_i x + q_i^T x + r_i for i=0,\dots,m. The dual function g(\lambda) = \inf_x L(x, \lambda) yields a lower bound on the optimal value by solving \max_{\lambda \geq 0} g(\lambda), often formulated as a semidefinite program for computational tractability. This relaxation provides tight bounds in cases with few constraints, justified by strong duality under certain conditions like the S-procedure for one quadratic constraint. Extensions integrate it into branch-and-bound frameworks, where dual bounds prune suboptimal branches during global search.

Linearization, often via successive convex approximation (SCA), handles non-convexity by iteratively replacing non-convex quadratic terms with convex surrogates using first-order Taylor expansions. For a non-convex constraint x^H A_m x \leq b_m, decompose A_m = A_m^{(+)} + A_m^{(-)} into positive and negative semidefinite parts, then approximate the concave part around a current point z as x^H A_m^{(-)} x \leq 2 \Re\{z^H A_m^{(-)} x\} - z^H A_m^{(-)} z, yielding a convex surrogate solvable as a second-order cone program. Algorithms like feasible point pursuit SCA introduce slack variables to maintain feasibility, penalizing violations to drive convergence to a Karush-Kuhn-Tucker (KKT) point of the original problem. These methods ensure non-increasing objective values and are applied in signal processing tasks, such as multicast beamforming, outperforming randomization-based approaches in feasibility.

The reformulation-linearization technique (RLT) tightens relaxations by introducing new variables for bilinear products and generating valid linear inequalities from bounds on the variables. For a QCQP with box constraints l \leq x \leq u, reformulate by replacing x_i x_j with X_{ij}, then multiply bound constraints to derive inequalities like X_{ij} - l_i x_j - l_j x_i + l_i l_j \geq 0, resulting in a linear program with O(n^2) variables and constraints. Higher-level RLT hierarchies (e.g., level k) multiply up to k factors for progressively tighter bounds, converging to the convex hull as k \to n. Applied to box-constrained QCQPs, RLT provides lower bounds with gaps up to 70% on test instances, improved by combining with other relaxations, and is computationally efficient compared to semidefinite methods.

The Shor relaxation offers a foundational lifting approach by homogenizing the QCQP and introducing a matrix variable X = \begin{pmatrix} 1 \\ x \end{pmatrix} \begin{pmatrix} 1 & x^T \end{pmatrix}, replacing the rank-one conditions X \succeq 0, \operatorname{rank}(X) = 1 with positive semidefiniteness X \succeq 0 alone. This transforms the non-convex problem into a semidefinite program, providing a convex outer approximation whose optimal value lower-bounds the original. While exactness holds for certain structured cases, it generally yields approximate solutions recoverable via randomization or Gaussian rounding.
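A minimal CVXPY sketch of the Shor lift on a toy instance with an indefinite objective over the unit ball; the data is illustrative, and for this single-ball constraint the relaxation is in fact tight (consistent with the S-lemma discussed below):

```python
import cvxpy as cp
import numpy as np

# Shor relaxation sketch for  min x^T P0 x + q0^T x + r0
#                             s.t. x^T P1 x + q1^T x + r1 <= 0,
# lifting to Z = [[1, x^T], [x, X]] >= 0 and dropping rank(Z) = 1.
n = 3
P0 = np.diag([1.0, -1.0, 0.5])              # indefinite objective matrix
q0, r0 = np.zeros(n), 0.0
P1, q1, r1 = np.eye(n), np.zeros(n), -1.0   # unit-ball constraint x^T x <= 1

def lifted(P, q, r):
    """Embed (P, q, r) into the (n+1) x (n+1) homogenized matrix."""
    return np.block([[np.array([[r]]), q[None, :] / 2],
                     [q[:, None] / 2, P]])

Z = cp.Variable((n + 1, n + 1), symmetric=True)
constraints = [Z >> 0, Z[0, 0] == 1,
               cp.trace(lifted(P1, q1, r1) @ Z) <= 0]
prob = cp.Problem(cp.Minimize(cp.trace(lifted(P0, q0, r0) @ Z)), constraints)
prob.solve(solver=cp.SCS)
print("lower bound:", prob.value)           # approximately -1, attained by x = (0, 1, 0)
```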

Semidefinite Programming Relaxation

The semidefinite programming (SDP) relaxation, originally introduced by Shor, provides a convex approximation to the nonconvex quadratically constrained quadratic program (QCQP) by lifting the problem into the space of symmetric positive semidefinite matrices. Consider the general QCQP formulated as minimizing x^\top P x + q^\top x + r subject to x^\top A_i x + b_i^\top x + c_i \leq 0 for i = 1, \dots, m, where P, A_i \in \mathbb{S}^n and q, b_i \in \mathbb{R}^n. The relaxation introduces the matrix variable X \in \mathbb{S}^n and vector x \in \mathbb{R}^n, replacing quadratic terms x^\top M x with linear traces \operatorname{Tr}(M X) under the substitution X = x x^\top. To handle affine terms, the problem is homogenized by considering the augmented matrix \begin{bmatrix} 1 & x^\top \\ x & X \end{bmatrix} \succeq 0. The resulting SDP is: \begin{align*} \min_{X, x} \quad & \operatorname{Tr}(P X) + q^\top x + r \\ \text{s.t.} \quad & \operatorname{Tr}(A_i X) + b_i^\top x + c_i \leq 0, \quad i = 1, \dots, m, \\ & \begin{bmatrix} 1 & x^\top \\ x & X \end{bmatrix} \succeq 0. \end{align*} This formulation relaxes the nonconvex rank-one constraint \operatorname{rank}(X) = 1 (or equivalently, X \succeq x x^\top), yielding a tractable convex program solvable in polynomial time. The SDP relaxation is tight—meaning the optimal value matches that of the original QCQP and an optimal rank-one solution exists—under specific conditions, such as for homogeneous QCQPs with a single quadratic constraint (via the S-lemma), or for multiple constraints when the matrices have sufficient eigenvalue multiplicity (e.g., the smallest eigenvalue has multiplicity at least m+1) ensuring a rank-one optimal solution exists in the SDP. The S-lemma states that for two homogeneous quadratic forms q_1(x) = x^\top F_1 x and q_2(x) = x^\top F_2 x, assuming there exists x with q_2(x) > 0, then q_1(x) \geq 0 for all x with q_2(x) \geq 0 if and only if there exists \lambda \geq 0 such that F_1 - \lambda F_2 \succeq 0. This duality implies that the SDP solution decomposes into a rank-one matrix for such problems. For non-tight cases, the SDP provides approximation guarantees. In certain indefinite QCQPs, such as maximizing an indefinite quadratic over the unit sphere, randomized rounding of the SDP solution yields a 1/2-approximation in expectation. A prominent example is the maximum cut problem, a special nonconvex QCQP equivalent to maximizing \frac{1}{4} \sum_{i,j} w_{ij} (1 - x_i x_j) subject to x \in \{-1,1\}^n, where the Goemans-Williamson algorithm applies SDP relaxation followed by random hyperplane rounding to achieve an approximation ratio of at least 0.87856. Computationally, the SDP relaxation involves optimizing over semidefinite cones of dimension (n+1) \times (n+1), resulting in approximately O(n^2) decision variables, which scales quadratically with the original problem size n. Standard interior-point SDP solvers, such as SeDuMi, efficiently handle these problems for moderate n (up to a few hundred) by exploiting sparsity and structure in the constraints.
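A compact sketch of the Goemans-Williamson pipeline on an illustrative unit-weight 5-cycle: solve the SDP relaxation with CVXPY, factor the solution, and round with random hyperplanes (the graph, seed, and trial count are assumptions for illustration):

```python
import cvxpy as cp
import numpy as np

# Goemans-Williamson sketch: SDP relaxation of max-cut plus random-hyperplane
# rounding, on a unit-weight 5-cycle.
rng = np.random.default_rng(1)
n = 5
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0

# Relaxation: maximize (1/4) sum_ij W_ij (1 - X_ij) with diag(X) = 1, X >= 0.
X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4),
                  [X >> 0, cp.diag(X) == 1])
prob.solve(solver=cp.SCS)

# Rounding: factor X = V^T V, then cut with random hyperplanes through the origin.
w_eig, U = np.linalg.eigh(X.value)
V = np.sqrt(np.clip(w_eig, 0, None))[:, None] * U.T   # column j satisfies v_j^T v_k = X_jk
best = -np.inf
for _ in range(50):
    x = np.sign(V.T @ rng.standard_normal(n))         # signs of projections onto a random direction
    best = max(best, 0.25 * np.sum(W * (1 - np.outer(x, x))))
print("SDP bound:", prob.value, "rounded cut:", best)
```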

Applications

In Engineering and Control

In control theory, quadratically constrained quadratic programs (QCQPs) arise in the formulation of optimal control problems featuring quadratic cost functions and stability constraints, such as those ensuring bounded state trajectories or performance guarantees. For instance, the infinite-horizon linear quadratic regulator (LQR) with constraints on states and inputs can be reformulated as a QCQP to handle risk-sensitive objectives, where the quadratic cost penalizes deviations from desired behavior while quadratic constraints enforce stability bounds. Linear matrix inequalities (LMIs) play a central role in H-infinity control synthesis, enabling the design of controllers that minimize the worst-case gain from disturbances to outputs through quadratic stability conditions. In signal processing, QCQPs are prominently applied to beamforming in antenna arrays, where the objective is typically to minimize transmit power while satisfying quadratic constraints on signal response at desired directions. Adaptive beamforming algorithms solve QCQPs to reconstruct interference-plus-noise covariance matrices, ensuring robust beam patterns that maintain signal-to-interference-plus-noise ratios under uncertainties like steering vector mismatches. This approach is particularly effective in array signal processing for null broadening, where quadratic constraints shape the beam to suppress wideband interferers while preserving mainlobe response. Resource allocation problems in sensor networks frequently model localization as a QCQP, leveraging quadratic distance constraints between nodes to estimate positions from ranging measurements. In ad-hoc wireless sensor networks, the sensor network localization (SNL) problem minimizes a quadratic error in position estimates subject to quadratic equality or inequality constraints derived from measured distances to anchors, enabling accurate node placement without full connectivity. These formulations account for noisy range data by relaxing constraints into quadratic inequalities, facilitating scalable solutions for large-scale deployments. Optimal power flow (OPF) problems in electrical engineering are formulated as nonconvex QCQPs, minimizing generation costs or power losses subject to quadratic constraints on voltage magnitudes, phase angles, and power balance equations. These models handle nonlinear AC power flows and are solved using relaxations like semidefinite programming for tractable approximations in grid operations. Post-2010 developments have extended QCQPs to robotics path planning, particularly in unmanned aerial vehicle (UAV) trajectory optimization, where quadratic costs on energy or time compete with constraints on collision avoidance and dynamics. For quadrotor navigation in cluttered environments, QCQPs generate feasible trajectories by minimizing quadratic deviations from reference paths while enforcing quadratic bounds on velocity and acceleration to respect vehicle limits. In multi-UAV scenarios, convex-QCQP formulations enable online replanning, incorporating ellipsoidal constraints for obstacle avoidance and ensuring collision-free paths in dynamic settings. Semidefinite programming relaxations are occasionally employed to solve these nonconvex QCQPs efficiently.
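As an illustration of the beamforming use case, the following sketch poses a norm-constrained adaptive beamformer as a convex QCQP in CVXPY; the array geometry, sample covariance, and bound gamma are illustrative assumptions, not a specific published design:

```python
import cvxpy as cp
import numpy as np

# Hedged sketch: minimize output power w^H R w subject to a distortionless
# response in the assumed look direction and a quadratic norm bound on w
# for robustness to steering-vector mismatch.
rng = np.random.default_rng(2)
n = 8                                         # half-wavelength uniform linear array
theta = np.deg2rad(20.0)                      # assumed look direction
a = np.exp(1j * np.pi * np.arange(n) * np.sin(theta))   # steering vector

S = rng.standard_normal((n, 4 * n)) + 1j * rng.standard_normal((n, 4 * n))
R = (S @ S.conj().T) / (4 * n) + np.eye(n)    # sample covariance, made positive definite
R = 0.5 * (R + R.conj().T)                    # enforce exact Hermitian symmetry

w = cp.Variable(n, complex=True)
gamma = 2.0 / n                               # illustrative norm bound
prob = cp.Problem(cp.Minimize(cp.real(cp.quad_form(w, R))),
                  [a.conj() @ w == 1,         # distortionless response a^H w = 1
                   cp.sum_squares(w) <= gamma])
prob.solve()
print(prob.status, prob.value)
```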

In Statistics and Machine Learning

In portfolio optimization, the Markowitz model becomes a QCQP when quadratic risk constraints are imposed, such as maximizing expected return subject to an upper bound on the portfolio variance, or minimizing variance subject to a minimum expected return. With a positive semidefinite covariance matrix these risk-constrained variants are convex QCQPs; non-convexity enters through additional discrete or structural requirements on the portfolio. Such formulations are prevalent in financial modeling to balance diversification and performance, often solved via semidefinite relaxations for tractability. The dual optimization problem of support vector machines (SVMs) is a quadratic program (QP), particularly for the hard-margin SVM, where one maximizes a concave quadratic form in the Lagrange multipliers, with the quadratic term defined by the kernel matrix, subject to nonnegativity constraints and a linear equality constraint enforcing class balance. This structure captures margin maximization, enabling kernel-based classification by solving for support vectors that define the decision boundary. The approach is foundational in binary classification tasks, with extensions to soft-margin cases incorporating linear slack constraints. In clustering and dimensionality reduction, QCQPs appear in extensions of principal component analysis (PCA), such as sparse PCA, where the problem is cast as a QCQP to maximize variance explained under quadratic constraints on loadings, promoting interpretable low-dimensional representations. These methods enhance unsupervised learning by incorporating structural priors, often using alternating optimization to handle non-convexity. Despite the NP-hardness of general non-convex QCQPs, practical instances in these domains rely on efficient approximations.
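A brief sketch of the hard-margin SVM dual as a QP in CVXPY, on illustrative separable toy data with a linear kernel:

```python
import cvxpy as cp
import numpy as np

# Hard-margin SVM dual: maximize
#   sum_i alpha_i - (1/2) sum_ij alpha_i alpha_j y_i y_j k(x_i, x_j)
# subject to alpha >= 0 and y^T alpha = 0. Toy separable data below.
rng = np.random.default_rng(3)
m = 20
X = np.vstack([rng.standard_normal((m // 2, 2)) + 2.5,
               rng.standard_normal((m // 2, 2)) - 2.5])
y = np.concatenate([np.ones(m // 2), -np.ones(m // 2)])

K = X @ X.T                                   # linear-kernel Gram matrix
Q = (y[:, None] * y[None, :]) * K + 1e-9 * np.eye(m)   # tiny ridge for numerical PSD

alpha = cp.Variable(m)
prob = cp.Problem(
    cp.Maximize(cp.sum(alpha) - 0.5 * cp.quad_form(alpha, Q)),
    [alpha >= 0, y @ alpha == 0])
prob.solve()

# Support vectors carry strictly positive multipliers; the primal normal
# vector is recovered from the dual solution.
sv = np.where(alpha.value > 1e-6)[0]
w_vec = (alpha.value * y) @ X
print("support vectors:", sv, "w:", w_vec)
```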

Examples and Implementations

Illustrative Example

Consider the following convex quadratically constrained quadratic program (QCQP) in two dimensions, which minimizes a quadratic objective subject to two quadratic inequality constraints: \min_{x \in \mathbb{R}^2} \ (x_1 - 2)^2 + x_2^2 subject to x_1^2 + x_2^2 \leq 1, \quad (x_1 - 1)^2 + x_2^2 \leq 1. This formulation has a positive definite objective Hessian (identity matrix scaled by 2) and positive definite constraint Hessians, ensuring convexity. To solve this convex QCQP, apply the Karush-Kuhn-Tucker (KKT) conditions, which provide necessary and sufficient optimality criteria for convex problems. The Lagrangian is \mathcal{L}(x, \lambda_1, \lambda_2) = (x_1 - 2)^2 + x_2^2 + \lambda_1 (x_1^2 + x_2^2 - 1) + \lambda_2 ((x_1 - 1)^2 + x_2^2 - 1), where \lambda_1 \geq 0 and \lambda_2 \geq 0 are dual multipliers. The stationarity conditions yield \frac{\partial \mathcal{L}}{\partial x_1} = 2(x_1 - 2) + 2\lambda_1 x_1 + 2\lambda_2 (x_1 - 1) = 0, \frac{\partial \mathcal{L}}{\partial x_2} = 2x_2 + 2\lambda_1 x_2 + 2\lambda_2 x_2 = 0. From the second equation, x_2 (1 + \lambda_1 + \lambda_2) = 0. Since 1 + \lambda_1 + \lambda_2 > 0, it follows that x_2 = 0. Substituting into the first equation and the constraints simplifies the problem to one dimension along x_1, with feasible interval [0, 1]. Assuming the first constraint binds (\lambda_1 > 0), set x_1^2 = 1, so x_1 = 1 (the positive root is feasible). This gives \lambda_1 = 1 and \lambda_2 = 0, satisfying primal feasibility, dual feasibility, and complementary slackness. The second constraint holds with slack. Thus, the optimal solution is x^* = (1, 0) with objective value f_0(x^*) = 1. Geometrically, the feasible set is the lens-shaped intersection of two unit disks centered at (0, 0) and (1, 0). The objective function represents squared Euclidean distance from the point (2, 0), with circular level sets expanding outward from that center. The optimizer x^* lies at the boundary point of the feasible set closest to (2, 0). For a non-convex variant illustrating multiple local optima, modify the objective to include an indefinite Hessian, as in the trust-region subproblem \min_x \frac{1}{2} x^T A x - b^T x subject to \|x\|^2 \leq \Delta^2, where A has both positive and negative eigenvalues. Depending on the trust-region radius \Delta, this QCQP can exhibit multiple stationary points, including the global minimizer, saddle points, and at most one local minimizer that is not global.
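The worked example can be verified numerically; a short CVXPY sketch recovers both the optimizer and the KKT multipliers (up to solver tolerance):

```python
import cvxpy as cp
import numpy as np

# Numerical check of the worked example: minimize (x1 - 2)^2 + x2^2 over the
# lens-shaped intersection of two unit disks centered at (0, 0) and (1, 0).
x = cp.Variable(2)
prob = cp.Problem(cp.Minimize(cp.sum_squares(x - np.array([2.0, 0.0]))),
                  [cp.sum_squares(x) <= 1,
                   cp.sum_squares(x - np.array([1.0, 0.0])) <= 1])
prob.solve()
print(np.round(x.value, 6), round(prob.value, 6))          # -> [1. 0.] 1.0
# The constraint duals recover the multipliers lambda1 = 1, lambda2 = 0.
print([round(float(c.dual_value), 6) for c in prob.constraints])
```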

Software Solvers and Languages

Several software tools and libraries facilitate the modeling and solving of quadratically constrained quadratic programs (QCQPs), particularly for convex cases and mixed-integer variants. Open-source options like CVXPY in Python support convex QCQPs through interfaces to solvers such as ECOS and SCS, enabling the formulation of quadratic objectives and quadratic constraints, while extensions like the QCQP package handle nonconvex instances via relaxations and heuristics. Gurobi, a commercial solver with academic licensing, addresses mixed-integer QCQPs (MIQCQPs) by supporting quadratic objectives and constraints, with convexity guaranteed when the quadratic terms have positive semidefinite matrices, and employs branch-and-bound for nonconvex cases. Among commercial tools, MOSEK excels in SDP relaxations of QCQPs, converting nonconvex problems into semidefinite programs solvable via interior-point methods, with examples demonstrating mixed-integer relaxations. In MATLAB, the quadprog function solves quadratic programs with linear constraints, while quadratic constraints are handled by fmincon in the Optimization Toolbox through general nonlinear programming. Scripting languages enhance QCQP modeling: YALMIP in MATLAB provides a high-level interface for defining QCQPs and interfacing with solvers like MOSEK or quadprog, automatically detecting convexity for appropriate solver selection. Similarly, JuMP in Julia supports QCQP formulation with quadratic expressions and constraints, integrating with solvers like Gurobi or Ipopt for convex and nonconvex solving, including extensions like PolyJuMP for polynomial variants. In the 2020s, integrations with machine learning frameworks have emerged for QCQP applications; for instance, CVXPY's compatibility with TensorFlow and PyTorch allows embedding convex QCQPs in neural network training pipelines, often via SDP relaxations for nonconvex handling. SDP solvers such as SDPT3 supply the underlying relaxation computations in several of these toolchains.
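As a small illustration of solver dispatch in such modeling layers, this CVXPY sketch solves one convex QCQP with whichever of two conic back ends is installed; available solver names depend on the local installation:

```python
import cvxpy as cp

# The same convex QCQP dispatched to different installed back ends.
x = cp.Variable(2)
prob = cp.Problem(cp.Minimize(cp.sum_squares(x - 1)),
                  [cp.sum_squares(x) <= 1])

print(cp.installed_solvers())            # e.g. ['CLARABEL', 'ECOS', 'SCS', ...]
for name in ("ECOS", "SCS"):             # both handle SOC-representable QCQPs
    if name in cp.installed_solvers():
        prob.solve(solver=name)
        print(name, prob.status, round(prob.value, 6))
```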