Linear complementarity problem
The linear complementarity problem (LCP), denoted as LCP(q, M), is a core problem in mathematical programming that requires finding a vector z \in \mathbb{R}^n such that z \geq 0, q + Mz \geq 0, and z^T (q + Mz) = 0, where q \in \mathbb{R}^n is a given vector and M \in \mathbb{R}^{n \times n} is a given matrix.[1] This formulation unifies several classical problems in optimization, including linear and quadratic programming, bimatrix games, and variational inequalities, by capturing the complementarity condition that, for each index i, at least one of z_i or (q + Mz)_i must be zero.[1] The solution set, denoted SOL(q, M), may consist of zero, one, finitely many, or infinitely many points, depending on the properties of M.[1] Key matrix classes, such as P-matrices (all principal minors positive) and Q-matrices (SOL(q, M) nonempty for every q), determine existence and uniqueness of solutions; for instance, every LCP with a P-matrix has a unique solution.[1]
Historically, the LCP emerged in the 1950s through work on quadratic programming and geometric problems, with the earliest explicit formulation appearing in a 1958 paper by Samelson, Thrall, and Wesler, and the term "linear complementarity problem" coined by Richard W. Cottle in 1965.[1] Algorithms for solving LCPs, such as Lemke's complementary pivot algorithm (1965), terminate with a solution under certain conditions, for example when M is positive definite or copositive-plus.[1]
LCPs have broad applications across disciplines: in economics, they model market equilibria, such as spatial price equilibria in the work of Takayama and Judge (1971); in engineering, they address contact problems in mechanics, including rigid-body approaches and journal bearing free-boundary problems; and in operations research, they solve nonconvex quadratic programs through Karush-Kuhn-Tucker conditions.[1] Additional uses include traffic network equilibria, portfolio selection, and structural engineering problems, highlighting the LCP's role as a versatile framework for equilibrium computation.[1]
Definition and Formulation
Standard Definition
The linear complementarity problem (LCP) is a fundamental problem in mathematical optimization, defined for an n \times n matrix M and a vector q \in \mathbb{R}^n as the task of finding vectors w, z \in \mathbb{R}^n satisfying the following conditions: \begin{cases} w = q + M z, \\ w \geq 0, \\ z \geq 0, \\ w^T z = 0. \end{cases} This formulation, often denoted as \mathrm{LCP}(q, M), requires w and z to be nonnegative and orthogonal; since both vectors are nonnegative, the condition w^T z = 0 forces each componentwise product w_i z_i to vanish.[2] The nonnegativity constraints ensure feasibility in the nonnegative orthant, and the linear relation ties w directly to z via the affine transformation involving M and q.[3]
The complementarity condition w^T z = 0 therefore implies that for each index i = 1, \dots, n, at least one of w_i or z_i must be zero (though both may vanish simultaneously). This pairwise orthogonality captures the essence of complementary slackness, a concept borrowed from optimization duality, where variables and their dual counterparts cannot both be strictly positive. In vector form, it enforces w_i z_i = 0 for all i, so that for each index either w_i > 0 and z_i = 0, or w_i = 0 and z_i > 0, or both are zero.[2] Such structure arises naturally in modeling equilibrium points and constrained systems.[4]
The LCP, building on geometric insights from Samelson, Thrall, and Wesler (1958),[5] emerged in the optimization literature of the 1960s. Lemke's 1965 paper introduced complementary pivot methods as a unifying framework for solving bimatrix games and quadratic programs.[4] This standard primal form serves as the cornerstone for subsequent theoretical and computational developments in the field.[2]
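For illustration, the following Python sketch (using NumPy; the matrix and vector shown are hypothetical example data) checks whether a candidate vector z, together with the slack w = q + Mz, satisfies the three defining conditions up to a numerical tolerance.
```python
import numpy as np

def is_lcp_solution(M, q, z, tol=1e-9):
    """Check z against the LCP(q, M) conditions: z >= 0, w = q + Mz >= 0, z^T w = 0."""
    w = q + M @ z
    return bool(np.all(z >= -tol) and np.all(w >= -tol) and abs(z @ w) <= tol)

# Hypothetical example data: a positive definite M, whose LCP has the unique solution z = (1, 2).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-4.0, -5.0])
print(is_lcp_solution(M, q, np.array([1.0, 2.0])))   # True:  w = (0, 0), so z^T w = 0
print(is_lcp_solution(M, q, np.zeros(2)))            # False: w = q has negative components
```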
Variants and Generalizations
The linear complementarity problem (LCP) admits a geometric interpretation in terms of complementary cones. A solution corresponds to a point (w, z) in the nonnegative orthant \mathbb{R}^n_+ \times \mathbb{R}^n_+ satisfying the affine equation w - Mz = q, with w and z supported on complementary index sets; equivalently, q must lie in one of the complementary cones generated by columns of the matrix [I, -M]. These cones are polyhedral sets indexed by subsets \alpha \subseteq \{1, \dots, n\}: the cone \text{pos}(C_M(\alpha)) is spanned by the columns -M_{\cdot i} for i \in \alpha and the columns I_{\cdot i} for i \notin \alpha. This structure shows how solutions correspond to the ways q can be covered by complementary cones, providing insight into the geometry of the solution set.[6]
A key variant is the mixed linear complementarity problem (MLCP), which relaxes the nonnegativity constraints on some variables, allowing them to be free (unrestricted in sign) while maintaining complementarity on the remaining ones. Formally, given matrices A \in \mathbb{R}^{n \times n}, B \in \mathbb{R}^{m \times m}, C \in \mathbb{R}^{n \times m}, D \in \mathbb{R}^{m \times n}, and vectors a \in \mathbb{R}^n, b \in \mathbb{R}^m, the MLCP seeks u \in \mathbb{R}^n and v \in \mathbb{R}^m such that a + Au + Cv = 0, \quad b + Du + Bv \geq 0, \quad v \geq 0, \quad v^T (b + Du + Bv) = 0. Here, u is unrestricted and complementarity applies only to the v components, making the MLCP suitable for modeling systems with equality constraints or free variables, such as certain traffic equilibria or economic models. When A is nonsingular, u can be eliminated, and solvability often hinges on properties of the Schur complement B - D A^{-1} C, for example its being a P-matrix.[1]
The nonlinear complementarity problem (NCP) generalizes the LCP by replacing the affine map z \mapsto q + Mz with a nonlinear function f: \mathbb{R}^n \to \mathbb{R}^n: it requires finding z \geq 0 such that f(z) \geq 0 and z^T f(z) = 0, and the LCP is recovered when f(z) = q + Mz. This extension captures broader phenomena, such as nonconvex optimization or equilibrium problems where linearity assumptions fail; it preserves the complementarity structure but introduces challenges in existence and computation due to the nonlinearity of f, often addressed via smoothing or continuation methods.[1][7]
Bilevel or hierarchical LCPs extend the framework to Stackelberg games, where an upper-level problem optimizes over parameters that define a lower-level LCP solved by a follower. In this setup, the leader optimizes their objective subject to the follower's LCP equilibrium, yielding a nested structure that models leader-follower dynamics in competitive settings such as supply chains or policy design. The resulting problem is a mathematical program with complementarity constraints (MPCC), inheriting nonconvexity from the lower level.[8]
For illustration, consider the 2×2 LCP with M = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad q = \begin{pmatrix} -1 \\ -1 \end{pmatrix}. A solution is z = (1, 1)^T, w = (0, 0)^T, satisfying w = q + Mz \geq 0 and w^T z = 0. This M is not a Q-matrix: for q = (1, -1)^T the problem is feasible (for example, z = (1, 0)^T gives w = (1, 0)^T \geq 0) but admits no complementary solution.[1]
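Because this example is small, both claims can be verified exhaustively: at any solution, the indices where z may be positive form a set \alpha with w_\alpha = 0, so z_\alpha satisfies M_{\alpha\alpha} z_\alpha = -q_\alpha. The following Python sketch (using NumPy; it skips index sets whose principal submatrix is singular, so it is only an illustration for small, nondegenerate instances) enumerates all 2^n complementary index sets, recovering z = (1, 1)^T for q = (-1, -1)^T and finding no solution for q = (1, -1)^T.
```python
import numpy as np
from itertools import combinations

def solve_lcp_by_enumeration(M, q, tol=1e-9):
    """Brute-force search over all 2^n complementary index sets (only viable for small n).
    Index sets with a singular principal submatrix are skipped, so degenerate solutions
    supported on such sets may be missed."""
    n = len(q)
    solutions = []
    for k in range(n + 1):
        for alpha in combinations(range(n), k):
            z = np.zeros(n)
            a = list(alpha)
            if a:
                try:
                    z[a] = np.linalg.solve(M[np.ix_(a, a)], -q[a])
                except np.linalg.LinAlgError:
                    continue            # singular principal submatrix: skip this index set
            w = q + M @ z
            if np.all(z >= -tol) and np.all(w >= -tol):
                solutions.append(z)
    return solutions

M = np.array([[0.0, 1.0], [1.0, 0.0]])
print(solve_lcp_by_enumeration(M, np.array([-1.0, -1.0])))  # contains z = (1, 1)
print(solve_lcp_by_enumeration(M, np.array([1.0, -1.0])))   # empty: no solution, so M is not in Q
```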
Mathematical Properties
Solvability Conditions
The linear complementarity problem (LCP) defined by a matrix M \in \mathbb{R}^{n \times n} and vector q \in \mathbb{R}^n is feasible if there exist vectors w, z \in \mathbb{R}^n with w \geq 0, z \geq 0, and w = Mz + q. This condition describes the nonempty intersection of the affine space \{(w, z) \mid w = Mz + q\} with the nonnegative orthant \mathbb{R}^n_+ \times \mathbb{R}^n_+, a polyhedron known as the feasible set of the LCP. Solvability of the LCP requires, in addition, that some point of this polyhedron satisfies the complementarity condition w_i z_i = 0 for all i = 1, \dots, n.
A key solvability result holds for symmetric positive semi-definite matrices. If M is symmetric positive semi-definite (meaning x^T M x \geq 0 for all x \in \mathbb{R}^n) and LCP(q, M) is feasible, then it has at least one solution.[1] This follows from the equivalence between such LCPs and convex quadratic programs, which attain their optima whenever the objective is bounded below on the nonempty feasible set, and whose Karush-Kuhn-Tucker points yield complementary solutions. The solution set is convex in this case.[1]
For copositive matrices, solvability depends on the vector q and on additional structure. A matrix M is copositive if x^T M x \geq 0 for all x \geq 0; copositivity alone does not guarantee that every feasible LCP(q, M) is solvable. The classical result concerns copositive-plus matrices, which are copositive matrices with the additional property that x \geq 0 and x^T M x = 0 imply (M + M^T)x = 0. For copositive-plus M, feasibility of LCP(q, M) guarantees the existence of a complementary solution (that is, M belongs to the class Q_0), as established in early analyses of complementary pivot methods. In the simplest case, if q \geq 0, then z = 0 with w = q is itself a solution, so the substantive content of these results concerns vectors q with negative components.[1]
Uniqueness is governed by the matrix class: LCP(q, M) has a unique solution for every q \in \mathbb{R}^n if and only if M is a P-matrix, and positive definite matrices are P-matrices, so positive definite LCPs always have a unique solution. A solution (w, z) is strictly complementary if w_i + z_i > 0 for all i; strict complementarity removes the ambiguity of indices at which both variables vanish and serves as a standard nondegeneracy assumption in sensitivity analysis of the underlying polyhedral structure.
The LCP also emerges as the Karush-Kuhn-Tucker (KKT) conditions for constrained optimization problems. Specifically, for a convex quadratic program with linear inequality constraints, the KKT system can be formulated as an LCP in the primal variables and dual multipliers, where solvability of the LCP corresponds to the existence of optimal Lagrange multipliers satisfying stationarity, primal feasibility, dual feasibility, and complementary slackness (for general nonlinear programs the analogous system is a nonlinear complementarity problem). This connection underscores that feasible LCPs arising from convex programs are solvable, mirroring the guaranteed existence of KKT points under constraint qualifications.
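As a concrete instance of this reduction (a standard construction, stated here for a quadratic objective with linear constraints), consider the convex quadratic program \min \tfrac{1}{2} x^T Q x + c^T x subject to Ax \geq b and x \geq 0, with Q symmetric positive semi-definite and multipliers y \geq 0 attached to Ax \geq b. Writing u = Qx + c - A^T y and v = Ax - b, the KKT conditions x \geq 0, u \geq 0, x^T u = 0 and y \geq 0, v \geq 0, y^T v = 0 are exactly \operatorname{LCP}(q, M) in the variables z = (x, y) with slacks w = (u, v), where M = \begin{pmatrix} Q & -A^T \\ A & 0 \end{pmatrix}, \quad q = \begin{pmatrix} c \\ -b \end{pmatrix}. Since z^T M z = x^T Q x \geq 0, this M is positive semi-definite, so the solvability result above applies whenever the KKT system is feasible.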
Matrix Classes and Positivity
In the theory of the linear complementarity problem (LCP), certain classes of matrices guarantee solvability or uniqueness of solutions for specific or for all right-hand-side vectors q. These classifications are pivotal for understanding the structural properties that govern existence and uniqueness of solutions to \operatorname{LCP}(q, M), where M is an n \times n matrix and q \in \mathbb{R}^n.
A matrix M belongs to class P, and is called a P-matrix, if all its principal minors are positive. This property ensures that \operatorname{LCP}(q, M) has a unique solution for every q \in \mathbb{R}^n.[1] The P-matrix characterization implies that Lemke's algorithm always succeeds in finding the unique solution, highlighting the role of this class in guaranteeing global uniqueness across all q.[1]
The broader class Q consists of matrices M for which \operatorname{LCP}(q, M) is solvable for every q \in \mathbb{R}^n. P-matrices form a subclass of Q, but Q includes additional matrices for which solvability holds without uniqueness.[1] Symmetric positive semi-definite (PSD) matrices belong to the larger class Q_0 (feasibility implies solvability), but they need not belong to Q.[1]
Copositive matrices and their strict variant play a key role in positivity-related solvability. A matrix M is copositive if z^T M z \geq 0 for all nonnegative vectors z \geq 0, and strictly copositive if z^T M z > 0 for all z \geq 0 with z \neq 0. For copositive M and q \geq 0, the trivial point z = 0 solves \operatorname{LCP}(q, M); strictly copositive matrices give a stronger guarantee, since they belong to Q, so \operatorname{LCP}(q, M) is solvable for every q with a bounded solution set.[1]
An illustrative example is a skew-symmetric matrix M, satisfying M^T = -M. Such matrices satisfy z^T M z = 0 for all z, hence are copositive-plus, and therefore belong to class Q_0: whenever \operatorname{LCP}(q, M) is feasible, it is solvable.[1] For instance, the 2×2 matrix \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} exemplifies this property.[1]
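Because the P-matrix property is stated through principal minors, it can be verified directly for small matrices. The following Python sketch (using NumPy; a brute-force test over all 2^n − 1 principal submatrices, so only practical for small n) checks the property and, applied to the skew-symmetric example above, reports that it is not a P-matrix, since its diagonal minors vanish.
```python
import numpy as np
from itertools import combinations

def is_p_matrix(M, tol=1e-12):
    """Brute-force P-matrix test: every principal minor must be strictly positive."""
    n = M.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            if np.linalg.det(M[np.ix_(idx, idx)]) <= tol:
                return False
    return True

print(is_p_matrix(np.array([[2.0, 1.0], [1.0, 2.0]])))   # True:  positive definite, hence P
print(is_p_matrix(np.array([[0.0, -1.0], [1.0, 0.0]])))  # False: the skew-symmetric example
```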
Connections to Optimization
Relation to Quadratic Programming
The linear complementarity problem (LCP) arises naturally as the set of Karush-Kuhn-Tucker (KKT) optimality conditions for a convex quadratic program (QP). Consider the standard QP of minimizing the objective function \frac{1}{2} x^T Q x + c^T x subject to the nonnegativity constraint x \geq 0, where Q is a positive semidefinite (PSD) matrix and c \in \mathbb{R}^n. The KKT conditions for this problem require stationarity (Q x + c - z = 0), primal feasibility (x \geq 0), dual feasibility (z \geq 0), and complementary slackness (x^T z = 0), which together form the LCP(q, M) with q = c and M = Q. This equivalence implies that solving the LCP provides the optimal solution to the QP when Q is PSD, since the QP is then convex and every KKT point is a global minimizer.
When Q is PSD, the corresponding LCP is solvable precisely when the QP attains its minimum, which happens whenever the objective is bounded below on the nonnegative orthant; if Q is positive definite, the LCP has a unique solution for every q. This links the convexity of the QP directly to LCP solvability. For illustration, consider the simple QP of minimizing \frac{1}{2} x^2 + x subject to x \geq 0. The KKT conditions yield the LCP with M = 1 and q = 1, solved by x = 0 with multiplier z = 1, confirming the QP optimum at x = 0.
The LCP also captures the saddle-point conditions linking a primal QP and its dual, unifying the two through complementarity. Specifically, the primal QP \min_{x \geq 0} \frac{1}{2} x^T Q x + c^T x has Lagrangian dual \max_{z \geq 0} -\frac{1}{2} (z - c)^T Q^\dagger (z - c) (where Q^\dagger is the pseudoinverse; when Q is singular, the dual additionally requires z - c to lie in the range of Q), and joint primal-dual optimality is equivalent to solving the LCP(q, M) with q = c and M = Q. The connection between LCP and QP was formalized in the 1960s, with Cottle and Dantzig introducing the LCP framework in 1968 as a unifying pivot-based approach for solving QPs and related problems.[1]
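A minimal numerical sketch of this correspondence (assuming NumPy and SciPy; the matrix Q and vector c below are hypothetical example data): solving the bound-constrained QP with a generic solver and then forming z = Qx + c yields a complementary pair, that is, a solution of LCP(c, Q).
```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical example data: symmetric positive semidefinite Q and vector c,
# so min (1/2) x'Qx + c'x over x >= 0 is a convex QP whose KKT system is LCP(c, Q).
Q = np.array([[2.0, 0.0], [0.0, 1.0]])
c = np.array([-2.0, 1.0])

objective = lambda x: 0.5 * x @ Q @ x + c @ x
gradient = lambda x: Q @ x + c

result = minimize(objective, x0=np.zeros(2), jac=gradient,
                  method="L-BFGS-B", bounds=[(0.0, None)] * 2)

x = result.x
z = Q @ x + c                 # dual slack from the stationarity condition
print(x, z, x @ z)            # expect x = (1, 0), z = (0, 1), and x'z close to 0
```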
Link to Variational Inequalities
The linear complementarity problem (LCP) can be reformulated as a variational inequality (VI) over the nonnegative orthant. Specifically, given a matrix M \in \mathbb{R}^{n \times n} and vector q \in \mathbb{R}^n, the VI seeks a vector z \in K such that (q + Mz)^T (z' - z) \geq 0 for all z' \in K, where K = \mathbb{R}^n_+ is the nonnegative orthant. This VI formulation is equivalent to the standard LCP, which requires finding z \in \mathbb{R}^n such that z \geq 0, w = q + Mz \geq 0, and z^T w = 0. The equivalence follows by testing the VI inequality at suitable points: choosing z' = 0 and z' = 2z forces w^T z = 0, and then w^T z' \geq 0 for all z' \geq 0 forces w \geq 0; conversely, any LCP solution satisfies the VI inequality, since w^T (z' - z) = w^T z' \geq 0.
When M is positive semi-definite, the mapping F(z) = q + Mz is monotone, meaning (F(z) - F(z'))^T (z - z') \geq 0 for all z, z' \in K. For monotone VIs over the orthant, existence of a solution is guaranteed under conditions such as coercivity of F or boundedness of the feasible region; in the affine case, this recovers the result that a feasible LCP with a positive semi-definite matrix is solvable.
More broadly, the LCP is the special case of a VI in which the operator F is affine, i.e., F(z) = Mz + q, and the set is the nonnegative orthant. This connection embeds the LCP within the general theory of VIs, allowing the application of VI-specific techniques, such as Minty formulations, projection methods, or proximal point methods, to LCP solution.
A practical illustration arises in traffic network equilibrium models, where the problem of finding user-optimal flows can be cast as a VI over flow variables in the nonnegative orthant. For linear link cost functions, this VI reduces directly to an LCP, enabling the use of LCP solvers for computing equilibria in transportation systems.
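One such VI technique is the basic projection (projected-gradient) iteration z \leftarrow P_K(z - \gamma (q + Mz)), whose fixed points over K = \mathbb{R}^n_+ are exactly the LCP solutions. The following Python sketch (using NumPy; the symmetric positive definite M is hypothetical example data, and the step size \gamma is chosen conservatively so the iteration converges) applies this map and reports the natural-map residual \|z - \max(0, z - (q + Mz))\|, which vanishes exactly at a solution.
```python
import numpy as np

# Hypothetical example data: a symmetric positive definite M, so F(z) = q + Mz is
# strongly monotone and the projected-gradient iteration below converges.
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])

def natural_residual(z):
    """||z - max(0, z - (q + Mz))||, which is zero exactly at an LCP/VI solution."""
    return np.linalg.norm(z - np.maximum(0.0, z - (q + M @ z)))

gamma = 1.0 / np.linalg.norm(M, 2)   # conservative step size: inverse spectral norm
z = np.zeros(2)
for _ in range(500):
    z = np.maximum(0.0, z - gamma * (q + M @ z))   # z <- P_K(z - gamma * F(z))

w = q + M @ z
print(z, w, natural_residual(z))     # residual should be near zero
```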
Solution Algorithms
Pivot-Based Methods
Pivot-based methods for solving the linear complementarity problem (LCP) rely on combinatorial pivoting techniques analogous to those in the simplex method for linear programming, but adapted to enforce the complementarity condition w^T z = 0 where w = q + M z, w \geq 0, and z \geq 0. These methods generate a sequence of (almost) complementary bases by pivoting on the pairs of variables (z_i, w_i), keeping at most one member of each pair basic at a time, and traverse a path of basic solutions until a feasible complementary solution is found or the method terminates on a ray without one.
Lemke's Algorithm
Lemke's complementary pivoting algorithm, introduced in 1965, is a foundational pivot-based method that augments the LCP with an artificial variable z_0 \geq 0 and a covering vector d > 0 (typically the all-ones vector), forming the augmented system w = q + d z_0 + M z with w \geq 0, z \geq 0, z_0 \geq 0 and the complementarity condition z^T w = 0. If q \geq 0, then z = 0 (with z_0 = 0 and w = q) already solves the LCP; otherwise the algorithm starts by raising z_0 to the smallest value for which w = q + d z_0 \geq 0, which drives one component w_r to zero. It then performs a sequence of complementary pivots: at each step the entering variable is the complement of the variable that just left the basis, and a minimum ratio test determines the next leaving variable so that all basic variables remain nonnegative. Pivoting continues until z_0 leaves the basis (yielding an LCP solution) or a secondary ray is reached, in which case the algorithm stops without a solution; for matrix classes such as copositive-plus matrices, ray termination certifies that no solution exists. The steps of Lemke's algorithm are as follows:
- Initialization: Set z = 0 and increase z_0 to z_0 = \max_i (-q_i / d_i), so that w = q + d z_0 \geq 0 with w_r = 0 at a maximizing index r. The initial almost-complementary basis consists of z_0 together with the variables w_i for i \neq r; the complement z_r is the first entering variable.
- Pivoting Phase: The entering variable is the complement of the variable that left the basis at the previous pivot (if w_k just became nonbasic, then z_k enters, and vice versa). Bring it into the basis by pivoting on its column in the tableau representation of the system, using the minimum ratio test over the rows in which increasing the entering variable drives a basic variable toward zero to select the leaving (blocking) variable.
- Termination Check: If z_0 has left the basis (so z_0 = 0), the current basic solution yields a complementary pair (z, w) solving the LCP. If the ratio test finds no blocking row, the entering variable can increase without bound along a secondary ray and the algorithm stops without a solution; for copositive-plus matrices (which include positive semi-definite matrices), this outcome certifies that the LCP is infeasible. Otherwise, repeat the pivoting phase. An illustrative implementation sketch of these steps follows.
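The following Python sketch (using NumPy) implements these steps with the covering vector d = (1, \dots, 1). It omits the lexicographic anti-cycling rule and any special handling of degeneracy, so it should be read as an illustration of the pivoting scheme rather than a robust solver.
```python
import numpy as np

def lemke(M, q, max_iter=1000, tol=1e-9):
    """Complementary pivoting with covering vector d = (1, ..., 1).
    Returns z with w = q + M z, w >= 0, z >= 0, z^T w = 0, or None on ray termination."""
    n = len(q)
    if np.min(q) >= 0:
        return np.zeros(n)                      # q >= 0: z = 0 already solves the LCP
    # Equations  I w - M z - d z0 = q ; columns 0..n-1: w, n..2n-1: z, 2n: z0, last: rhs
    T = np.hstack([np.eye(n), -M, -np.ones((n, 1)), q.reshape(-1, 1)])
    basis = list(range(n))                      # the w variables are basic initially
    entering = 2 * n                            # z0 enters first
    r = int(np.argmin(q))                       # row of the most negative q_i leaves first
    for _ in range(max_iter):
        T[r] = T[r] / T[r, entering]            # pivot on element (r, entering)
        for i in range(n):
            if i != r:
                T[i] = T[i] - T[i, entering] * T[r]
        leaving, basis[r] = basis[r], entering
        if leaving == 2 * n:                    # z0 has left the basis: solution found
            z = np.zeros(n)
            for i, b in enumerate(basis):
                if n <= b < 2 * n:
                    z[b - n] = T[i, -1]
            return z
        entering = leaving + n if leaving < n else leaving - n   # complementary pivot rule
        col, rhs = T[:, entering], T[:, -1]
        pos = col > tol
        if not np.any(pos):
            return None                         # secondary ray: pivoting cannot continue
        ratios = np.full(n, np.inf)
        ratios[pos] = rhs[pos] / col[pos]       # minimum ratio test
        r = int(np.argmin(ratios))
    return None

# Example usage (positive definite M, so the method is guaranteed to find the unique solution):
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-4.0, -5.0])
print(lemke(M, q))        # approximately [1. 2.], for which w = q + M z = (0, 0)
```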