The minimax theorem is a cornerstone of game theory, first proved by John von Neumann in 1928. It states that in any finite two-player zero-sum game represented by an m \times n payoff matrix A, there exist mixed strategies p for the row player (maximizer) and q for the column player (minimizer) such that \max_p \min_q p^T A q = \min_q \max_p p^T A q, defining a unique value of the game that each player can guarantee regardless of the opponent's actions.[1] This result ensures the existence of an equilibrium in mixed strategies for such games, resolving the strategic tension between players seeking to maximize and minimize the same payoff.[2]

Von Neumann introduced the theorem in his seminal paper "Zur Theorie der Gesellschaftsspiele" ("On the Theory of Games of Strategy"), published in Mathematische Annalen, where he formalized the concept of mixed strategies—probabilistic combinations of pure strategies—to address scenarios where pure strategies lead to suboptimal outcomes.[3] His proof used a topological argument, demonstrating that the maximin value (the highest payoff the maximizer can force) equals the minimax value (the lowest payoff the minimizer can impose).[4] This work marked the birth of game theory as a mathematical discipline, influencing subsequent developments including the 1944 collaboration with Oskar Morgenstern on Theory of Games and Economic Behavior, which expanded its applications to economics.[5]

Beyond game theory, the minimax theorem has profound implications in optimization, where it underpins the duality theory of linear programming: for a primal maximization problem, the dual minimization problem achieves the same optimal value under feasible conditions, enabling efficient algorithmic solutions like the simplex method.[6] Generalizations, such as Maurice Sion's 1958 theorem, extend the result to infinite-dimensional settings for quasi-concave-quasi-convex functions over compact convex sets, broadening its utility in convex analysis and operations research.[7] The theorem's principles also inform decision-making under uncertainty in fields like artificial intelligence, where algorithms such as minimax search guide adversarial planning in games like chess.[8]
Background in game theory
Zero-sum games
A zero-sum game is a two-player mathematical model in game theory where the total payoff remains constant, typically zero, so that one player's gains exactly equal the other's losses across all possible outcomes.[9] In such games, the interests of the players are strictly opposed, with no possibility of mutual benefit from the interaction.[10] The structure is commonly represented by a payoff matrix A = (a_{ij}), where the rows correspond to the pure strategies (actions) available to player 1 (the row player), the columns to those of player 2 (the column player), and each entry a_{ij} denotes the payoff received by player 1 when player 1 chooses row i and player 2 chooses column j; player 2 receives the negative payoff -a_{ij}.[11]

A classic example is the game of rock-paper-scissors, a simple zero-sum game with three actions per player. The payoff matrix for player 1 is given by:

\begin{array}{c|ccc}
& \text{Rock} & \text{Paper} & \text{Scissors} \\
\hline
\text{Rock} & 0 & -1 & 1 \\
\text{Paper} & 1 & 0 & -1 \\
\text{Scissors} & -1 & 1 & 0 \\
\end{array}

Here, a tie yields 0, a win yields +1, and a loss yields -1, ensuring that the payoffs for both players sum to zero in every case (e.g., if player 1 plays rock and player 2 plays paper, player 1 gets -1 while player 2 gets +1).[12] This matrix illustrates the zero-sum property: no value is created or destroyed, only redistributed between opponents.

Zero-sum games differ fundamentally from non-zero-sum games, where the total payoffs can exceed or fall short of zero, allowing outcomes in which both players gain or lose collectively. For instance, the Prisoner's Dilemma is a non-zero-sum game in which two suspects interrogated separately can each receive severe penalties if both defect (total payoff low), lighter penalties if both cooperate (total payoff higher), or mixed results otherwise, highlighting opportunities for mutual improvement absent in zero-sum settings.[13] The constant-sum nature of zero-sum games emphasizes pure antagonism, as any advantage for one player directly diminishes the other's position.[14]

The term "zero-sum" was popularized by John von Neumann in his seminal 1928 paper "Zur Theorie der Gesellschaftsspiele," which formalized such games within the emerging field of game theory.[4] However, the underlying concept of interactions where one entity's gain mirrors another's loss predates this work, appearing in economic discussions as early as the mercantilist period (16th–18th centuries), when international trade was often viewed as a fixed-pie contest favoring exports over imports to accumulate wealth at competitors' expense.[15]
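The zero-sum property of this matrix can be verified mechanically. A minimal Python sketch (an illustration, with the matrix entries copied from the table above):

```python
# Rock-paper-scissors payoff matrix for player 1
# (rows and columns ordered Rock, Paper, Scissors, as in the table above).
A = [
    [ 0, -1,  1],
    [ 1,  0, -1],
    [-1,  1,  0],
]

# Zero-sum property: player 2's payoff is -a_ij, so for every pair of
# actions the two payoffs cancel and no value is created or destroyed.
total = sum(A[i][j] + (-A[i][j]) for i in range(3) for j in range(3))
assert total == 0
```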
Mixed strategies
In zero-sum games, where one player's gains result only from the other's losses, players often employ mixed strategies to introduce randomness and avoid predictable exploitation. A mixed strategy for the row player (player 1) is a probability vector p \in \mathbb{R}^m, where m is the number of pure strategies, satisfying p_i \geq 0 for all i and \sum_{i=1}^m p_i = 1, assigning a probability to each pure strategy. Similarly, for the column player (player 2) with n pure strategies, a mixed strategy q \in \mathbb{R}^n satisfies q_j \geq 0 for all j and \sum_{j=1}^n q_j = 1.[16]

The expected payoff under mixed strategies p and q, given the payoff matrix A \in \mathbb{R}^{m \times n} where a_{ij} is the payoff to player 1 when choosing pure strategy i against j, is computed as the bilinear form

p^T A q = \sum_{i=1}^m \sum_{j=1}^n p_i a_{ij} q_j.

This formulation, central to analyzing strategic uncertainty, allows players to evaluate outcomes probabilistically rather than deterministically.[16]

A classic illustration is the game of rock-paper-scissors, represented by the symmetric zero-sum payoff matrix

A = \begin{pmatrix}
0 & -1 & 1 \\
1 & 0 & -1 \\
-1 & 1 & 0
\end{pmatrix},

where rows and columns correspond to rock, paper, and scissors, respectively. If both players adopt the uniform mixed strategy p = q = (1/3, 1/3, 1/3)^T, the expected payoff is p^T A q = 0, ensuring neither can gain an advantage through deviation.

Mixed strategies enable the concept of security levels, quantifying a player's guaranteed outcome against an adversarial opponent. For player 1, the maximin security level is \max_p \min_q p^T A q, the highest payoff achievable regardless of player 2's response. For player 2, aiming to minimize player 1's payoff, the minimax security level is \min_q \max_p p^T A q, the lowest payoff they can force.[16]

These strategies are essential in games lacking pure-strategy equilibria, such as rock-paper-scissors or matching pennies, where committing to a single action allows exploitation by the opponent. By randomizing over pure strategies, a player renders their behavior unpredictable, securing the maximin value and preventing systematic losses.[17]
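The bilinear expected payoff can be evaluated directly. A small NumPy sketch (an illustration, not part of the cited sources) computes p^T A q for the uniform strategies discussed above:

```python
import numpy as np

# Rock-paper-scissors payoff matrix from the text.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])

# Uniform mixed strategies for both players.
p = np.full(3, 1 / 3)
q = np.full(3, 1 / 3)

# Expected payoff p^T A q = sum_{i,j} p_i a_ij q_j.
value = p @ A @ q
assert abs(value) < 1e-12  # neither player gains in expectation
```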
Formal statement

Von Neumann introduced the minimax theorem in his 1928 paper "Zur Theorie der Gesellschaftsspiele," published in Mathematische Annalen, where he established the existence of optimal strategies in finite two-player zero-sum games via a proof of the existence of saddle points for quasiconcave-quasiconvex payoff functions over strategy spaces.[1]

The theorem applies to finite zero-sum games in which two players, Player I (the maximizer) and Player II (the minimizer), have finite action sets of sizes m and n, respectively, and the game is represented by an m \times n payoff matrix A with real-valued entries a_{ij}, such that the payoff to Player I from actions i and j is a_{ij} and to Player II is -a_{ij}.[1]

Mixed strategies allow each player to randomize over their actions: Player I chooses a probability vector p \in \Delta^m (the m-dimensional simplex), and Player II chooses q \in \Delta^n. The expected payoff is then p^T A q. Von Neumann's minimax theorem states that there exist optimal mixed strategies p^* \in \Delta^m and q^* \in \Delta^n, and a value v \in \mathbb{R}, such that

\max_{p \in \Delta^m} \min_{q \in \Delta^n} p^T A q = \min_{q \in \Delta^n} \max_{p \in \Delta^m} p^T A q = p^{*T} A q^* = v.[1]

This equality implies the existence of a saddle point in mixed strategies: for any p \in \Delta^m and q \in \Delta^n, p^T A q^* \leq v \leq p^{*T} A q, so p^* and q^* form a Nash equilibrium in which neither player benefits from unilateral deviation.[18]

A classic example is the matching pennies game, where Player I wins 1 if both players choose heads (H) or both choose tails (T), and loses 1 otherwise; Player II wins the opposite. The payoff matrix for Player I is

A = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}.

The value is v = 0, achieved by the uniform mixed strategies p^* = q^* = \left( \frac{1}{2}, \frac{1}{2} \right), ensuring each player's expected payoff is zero regardless of the opponent's choice.[19]
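The matching-pennies value can be checked numerically. Because the minimizer's best response to any fixed mixed strategy is a pure strategy (the expected payoff is linear in q), it suffices to scan the row player's mixing probability; a small NumPy sketch (an illustration, not from the cited sources):

```python
import numpy as np

# Matching pennies: Player I wins 1 on a match, loses 1 on a mismatch.
A = np.array([[ 1, -1],
              [-1,  1]])

# Against a fixed mixed strategy (p, 1-p), the minimizer's best response
# is pure, so min_q p^T A q equals the smallest entry of p^T A.
ps = np.linspace(0.0, 1.0, 1001)
worst_case = np.array([(np.array([p, 1 - p]) @ A).min() for p in ps])

maximin = worst_case.max()           # best guaranteed payoff for Player I
best_p = ps[worst_case.argmax()]     # the strategy achieving it

assert abs(maximin) < 1e-9           # the value of the game is 0
assert abs(best_p - 0.5) < 1e-3      # attained at the uniform strategy
```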
Proof overview
Von Neumann's original proof of the minimax theorem, published in 1928, used an analytic argument establishing the existence of saddle points (an argument from which a fixed-point theorem can in fact be derived); a simpler version employing Brouwer's fixed-point theorem was provided by von Neumann in 1937, considering the compact convex sets of mixed strategies for both players and establishing the existence of a strategy pair from which neither player can unilaterally improve their payoff, thereby confirming the equality of the maximin and minimax values.[4]

In 1938, Jean Ville offered a more algebraic proof that reframed the theorem using the geometry of strategy simplices, invoking separating hyperplanes to show that if no arbitrage exists—meaning no strategy yields a strictly positive payoff against all opposing strategies—then the maximin equals the minimax.[20]

Following George Dantzig's 1947 development of linear programming, a modern proof leverages LP duality, a connection originally highlighted by von Neumann: the row player's problem is formulated as maximizing a value v subject to p^T A \geq v \mathbf{1}^T for a mixed strategy p in the probability simplex (where A is the payoff matrix and \mathbf{1} the all-ones vector), while the column player's dual minimizes v subject to A q \leq v \mathbf{1} for q in its simplex; the strong duality theorem ensures that the optimal values match, proving the minimax equality.[21]

A key insight unifying these approaches is the compactness of the finite-dimensional probability simplices of mixed strategies, combined with the continuity of the bilinear expected payoff, ensuring that the infima and suprema are attained as minima and maxima.[22]

These proofs hinge on the finiteness of the action sets in the game; generalizations beyond the finite case require further topological conditions such as compactness or convexity.[23]
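The LP-duality argument can be made concrete by solving both players' linear programs and observing that the optimal values coincide. A sketch assuming SciPy is available (the LP encoding below is one standard formulation, not a quotation of Dantzig's original):

```python
import numpy as np
from scipy.optimize import linprog  # assumes SciPy is installed

# Rock-paper-scissors payoff matrix (value 0, uniform optimal strategies).
A = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
m, n = A.shape

# Row player's LP: maximize v s.t. (p^T A)_j >= v for all j, p in the simplex.
# Variables are (p_1, ..., p_m, v); linprog minimizes, so the objective is -v.
res_row = linprog(
    c=np.r_[np.zeros(m), -1.0],
    A_ub=np.hstack([-A.T, np.ones((n, 1))]),      # v - (p^T A)_j <= 0
    b_ub=np.zeros(n),
    A_eq=np.r_[np.ones(m), 0.0].reshape(1, -1),   # sum_i p_i = 1
    b_eq=[1.0],
    bounds=[(0, None)] * m + [(None, None)],
)

# Column player's dual LP: minimize w s.t. (A q)_i <= w for all i, q in the simplex.
res_col = linprog(
    c=np.r_[np.zeros(n), 1.0],
    A_ub=np.hstack([A, -np.ones((m, 1))]),        # (A q)_i - w <= 0
    b_ub=np.zeros(m),
    A_eq=np.r_[np.ones(n), 0.0].reshape(1, -1),   # sum_j q_j = 1
    b_eq=[1.0],
    bounds=[(0, None)] * n + [(None, None)],
)

v_row = -res_row.fun   # maximin value
v_col = res_col.fun    # minimax value
assert abs(v_row - v_col) < 1e-8   # strong duality: the two values coincide
assert abs(v_row) < 1e-8           # the value of rock-paper-scissors is 0
```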
Generalizations to functions
Concave-convex functions
The minimax theorem extends beyond finite strategy spaces to continuous functions defined on compact convex sets, particularly those that are concave in one variable and convex in the other. Consider a function f: X \times Y \to \mathbb{R}, where X and Y are compact convex subsets of topological vector spaces. The function f is assumed to be concave in its first argument x \in X for each fixed y \in Y and convex in its second argument y \in Y for each fixed x \in X, with continuity ensuring well-behaved extrema.[24]

Under these conditions, the minimax theorem states that

\sup_{x \in X} \inf_{y \in Y} f(x, y) = \inf_{y \in Y} \sup_{x \in X} f(x, y).

Moreover, by the compactness of X and Y and the continuity of f, the supremum over x and the infimum over y are attained, yielding maximum and minimum values, respectively. This equality, combined with concavity-convexity, establishes the existence of a saddle point (x^*, y^*) satisfying f(x, y^*) \leq f(x^*, y^*) \leq f(x^*, y) for all x \in X, y \in Y, so that neither player can unilaterally improve their outcome.[24]

In the context of zero-sum games, this formulation bridges finite matrix games and infinite settings by interpreting f(x, y) as the expected payoff to the maximizing player, with x and y representing mixed strategies over continuous or infinite action spaces.
A key example is the bilinear payoff f(x, y) = x^\top A y, where A is a payoff operator extended to infinite dimensions and x, y lie in compact convex sets such as probability simplices; here, bilinearity ensures concavity in x and convexity in y, directly invoking the duality.[24]

For a concrete illustration in continuous games, the theorem applies to the continuous Colonel Blotto game, where strategies are distributions over an interval [0, 1] rather than discrete points; the payoff is determined by integrals over these strategy spaces, and the theorem guarantees that the value of the game equals the interchange of suprema and infima, attained at optimal mixed strategies.[25]

This generalization traces its roots to extensions of John von Neumann's 1928 finite-game theorem, building on the bilinear case discussed in von Neumann and Morgenstern's 1944 Theory of Games and Economic Behavior, with early general formulations for concave-convex functions appearing in the optimization and game theory literature of the early 1950s, such as Glicksberg (1952) and Fan (1953), formalizing saddle-point equilibria in broader models.[24][25]
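A numerical illustration of the concave-convex case: the function f(x, y) = -x^2 + xy + y^2 (a made-up example, not one from the sources cited here) is concave in x and convex in y on X = Y = [-1, 1], and a grid search shows the two iterated optima coinciding at the saddle point (0, 0):

```python
import numpy as np

# f(x, y) = -x^2 + x*y + y^2: concave in x (d^2f/dx^2 = -2),
# convex in y (d^2f/dy^2 = 2), on the compact squares X = Y = [-1, 1].
def f(x, y):
    return -x**2 + x * y + y**2

grid = np.linspace(-1.0, 1.0, 401)
F = f(grid[:, None], grid[None, :])   # F[i, j] = f(x_i, y_j)

maximin = F.min(axis=1).max()   # sup_x inf_y f
minimax = F.max(axis=0).min()   # inf_y sup_x f

# Both iterated optima equal f(0, 0) = 0 at the saddle point.
assert abs(maximin - minimax) < 1e-9
assert abs(maximin) < 1e-9
```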
Sion's theorem
Sion's minimax theorem, formulated by Maurice Sion in 1958, generalizes von Neumann's minimax theorem by relaxing the requirements on the payoff function from concavity-convexity to quasi-concavity-quasi-convexity, thereby extending applicability to a broader class of functions and spaces.[26] The theorem states that for a real-valued function f: X \times Y \to \mathbb{R}, where X is a compact convex subset of a topological vector space and Y is a convex subset of another topological vector space, if f is quasi-concave and upper semi-continuous in x for each fixed y \in Y, and quasi-convex and lower semi-continuous in y for each fixed x \in X, then

\inf_{y \in Y} \max_{x \in X} f(x, y) = \max_{x \in X} \inf_{y \in Y} f(x, y).[26]

Quasi-concavity of f in x means that for each fixed y \in Y and real number c, the upper level set \{x \in X : f(x, y) \geq c\} is convex, while quasi-convexity in y means that for each fixed x \in X and c, the lower level set \{y \in Y : f(x, y) \leq c\} is convex.[26] The key conditions are the compactness of X, which ensures the maxima over X are attained, and the convexity of both X and Y, which supports the quasi-concavity and quasi-convexity properties; the semi-continuity assumptions ensure the minimax equality holds under these topological hypotheses, though the infimum over Y may not be attained as a minimum if Y is not compact.[26]

Published in the Pacific Journal of Mathematics, Sion's work unifies and extends earlier results, such as Ky Fan's 1953 minimax theorem for quasi-concave-convex functions on compact sets, by employing a novel approach based on the Knaster-Kuratowski-Mazurkiewicz theorem rather than fixed-point or separating-hyperplane methods.[26][27] Unlike the stricter concave-convex case, which requires the function to be concave in one variable and convex in the other—conditions that in many game-theoretic contexts reduce to bilinearity—Sion's weaker assumptions accommodate non-smooth functions whose level sets remain convex, broadening applications to infinite-dimensional settings.[26]

A representative application arises in infinite zero-sum games with continuous action spaces, where the payoff function may not be bilinear but is quasi-concave in one player's strategies and quasi-convex in the other's, allowing the theorem to guarantee the existence of a value for the game under compactness of one strategy set.[26]
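To see the relaxation at work, consider f(x, y) = x^3 - y^3 on X = Y = [0, 1] (a made-up example for illustration): x^3 is not concave on [0, 1], but it is monotone and hence quasi-concave, and -y^3 is likewise quasi-convex in y, so Sion's equality still holds. A grid-search sketch:

```python
import numpy as np

# f(x, y) = x^3 - y^3 on [0, 1] x [0, 1]:
# monotone (hence quasi-concave) but not concave in x,
# monotone (hence quasi-convex) but not convex in y.
def f(x, y):
    return x**3 - y**3

grid = np.linspace(0.0, 1.0, 501)
F = f(grid[:, None], grid[None, :])   # F[i, j] = f(x_i, y_j)

maximin = F.min(axis=1).max()   # max_x inf_y f, attained at x = 1 (inf at y = 1)
minimax = F.max(axis=0).min()   # inf_y max_x f, attained at y = 1 (max at x = 1)

assert abs(maximin - minimax) < 1e-9   # Sion's equality holds: both equal 0
```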
Applications
In optimization
The minimax theorem establishes a profound connection to linear programming (LP) duality, where every LP problem admits a primal maximization form \max \{ c^T x \mid Ax \leq b, x \geq 0 \} and a dual minimization form \min \{ b^T y \mid A^T y \geq c, y \geq 0 \}.[28] Under appropriate conditions, the theorem interprets the dual as an adversarial game: the primal player maximizes payoff subject to constraints, while the dual player minimizes by choosing "worst-case" constraint multipliers, ensuring strong duality, in which the primal maximum equals the dual minimum.[29] This game-theoretic lens provides a proof of strong duality via the minimax equality for finite zero-sum games, framing constraints as strategic choices.[28]

Historically, George Dantzig's simplex method, introduced in 1947, implicitly leverages this minimax framework to solve for game values in zero-sum games by reformulating them as LPs.[30] Dantzig demonstrated the equivalence between solving two-person zero-sum games and LPs, allowing the simplex algorithm to compute optimal mixed strategies and values efficiently.[31] For instance, in a 2 \times 2 zero-sum game with payoff matrix A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, the row player's optimal value v solves the LP \max v subject to p_1 a_{11} + p_2 a_{21} \geq v, p_1 a_{12} + p_2 a_{22} \geq v, p_1 + p_2 = 1, p \geq 0, where p = (p_1, p_2) is the mixed strategy.[32]

In robust optimization, problems are often formulated as \min_x \max_{u \in \mathcal{U}} f(x, u), where \mathcal{U} is an uncertainty set and u represents adversarial perturbations.[33] Sion's minimax theorem guarantees the equality \min_x \max_{u \in \mathcal{U}} f(x, u) = \max_{u \in \mathcal{U}} \min_x f(x, u) under quasi-convex-quasi-concave conditions on f, enabling tractable reformulations and duality-based solutions for robust counterparts of LPs or convex programs.[34] This ensures computational guarantees for worst-case performance in uncertain environments, such as supply chain design or portfolio optimization.[35]

Extensions of the minimax theorem to nonlinear programming arise through saddle-point formulations, in which a function L(x, y) is minimized over x and maximized over y, yielding \min_x \max_y L(x, y) = \max_y \min_x L(x, y) at saddle points under convexity-concavity.[36] Penalty methods further incorporate this by constructing exact minimax penalties, such as \min_x \max_y \{ f(x) + \rho \|c(x) - y\|^2 \}, where \rho > 0 enforces the constraints c(x) = 0 without altering the original solution set for sufficiently large \rho.[37] These approaches generalize duality to nonconvex settings, facilitating numerical solutions via alternating optimization or extragradient methods.[38]
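For 2 \times 2 games without a pure-strategy saddle point, the row player's LP admits a well-known closed-form solution; a minimal sketch (the helper name `solve_2x2` is illustrative, and the formula assumes the denominator is nonzero, i.e. no pure saddle point):

```python
# Closed-form value and optimal row strategy for a 2x2 zero-sum game
# with no pure-strategy saddle point (standard formula; a sketch,
# not a general-purpose game solver).
def solve_2x2(a11, a12, a21, a22):
    denom = a11 + a22 - a12 - a21          # nonzero when no pure saddle point
    p1 = (a22 - a21) / denom               # row player's probability on row 1
    v = (a11 * a22 - a12 * a21) / denom    # value of the game
    return p1, v

# Matching pennies: A = [[1, -1], [-1, 1]].
p1, v = solve_2x2(1, -1, -1, 1)
assert p1 == 0.5 and v == 0.0   # uniform mixing, game value 0
```

The same numbers can be recovered by solving the LP stated above with any LP solver; the closed form simply reads off its optimum.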
In decision theory and AI
In decision theory, the minimax criterion addresses decision-making under uncertainty by selecting the action that maximizes the minimum possible payoff, providing a robust strategy for risk-averse agents facing adversarial or unknown outcomes. This approach, formalized by Abraham Wald as the maximin rule in statistical decision functions, prioritizes worst-case scenarios to minimize potential losses.[39] Von Neumann's minimax theorem extends this criterion to mixed strategies in zero-sum games, guaranteeing the existence of equilibrium values and optimal randomized policies that align with expected utility maximization under complete uncertainty.

In economics, the minimax theorem underpins equilibrium analysis in markets modeled as zero-sum interactions, influencing the Arrow-Debreu general equilibrium model by supplying foundational tools for proving existence through duality and fixed-point theorems analogous to game-theoretic equilibria. Debreu explicitly referenced von Neumann's minimax framework alongside activity analysis in developing the model's formal structure, enabling rigorous treatment of competitive pricing and resource allocation.[40]

In artificial intelligence, the minimax algorithm operationalizes the theorem for adversarial search in two-player zero-sum games, recursively computing optimal moves by maximizing one's own score while assuming the opponent minimizes it. Pioneered in computer chess by Claude Shannon's 1950 analysis, which modeled chess as a vast minimax search tree with an estimated game-tree complexity of 10^{120} (the Shannon number), the algorithm gained efficiency through alpha-beta pruning, which eliminates irrelevant branches to reduce evaluation time from exponential to near-square-root levels.
This enabled early AI systems to compete in tactical domains like chess from the 1950s onward.[41]

The minimax principle also informs reinforcement learning, particularly value iteration for zero-sum Markov games, where agents learn policies approximating Nash equilibria via minimax updates to Q-values, balancing exploration and exploitation in multi-agent environments.[42] In modern applications, systems like AlphaGo (2016) integrate Monte Carlo tree search (MCTS) as a scalable approximation to full minimax, using neural networks to guide simulations and evaluate positions in high-branching-factor games like Go, achieving superhuman performance without exhaustive tree expansion.[43] Another prominent application is generative adversarial networks (GANs), introduced in 2014, in which a generator and discriminator engage in a minimax game with the objective \min_G \max_D V(D, G) to improve generative modeling.[44]

Despite these advances, minimax's computational complexity—scaling exponentially with depth due to high branching factors in real games—renders exact solutions infeasible for large domains, necessitating approximations such as MCTS to focus simulations on promising paths and mitigate the "curse of dimensionality."
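The recursive search described above can be sketched compactly. A minimal minimax with alpha-beta pruning over a toy game tree (nested lists stand in for game states; this is an illustration of the technique, not Shannon's or any production engine's implementation):

```python
import math

def minimax(state, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning over a game tree given as nested
    lists, where leaves are numeric payoffs to the maximizing player."""
    if not isinstance(state, list):      # leaf node: return its payoff
        return state
    if maximizing:
        best = -math.inf
        for child in state:
            best = max(best, minimax(child, False, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:            # minimizer will avoid this branch
                break
        return best
    else:
        best = math.inf
        for child in state:
            best = min(best, minimax(child, True, alpha, beta))
            beta = min(beta, best)
            if beta <= alpha:            # maximizer will avoid this branch
                break
        return best

# Depth-2 tree: the maximizer moves first, the minimizer replies.
tree = [[3, 5], [2, 9], [1, 7]]
result = minimax(tree, True)
assert result == 3   # max over min(3,5), min(2,9), min(1,7)
```

Pruning leaves the result unchanged: once the first subtree guarantees the maximizer 3, subtrees whose minimizer can already force less than 3 are cut off without full evaluation.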