Zero-sum game
A zero-sum game is a concept in game theory representing a competitive situation between two or more players where the total gains and losses sum to zero, such that one player's benefits come exclusively at the expense of equivalent losses to the others, with no net creation or destruction of value.[1] This framework models pure conflict, contrasting with non-zero-sum games where cooperation can generate mutual benefits.[2]

The term originated with mathematician John von Neumann, who introduced the formal analysis of zero-sum games in his 1928 paper "Zur Theorie der Gesellschaftsspiele," proving the minimax theorem for two-person zero-sum games.[3] The minimax theorem states that in such games, there exists an optimal mixed strategy for each player, ensuring a game value v where the maximizing player can guarantee at least v and the minimizing player can guarantee at most v, resolving strategic uncertainty through equilibrium.[4] Von Neumann's work laid the foundation for modern game theory, later expanded in his 1944 book Theory of Games and Economic Behavior co-authored with Oskar Morgenstern, which applied these ideas to economics and decision-making under uncertainty.[5]

Zero-sum games are exemplified by classic contests like chess or rock-paper-scissors, where outcomes are strictly win-lose, and simplified models of poker that abstract bluffing and betting as zero-sum interactions.[1] Beyond recreation, they inform real-world applications in economics (e.g., certain models of international trade), military strategy (e.g., modeling deterrence during the Cold War), and finance (e.g., certain trading competitions), though many practical situations deviate toward non-zero-sum dynamics due to the potential for joint gains.[2][6] Extensions to multi-player or imperfect-information settings continue to drive research in algorithmic game theory and artificial intelligence.[7]

Fundamentals
Definition
In game theory, a zero-sum game models situations involving multiple decision-makers, or players, each selecting from a set of available actions, known as strategies, to maximize their own outcomes, referred to as payoffs, which are typically numerical values representing gains or losses.[8] These elements—players, strategies, and payoffs—form the foundational components of game-theoretic analysis, assuming rational behavior where players aim to optimize their interests based on anticipated actions of others.[2] A zero-sum game is formally defined as a strategic interaction among players where the total payoffs sum to zero, meaning the gains of one player exactly equal the losses of the others, creating a strictly competitive environment with no net creation or destruction of value.[9] In the standard two-player case, this is represented in normal form by a payoff matrix A = (a_{ij}) \in \mathbb{R}^{m \times n}, where m and n denote the number of pure strategies available to the row player and column player, respectively; here, a_{ij} is the payoff received by the row player when the row player selects pure strategy i and the column player selects pure strategy j, while the column player's payoff is simultaneously -a_{ij}.[10] Players may also employ mixed strategies, which are probability distributions over their pure strategies, allowing for randomized play to achieve expected payoffs in simultaneous-move scenarios.[11] The concept of zero-sum games originated with John von Neumann's seminal 1928 paper, which analyzed such games in the context of poker and broader strategic interactions, laying the groundwork for modern game theory.[12]

Key Properties
Zero-sum games represent a special case of constant-sum games, where the total payoff across all players sums to zero for every possible outcome. In general constant-sum games, the payoffs sum to a fixed constant C; to reduce such a game to zero-sum form without altering strategic incentives, subtract C/2 from each player's payoffs (for two players) or adjust proportionally for more players, ensuring the sum becomes zero while preserving the relative ordering of strategies. This transformation, given by u_i' = u_i - c_i where \sum c_i = C, maintains the game's equilibrium structure because adding constants to payoffs does not change optimal strategies or Nash equilibria.[13][14] The adversarial nature of zero-sum games arises from the strict opposition of players' interests, where one player's gain directly equals another's loss, eliminating opportunities for mutual benefit or cooperation. Unlike non-zero-sum games, where joint strategies might increase total welfare, zero-sum settings force players to minimize opponents' payoffs to maximize their own, framing interactions as pure conflicts with no Pareto improvements possible. This opposition ensures that rational play involves safeguarding against exploitation, often leading to defensive strategies.[13][15] In payoff matrices for zero-sum games, a saddle point occurs at an entry a_{ij} that is the minimum value in its row (so the column player cannot lower the row player's payoff by switching columns) and the maximum value in its column (so the row player cannot raise it by switching rows). This point represents a pure strategy equilibrium, satisfying the condition that the row player's maximin value equals the column player's minimax value: \max_i \min_j a_{ij} = \min_j \max_i a_{ij}.
If a saddle point exists, both players can commit to the corresponding pure strategies without regret, as deviations worsen their expected payoff.[13][16] The value of a zero-sum game, denoted v, is the expected payoff to the maximizing player (equivalently, the negative of the minimizing player's expected payoff) when both play optimally, guaranteed by the minimax theorem for finite two-player games. This value bounds the payoff: the maximizing player cannot secure more than v against optimal opposition, but can guarantee at least v with a security (maximin) strategy. In matrix terms, v lies between the maximin and minimax, coinciding with both at saddle points or mixed strategy equilibria.[17][4] In two-player symmetric zero-sum games, the payoff matrix is skew-symmetric (A = -A^T), meaning players share identical strategy sets and each player's payoff matrix is the negated transpose of the other's, making optimal strategies interchangeable between players. Such symmetry implies that if a strategy profile (s, t) is optimal, so is (t, s), and the game's value is zero, as no player holds an inherent advantage. This structure simplifies analysis, often yielding uniform mixed strategies over symmetric supports.[18][17]

Solving Methods
Two-Player Games
In two-player zero-sum games, the minimax theorem provides the foundational result for optimal play. Formulated by John von Neumann in 1928, the theorem states that for any finite two-player zero-sum game with payoff matrix A = (a_{ij}), where rows represent player I's pure strategies and columns player II's, there exists a value v such that \max_p \min_q p^T A q = \min_q \max_p p^T A q = v, with p and q denoting mixed strategies (probability distributions over pure strategies).[12] This equality guarantees that player I can secure at least v against any response by player II, while player II can hold player I to at most v. Von Neumann's proof relies on Brouwer's fixed-point theorem applied to the space of mixed strategies, establishing the existence of an equilibrium point where the maximin and minimax values coincide without constructing explicit strategies.[12] The argument involves showing that the continuous function mapping strategy profiles to expected payoffs has a fixed point corresponding to the game's value. Mixed strategies are essential when pure strategies do not suffice, allowing players to randomize over their pure strategies to achieve the value v. A mixed strategy for player I is a probability vector p = (p_i) with \sum_i p_i = 1 and p_i \geq 0, and similarly q = (q_j) for player II; the expected payoff is then E[p, q] = \sum_i \sum_j p_i q_j a_{ij} = p^T A q.[19] Optimal mixed strategies p^* and q^* satisfy p^{*T} A q^* = v, ensuring neither player can improve unilaterally. Pure strategy equilibria occur via saddle points, where a pair of pure strategies (i^*, j^*) satisfies a_{i j^*} \leq a_{i^* j^*} \leq a_{i^* j} for all i, j, making v = a_{i^* j^*}.[19] Such points exist if the payoff matrix contains an entry that is simultaneously the minimum of its row and the maximum of its column, but randomization is necessary in non-saddle-point games like matching pennies to prevent exploitation.
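The maximin/minimax gap and the role of randomization can be illustrated with a short pure-Python sketch for matching pennies (the matrix layout and helper name are illustrative, not taken from the cited sources):

```python
from fractions import Fraction

# Matching pennies for the row player: rows and columns are (heads, tails).
A = [[1, -1],
     [-1, 1]]

# Pure-strategy security levels.
maximin = max(min(row) for row in A)                             # row player's guarantee
minimax = min(max(A[i][j] for i in range(2)) for j in range(2))  # column player's cap

# No saddle point: the two security levels do not meet.
assert (maximin, minimax) == (-1, 1)

def expected_payoff(p, q):
    """Expected payoff p^T A q for mixed strategies p (row) and q (column)."""
    return sum(p[i] * q[j] * A[i][j] for i in range(2) for j in range(2))

# The uniform mix earns exactly 0 against each pure column strategy, so the
# game's value v = 0 lies strictly between maximin and minimax.
half = Fraction(1, 2)
assert all(expected_payoff([half, half], [1 - j, j]) == 0 for j in range(2))
```

Because every counter-strategy yields the same expected payoff against the uniform mix, neither player can exploit the other, which is exactly the equalization property optimal mixed strategies must satisfy.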
Two-player zero-sum games can be solved by formulating the search for optimal mixed strategies as a linear program (LP). For player I (maximizer), the primal LP is to maximize v subject to \sum_i p_i a_{ij} \geq v for all j, \sum_i p_i = 1, and p_i \geq 0; the dual for player II minimizes v subject to \sum_j a_{ij} q_j \leq v for all i, \sum_j q_j = 1, and q_j \geq 0. By strong duality of LPs, the optimal values coincide at the game's value v, yielding optimal p^* and q^*. Finite two-player zero-sum games are solvable in polynomial time, as the LP formulation has size polynomial in the number of pure strategies, and LPs are solvable in polynomial time via interior-point methods.

Multi-Player Games
In multi-player zero-sum games, where the total payoff sums to zero across all participants, the extension of two-player solution concepts encounters significant challenges due to the increased strategic complexity and potential for non-cooperative dynamics among more than two agents. Unlike the two-player case, where von Neumann's minimax theorem guarantees a unique value and optimal strategies, multi-player settings lack such a universal equilibrium structure, often resulting in multiple Nash equilibria with varying payoff distributions or even cycles in best-response dynamics that prevent convergence to a stable outcome.[20] A key illustration of this limitation arises in three-player zero-sum games, where the minimax theorem does not hold in general. Consider a polymatrix representation with players A, B, and C, where interactions are pairwise: A plays a two-action game against B (heads or tails), and B plays against C similarly, with payoffs structured such that matching heads yields +1 for the row player and -1 for the column, while mismatching reverses this, and the overall game is zero-sum. The payoff tensor for this setup reveals non-unique Nash equilibria; for instance, one equilibrium assigns zero payoffs to all players under mixed strategies (A plays heads, B mixes 50-50, C plays heads), while another yields payoffs of -1 for A, 0 for B, and +1 for C (A heads, B tails, C heads). This demonstrates indeterminate outcomes, as no single value exists that all players can guarantee against joint deviations by others.[20] To address payoff allocation in cooperative interpretations of multi-player zero-sum games, the Shapley value provides a fair division based on each player's average marginal contribution to coalitions. 
Defined for a cooperative game (N, v) with player set N of size n and characteristic function v (where v(N) = 0 in zero-sum games), the Shapley value for player i is given by \phi_i(v) = \frac{1}{n!} \sum_{\pi \in \Pi} \left( v(P_i^\pi \cup \{i\}) - v(P_i^\pi) \right), where \Pi denotes all n! permutations of N, and P_i^\pi is the set of players preceding i in permutation \pi. This axiomatic solution (satisfying efficiency, symmetry, linearity, and null-player properties) ensures payoffs sum to zero and quantifies individual power, though it assumes transferable utility and may not align with non-cooperative equilibria. Coalition formation offers another approach to simplifying multi-player zero-sum games, where subsets of players band together to act as a single entity, effectively reducing the game to a two-"superplayer" zero-sum contest between the coalition S and its complement \bar{S}. The value v(S) of coalition S is then the minimax value of the induced two-player game with payoff \sum_{i \in S} u_i, where u_i are individual utilities. Stability requires that no subcoalition deviates profitably, often analyzed via the core—the set of imputations x satisfying \sum_{i \in T} x_i \geq v(T) for all T \subseteq N—but in essential zero-sum games (v(S) + v(\bar{S}) = 0 and v(S) > 0 for some S), the core is empty, indicating inherent instability and vulnerability to breakdowns.[21] For computational resolution, multi-player zero-sum games are often represented in extensive form to capture sequential moves and information sets, enabling approximation algorithms. Fictitious play, where each player iteratively best-responds to the empirical frequency of others' past actions, converges to Nash equilibria in certain multi-player subclasses like zero-sum polymatrix games or "one-against-all" structures, though it may cycle in general cases.
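The permutation formula for the Shapley value can be evaluated by brute force for small games. The sketch below uses a hypothetical three-player zero-sum characteristic function (any pair extracts one unit from the remaining singleton); the function and player labels are illustrative, not from the cited sources:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

# Hypothetical 3-player zero-sum characteristic function: any pair extracts
# one unit from the remaining singleton, so v(S) + v(N \ S) = 0 for all S.
N = (1, 2, 3)

def v(S):
    size = len(set(S))
    if size in (0, 3):
        return 0
    return -1 if size == 1 else 1

def shapley(i):
    # phi_i = average marginal contribution of i over all n! player orderings.
    total = 0
    for order in permutations(N):
        preceding = order[:order.index(i)]
        total += v(preceding + (i,)) - v(preceding)
    return Fraction(total, factorial(len(N)))

values = [shapley(i) for i in N]
# By symmetry every player's Shapley value is 0, and they sum to v(N) = 0.
assert all(phi == 0 for phi in values) and sum(values) == v(N)
```

The zero allocation reflects the game's symmetry: no player has more coalitional power than another, so the efficiency axiom forces the entire (zero) surplus to be split equally.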
Counterfactual regret minimization (CFR), extended from two-player imperfect-information settings, approximates equilibria by minimizing counterfactual regrets at information sets in multi-player extensive games, with variants like Monte Carlo CFR providing scalable self-play learning despite lacking convergence guarantees in non-two-player zero-sum contexts.[22][23] The exact solution of general n-player zero-sum games is computationally intractable, with determining the value or optimal strategies proven NP-hard even in restricted forms like extensive games with imperfect recall, as established in foundational complexity results from the early 1990s building on observations dating to the 1950s. For n \geq 3, computing Nash equilibria is PPAD-complete in normal-form representations, underscoring the shift from polynomial-time solvability in two players to exponential challenges in multi-player settings.[24][25]

Examples and Applications
Classic Examples
One of the simplest and most illustrative zero-sum games is rock-paper-scissors, a symmetric two-player game where each player simultaneously chooses one of three actions: rock, paper, or scissors. The payoff structure is such that rock beats scissors (+1 for the rock player, -1 for the scissors player), scissors beats paper (+1, -1), and paper beats rock (+1, -1), with ties resulting in 0 for both. This can be represented by the following payoff matrix for Player 1 (Player 2's payoffs are the negative):

| Player 1 \ Player 2 | Rock | Paper | Scissors |
|---|---|---|---|
| Rock | 0 | -1 | +1 |
| Paper | +1 | 0 | -1 |
| Scissors | -1 | +1 | 0 |
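The matrix above can be checked directly in a few lines of Python: it is skew-symmetric, so by the symmetric-game property discussed under Key Properties the value is 0, and the uniform mix achieves exactly that value against every pure response (a minimal sketch; the variable names are illustrative):

```python
from fractions import Fraction

# Rock-paper-scissors payoffs for Player 1; rows/columns = (rock, paper, scissors).
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]
n = len(A)

# The matrix is skew-symmetric (A = -A^T), so the game's value is 0.
assert all(A[i][j] == -A[j][i] for i in range(n) for j in range(n))

# The uniform mix (1/3, 1/3, 1/3) yields expected payoff 0 against every
# pure counter-strategy, making it unexploitable.
third = Fraction(1, 3)
for j in range(n):
    assert sum(third * A[i][j] for i in range(n)) == 0
```

Any deviation from the uniform mix, by contrast, can be exploited: overweighting rock, for example, lets the opponent profit by playing paper more often.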
A second classic example is matching pennies, in which each player simultaneously shows heads or tails: Player 1 wins +1 if the choices match and loses 1 if they differ, with Player 2's payoffs again the negative:

| Player 1 \ Player 2 | Heads | Tails |
|---|---|---|
| Heads | +1 | -1 |
| Tails | -1 | +1 |
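For matching pennies, the linear-programming characterization from the Solving Methods section can be verified directly: the uniform strategies are feasible for the primal and dual LPs at the same value, so by LP duality both are optimal (a minimal sketch; the candidate strategies and value are stated, not computed, here):

```python
from fractions import Fraction

# Matching pennies matrix for Player 1, rows/columns = (heads, tails).
A = [[1, -1],
     [-1, 1]]
half = Fraction(1, 2)
p = [half, half]   # candidate optimal row strategy
q = [half, half]   # candidate optimal column strategy
v = 0              # claimed game value

# Primal feasibility: p guarantees at least v against every column j.
assert all(sum(p[i] * A[i][j] for i in range(2)) >= v for j in range(2))

# Dual feasibility: q holds Player 1 to at most v for every row i.
assert all(sum(A[i][j] * q[j] for j in range(2)) <= v for i in range(2))

# A feasible primal strategy and a feasible dual strategy meeting at the same
# value certify each other optimal by LP duality, so the game's value is 0.
```

The same feasibility check works as an optimality certificate for any matrix game: exhibit a pair of mixed strategies and a common value satisfying both constraint sets, and duality does the rest.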