
Normal-form game

A normal-form game, also known as a strategic-form game, is a fundamental representation in game theory that models strategic interactions among a set of players who simultaneously select actions from their respective strategy sets, with outcomes and payoffs determined by the resulting strategy profile. Formally, it is defined as a tuple \Gamma = (N, (S_i)_{i \in N}, (u_i)_{i \in N}), where N is the set of players, S_i is the strategy set for each player i, and u_i: \prod_{i \in N} S_i \to \mathbb{R} is the payoff (or utility) function for player i, mapping each strategy profile to a real-valued payoff reflecting preferences over outcomes. This structure assumes complete information, simultaneous moves, and rational players seeking to maximize their expected payoffs, and it is often represented in matrix form for two-player games to visualize strategy combinations and associated payoffs. The concept originated with John von Neumann's 1928 work on the minimax theorem for zero-sum games and was rigorously formalized in his 1944 collaboration with Oskar Morgenstern in Theory of Games and Economic Behavior, which established the foundations of game theory by introducing von Neumann-Morgenstern utility functions to handle cardinal preferences via lotteries over outcomes. Unlike the extensive-form representation, which captures sequential moves and information sets via game trees, the normal form abstracts away timing and focuses on strategic interdependence, making it suitable for analyzing static, one-shot interactions. Key solution concepts in normal-form games include the Nash equilibrium, where no player benefits from unilaterally deviating given others' strategies, and dominated strategies, which can be eliminated to simplify analysis. Normal-form games are pivotal for studying phenomena like cooperation and conflict in diverse fields, including economics (e.g., Cournot oligopoly models for quantity competition), biology (e.g., evolutionarily stable strategies), and engineering (e.g., resource allocation in energy markets). Classic examples include the Prisoner's Dilemma, illustrating the tension between individual and collective rationality, and zero-sum games like matching pennies, where one player's gain equals another's loss. While finite games are guaranteed at least one Nash equilibrium, possibly in mixed strategies, by Nash's 1951 theorem, the form's limitations, such as ignoring dynamics or incomplete information, have spurred extensions like Bayesian games.

Introduction

Definition

A normal-form game, also known as a strategic-form game, is a core model in game theory that describes strategic interactions among a set of rational players who select actions simultaneously, without observing others' choices, to determine outcomes based on payoffs. This representation captures static decision-making under interdependence, where each player's payoff depends on the collective strategy profile chosen by all participants. Key characteristics of a normal-form game include complete information, where all players possess full knowledge of the game structure, including available strategies and payoff mappings; strategy sets that may be finite or infinite, accommodating discrete or continuous choices; and a non-cooperative setting, in which players act independently to maximize their individual utilities without enforceable commitments. These features distinguish normal-form games from dynamic models like extensive-form games, which incorporate sequential moves and information revelation. Unlike cooperative games, which allow for binding agreements and coalition formation to achieve joint outcomes, normal-form games emphasize individual rationality and potential self-interested conflicts, precluding enforceable side payments or contracts. The foundational framework was established by John von Neumann and Oskar Morgenstern, who formalized it as a tool for analyzing economic and strategic behavior. In standard notation, a normal-form game involves n players, indexed by i = 1 to n, each with a strategy set S_i denoting the possible actions available to player i. Payoff functions then assign real-valued utilities to each combination of strategies across all players.

Historical Context

The concept of the normal-form game traces its origins to early 20th-century efforts to formalize strategic decision-making mathematically. In 1921, French mathematician Émile Borel introduced ideas related to the minimax principle in the context of games like poker, laying preliminary groundwork for analyzing strategic interactions through probabilistic strategies, though without a fully rigorous proof. This work anticipated the structured representation of player choices and outcomes but remained focused on specific game types. John von Neumann advanced these ideas significantly in his 1928 paper "Zur Theorie der Gesellschaftsspiele," where he proved the minimax theorem for two-person zero-sum games and introduced the normal form as a matrix-based representation of strategies and payoffs, establishing a foundational framework for game theory. Building on this, von Neumann collaborated with economist Oskar Morgenstern to publish Theory of Games and Economic Behavior in 1944, which expanded the normal-form approach to broader economic contexts, incorporating utility theory and demonstrating its applicability beyond pure zero-sum scenarios. Following World War II, the normal-form game gained prominence in economics and operations research, driven by wartime applications in military strategy and postwar efforts to model economic competition and resource allocation. In 1950 and 1951, John Nash extended the framework to non-zero-sum games through his development of the Nash equilibrium concept, which identified stable strategy profiles in normal-form representations where no player benefits from unilateral deviation. Later refinements, such as those by Jean-François Mertens in the 1980s, addressed stability issues in these equilibria, providing criteria for strategically robust outcomes in normal-form games.

Components and Formulation

Players and Strategies

In a normal-form game, the players form a finite set of rational decision-makers, typically denoted by N = \{1, 2, \dots, n\}, where each player seeks to maximize their own payoff given the actions of others. This setup assumes simultaneous decision-making without binding commitments, distinguishing it from extensive-form representations. Each player i \in N has a set of pure strategies S_i, consisting of all possible complete plans of action available to them in the game. A pure strategy specifies a definite action for player i, such as selecting a specific move without randomization. The collection of pure strategies across all players forms the strategy space S = \prod_{i \in N} S_i, and a strategy profile s = (s_1, \dots, s_n) represents a specific combination where s_i \in S_i for each i. To capture uncertainty or imperfect information, players may employ mixed strategies, which are probability distributions over their pure strategies. For player i, a mixed strategy \sigma_i: S_i \to [0,1] assigns probabilities such that \sum_{s_i \in S_i} \sigma_i(s_i) = 1, allowing randomization across actions. A mixed strategy profile is then \sigma = (\sigma_1, \dots, \sigma_n), where each \sigma_i is independent of the others, enabling analysis of equilibria that may not exist in pure strategies alone. Normal-form games often feature finite strategy spaces, particularly discrete choices, as in rock-paper-scissors, where each player's S_i = \{\text{rock}, \text{paper}, \text{scissors}\}, yielding |S| = 3^n possible pure strategy profiles for n players. However, strategy spaces can be infinite, such as compact intervals like [0,1] in the tragedy of the commons, where players choose extraction levels continuously, or unbounded sets like [0, \infty) in Cournot competition, where firms select output quantities. Finite cases guarantee the existence of mixed-strategy Nash equilibria, while infinite spaces require additional compactness or continuity assumptions for similar results.
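To make these objects concrete, the following short Python sketch (an illustration added here, not taken from the cited sources) enumerates the pure strategy space of two-player rock-paper-scissors as a Cartesian product and checks that a uniform mixed strategy is a valid probability distribution; the names S, profiles, and sigma_1 are ad hoc choices.

```python
from itertools import product

# Pure strategy sets for two players in rock-paper-scissors (a finite game).
S = {
    1: ("rock", "paper", "scissors"),
    2: ("rock", "paper", "scissors"),
}

# The pure strategy space is the Cartesian product of the players' strategy sets.
profiles = list(product(*S.values()))
assert len(profiles) == 3 ** len(S)  # |S| = 3^n profiles for n players

# A mixed strategy is a probability distribution over one player's pure strategies.
sigma_1 = {"rock": 1 / 3, "paper": 1 / 3, "scissors": 1 / 3}
assert abs(sum(sigma_1.values()) - 1.0) < 1e-9  # probabilities sum to one
```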

Payoff Functions

In a normal-form game, the payoff function for each player i \in N (where N is the set of players) is denoted u_i: S \to \mathbb{R}, which assigns a real-valued utility to every strategy profile s = (s_1, \dots, s_n) \in S (the Cartesian product of all players' strategy sets). This function quantifies the outcome of the game from player i's perspective, capturing preferences over possible results. Payoff functions are typically interpreted through von Neumann-Morgenstern (vNM) utility theory, which provides a cardinal representation of preferences under uncertainty. Unlike ordinal utilities, which only rank outcomes without measuring intensity, vNM utilities are unique up to positive affine transformations and enable the computation of expected utilities for lotteries or mixed strategies. For a mixed strategy profile \sigma = (\sigma_1, \dots, \sigma_n), where each \sigma_j is a probability distribution over player j's strategies, player i's expected payoff is given by E[u_i(\sigma)] = \sum_{s \in S} \left[ \prod_{j \in N} \sigma_j(s_j) \right] u_i(s). This formulation arises from the vNM axioms of completeness, transitivity, continuity, and independence, ensuring that rational agents maximize expected utility. Games are classified as zero-sum if the sum of payoffs across all players is zero for every strategy profile, i.e., \sum_{i \in N} u_i(s) = 0 for all s \in S, implying pure conflict where one player's gain equals another's loss. In contrast, non-zero-sum games allow for variable total payoffs, enabling cooperation or mutual benefit, as the sum \sum_{i \in N} u_i(s) can differ across profiles. Standard normal-form game analysis assumes common knowledge of all payoff functions, meaning every player knows the payoffs, knows that others know them, and so on ad infinitum. Additionally, players are assumed rational, seeking to maximize their expected payoffs given beliefs about others' actions. These assumptions underpin the strategic evaluation of outcomes in normal-form games.
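The expected-payoff formula above translates directly into code. The following Python sketch (illustrative only; the function name expected_payoff and the matching pennies example are assumptions for demonstration) sums over pure strategy profiles, weighting player i's payoff by the product of the players' independent mixing probabilities.

```python
from itertools import product

def expected_payoff(payoff_i, sigma, strategy_sets):
    """Expected payoff E[u_i(sigma)] = sum_s (prod_j sigma_j(s_j)) * u_i(s).

    payoff_i:      dict mapping a pure strategy profile (tuple) to player i's payoff
    sigma:         list of dicts, one per player, mapping pure strategies to probabilities
    strategy_sets: list of iterables of pure strategies, one per player
    """
    total = 0.0
    for profile in product(*strategy_sets):
        prob = 1.0
        for j, s_j in enumerate(profile):
            prob *= sigma[j][s_j]          # independent randomization across players
        total += prob * payoff_i[profile]
    return total

# Matching pennies for the row player: +1 if choices match, -1 otherwise.
S = [("H", "T"), ("H", "T")]
u_row = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}
uniform = [{"H": 0.5, "T": 0.5}, {"H": 0.5, "T": 0.5}]
print(expected_payoff(u_row, uniform, S))  # 0.0 under uniform randomization
```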

Representations

Matrix Representation

The matrix representation, often called the bimatrix form, is a tabular method used to visualize normal-form games involving exactly two players with finite strategy sets. In this format, the rows represent the pure strategies available to player 1, while the columns represent those of player 2. Each entry in the resulting matrix consists of an ordered pair (u_1(s_1, s_2), u_2(s_1, s_2)), where u_1 and u_2 denote the payoffs to player 1 and player 2, respectively, for the joint strategy profile (s_1, s_2). Consider a simple case where player 1 has strategies labeled A and B, and player 2 has strategies X and Y. The bimatrix takes the following structure:
                  X             Y
A            (a_1, b_1)    (a_2, b_2)
B            (a_3, b_3)    (a_4, b_4)
Here, each (a_i, b_i) pair quantifies the payoffs for the corresponding strategy intersection, facilitating direct comparison across outcomes. This representation offers key advantages for analysis, particularly its intuitiveness in displaying interactions for finite pure strategies, which enables quick identification of best responses and candidate equilibria without complex computations. It provides a compact summary of the game's structure, making it well-suited for simultaneous-move scenarios in two-player settings. Nevertheless, the matrix form has notable limitations, especially regarding scalability: for games with more than two players, extending to multidimensional payoff arrays becomes cumbersome and visually unwieldy, while continuous strategy spaces render the infinite entries impossible to tabulate explicitly.
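As a brief illustration of how the bimatrix layout supports best-response reasoning, the Python sketch below (hypothetical payoff numbers, assuming NumPy is available) stores the two players' payoffs in separate matrices and reads off each player's best response within every column or row.

```python
import numpy as np

# Hypothetical payoffs for the 2x2 bimatrix above: rows are player 1's strategies
# (A, B), columns are player 2's strategies (X, Y); entry (r, c) pairs U1 with U2.
U1 = np.array([[3.0, 2.0],   # player 1's payoffs (a_1, a_2; a_3, a_4)
               [1.0, 0.0]])
U2 = np.array([[0.0, 1.0],   # player 2's payoffs (b_1, b_2; b_3, b_4)
               [2.0, 3.0]])

# Player 1 best-responds within each column; player 2 best-responds within each row.
best_row_given_col = {c: int(np.argmax(U1[:, c])) for c in range(U1.shape[1])}
best_col_given_row = {r: int(np.argmax(U2[r, :])) for r in range(U2.shape[0])}
print(best_row_given_col)  # {0: 0, 1: 0} -> A is the best reply to both X and Y
print(best_col_given_row)  # {0: 1, 1: 1} -> Y is the best reply to both A and B
```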

Strategic Form Tuple

The strategic form provides a compact mathematical representation of a normal-form game, encapsulating its essential elements in tuple notation. Formally, a normal-form game in strategic form is defined as the tuple \Gamma = (N, (S_i)_{i \in N}, (u_i)_{i \in N}), where N is a finite set of players, S_i denotes the strategy set available to player i \in N, and u_i: S \to \mathbb{R} is the payoff function assigning a real-valued payoff to each strategy profile s = (s_i)_{i \in N} \in S = \prod_{i \in N} S_i for player i. This formulation, introduced in the foundational work on noncooperative games, allows for arbitrary finite or infinite strategy sets and any number of players n = |N|, providing a general framework beyond the two-player case. To incorporate uncertainty and randomization, the strategic form extends to mixed strategies via the tuple \Gamma_m = (N, (\Sigma_i)_{i \in N}, (U_i)_{i \in N}), where \Sigma_i = \Delta(S_i) is the set of mixed strategies for player i, consisting of all probability distributions over the pure strategies in S_i, and U_i: \Sigma \to \mathbb{R} is the expected utility function defined as U_i(\sigma) = \mathbb{E}_{s \sim \sigma} [u_i(s)] for a mixed strategy profile \sigma = (\sigma_i)_{i \in N} \in \Sigma = \prod_{i \in N} \Sigma_i, where the expectation is taken with respect to the product distribution induced by \sigma (reducing to a summation for finite S). This extension preserves the structure of the original game while enabling analysis of equilibria involving randomization, as mixed strategies induce expected payoffs linear in the probabilities. The pure strategy game \Gamma and its mixed extension \Gamma_m are compatible in the sense that pure strategies correspond to degenerate mixed strategies (Dirac distributions on singletons in S_i), ensuring that solution concepts like the Nash equilibrium defined on \Gamma_m restrict appropriately to pure strategy profiles in \Gamma. This correspondence underscores the generality of the tuple notation, which, in contrast to representations limited to finite two-player games, facilitates theoretical extensions to broader classes of interactions.

Examples

Prisoner's Dilemma

The Prisoner's Dilemma is a foundational example in normal-form game theory, originally formulated by researchers Merrill Flood and Melvin Dresher in 1950 to explore non-cooperative decision-making, and later named by mathematician Albert Tucker to evoke a criminal scenario. In this setup, two suspects are arrested for a crime and held in isolation, unable to communicate. Each must independently choose whether to remain silent (cooperate with the other prisoner by not betraying them) or confess (defect by implicating the partner). The outcomes depend on their joint choices, with payoffs measured in years of prison time (lower values are preferable, often converted to utilities where higher numbers indicate better outcomes, such as reduced sentence length). The standard payoff structure assigns the following prison sentences: if both remain silent, each serves 1 year; if one confesses while the other remains silent, the confessor goes free (0 years) and the silent one serves 3 years; if both confess, each serves 2 years. Represented as a normal-form payoff matrix with utilities (negative values for years served, higher utility better), the game for Player 1 (rows) and Player 2 (columns) is:
Player 1 \ Player 2      Silent (Cooperate)    Confess (Defect)
Silent (Cooperate)       -1, -1                -3, 0
Confess (Defect)         0, -3                 -2, -2
This matrix reveals that confessing strictly dominates remaining silent for each player: against an opponent's silence, confessing yields 0 > -1; against confession, it yields -2 > -3. Despite mutual silence yielding the socially optimal outcome (total utility -2, Pareto superior to mutual confession's -4), individual rationality drives both players to confess, resulting in a suboptimal Nash equilibrium. This highlights the game's non-zero-sum nature, where total payoffs vary across outcomes (unlike zero-sum games, where one player's gain equals the other's loss), underscoring conflicts between individual incentives and collective welfare in interdependent decisions. Variations of the Prisoner's Dilemma generalize beyond the one-shot scenario. In the iterated version, where the game repeats indefinitely with the same players, cooperation can emerge through strategies that condition actions on prior outcomes, as demonstrated in Axelrod's 1980 tournament experiments where "tit-for-tat" (cooperate initially, then mirror the opponent's last move) proved robust. More broadly, the dilemma holds for any payoff parameters satisfying temptation > reward > punishment > sucker (e.g., 0 > -1 > -2 > -3 in the utility matrix above) and 2 × reward > temptation + sucker, ensuring defection's dominance while preserving mutual cooperation's collective benefit; tweaks to these inequalities can alter the game's character, for example producing a harmony game if reward > temptation.
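The parameter conditions just described are easy to verify mechanically. A minimal Python check (illustrative, using the utility values from the matrix above; the names R, S_, T, P follow the reward/sucker/temptation/punishment convention) is:

```python
# Utility values from the Prisoner's Dilemma matrix above (minus years served).
R, S_, T, P = -1, -3, 0, -2   # reward, sucker, temptation, punishment

# The dilemma requires temptation > reward > punishment > sucker, plus
# 2 * reward > temptation + sucker so mutual cooperation beats alternating exploitation.
assert T > R > P > S_
assert 2 * R > T + S_

# Defect strictly dominates Cooperate: T > R against a cooperator, P > S_ against a defector.
assert T > R and P > S_
print("payoffs satisfy the Prisoner's Dilemma conditions")
```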

Coordination Game

A coordination game is a type of normal-form game in which players' interests align such that they all benefit from selecting the same strategy, with the primary challenge being to achieve mutual alignment without communication. Unlike games of pure conflict, coordination games feature payoffs that reward matching choices, often resulting in multiple stable outcomes where deviation by any player reduces everyone's welfare. A classic example of a pure coordination game involves two drivers approaching each other on a narrow road, each deciding simultaneously whether to swerve left or right to avoid collision. If both choose the same direction, they pass safely with high payoffs; if they choose differently, they collide with low payoffs. The symmetric payoff matrix for this game, where payoffs represent utilities (higher for success, lower for failure), is as follows:
Driver 1 \ Driver 2    Left    Right
Left                   1, 1    0, 0
Right                  0, 0    1, 1
This structure yields two pure-strategy Nash equilibria, (Left, Left) and (Right, Right), where no player benefits from unilaterally changing their choice, assuming the other does not. Social conventions, such as driving on the right side of the road, often serve as focal points to select one equilibrium and avoid miscoordination. Coordination games can vary in structure; pure coordination games like the driving scenario have symmetric preferences and equivalent equilibria, while others introduce asymmetric interests. The Battle of the Sexes exemplifies the latter, where a husband and wife want to spend an evening together but prefer different activities, say the opera or a prize fight, with payoffs higher for attending together than apart, though each favors their preferred event. Introduced by Luce and Raiffa, the payoff matrix (with utilities reflecting joint attendance value and preference strength) is:
Husband \ Wife    Opera    Fight
Opera             2, 1     0, 0
Fight             0, 0     1, 2
Here, the pure-strategy Nash equilibria are (Opera, Opera) and (Fight, Fight), but players face a coordination risk due to divergent preferences, potentially leading to suboptimal separate outings if they fail to align. The key insight of coordination games is the existence of multiple Nash equilibria, which creates inefficiency risks from miscoordination, as players may converge on a Pareto-inferior outcome or fail to coordinate altogether without external cues like norms or communication.
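A small Python sketch (assuming NumPy; the helper name pure_nash_equilibria is chosen here for illustration) finds all pure-strategy Nash equilibria of a bimatrix game by checking, cell by cell, that neither player has a profitable unilateral deviation, and recovers the two equilibria of the Battle of the Sexes above.

```python
import numpy as np

def pure_nash_equilibria(U1, U2):
    """Return all pure-strategy Nash equilibria (row, col) of a bimatrix game."""
    eqs = []
    for r in range(U1.shape[0]):
        for c in range(U1.shape[1]):
            row_best = U1[r, c] >= U1[:, c].max()   # no profitable row deviation
            col_best = U2[r, c] >= U2[r, :].max()   # no profitable column deviation
            if row_best and col_best:
                eqs.append((r, c))
    return eqs

# Battle of the Sexes (rows: husband, cols: wife; strategies: Opera, Fight).
U1 = np.array([[2, 0], [0, 1]])
U2 = np.array([[1, 0], [0, 2]])
print(pure_nash_equilibria(U1, U2))  # [(0, 0), (1, 1)] -> (Opera, Opera) and (Fight, Fight)
```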

Solution Concepts

Dominated Strategies

A strategy s_i for player i in a normal-form game strictly dominates another strategy t_i if it yields a payoff u_i(s_i, s_{-i}) > u_i(t_i, s_{-i}) for every strategy profile s_{-i} of the opponents. This condition implies that no rational player would ever choose the strictly dominated strategy t_i, as the dominating strategy s_i is always better, regardless of others' actions. Weak dominance relaxes this requirement to u_i(s_i, s_{-i}) \geq u_i(t_i, s_{-i}) for all s_{-i}, with strict inequality for at least one profile; play of a weakly dominated strategy is ruled out under an additional assumption of cautious (admissible) behavior rather than by rationality alone. The iterated elimination of strictly dominated strategies (IESDS) is a procedure that successively removes strictly dominated strategies from the game, updating the strategy sets after each round until no further eliminations are possible. This stepwise reduction simplifies analysis while preserving the set of rationalizable outcomes, as each elimination step reflects the common knowledge that rational players avoid inferior choices. In finite games, IESDS is order-independent, meaning the final reduced game is unique regardless of the sequence of eliminations. Rationalizability refers to the solution concept obtained as the limit of iterated elimination of strategies that are never best responses in finite normal-form games (coinciding with iterated strict dominance elimination in two-player games), capturing all strategy profiles consistent with common knowledge of rationality among players. Introduced independently by Bernheim and Pearce, rationalizable strategies form a refinement that eliminates implausible choices beyond what a single round of dominance might achieve, though it may not yield a unique outcome. To illustrate, consider the following generic 2x2 game with players Row and Column, each having strategies A/B and X/Y, respectively:
          X       Y
A        3, 0    2, 1
B        1, 2    0, 3
For the row player, strategy A strictly dominates B, since 3 > 1 and 2 > 0. Eliminating B reduces the game to:
          X       Y
A        3, 0    2, 1
Now, for the column player, Y strictly dominates X, since 1 > 0 (against A). Eliminating X leaves the unique outcome (A, Y) with payoffs (2, 1). This game is dominance solvable via IESDS, converging to a single rationalizable strategy profile.
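The elimination procedure can be sketched in a few lines of Python (illustrative code assuming NumPy; the function iesds only checks dominance by pure strategies, not by mixed strategies, which suffices for this example):

```python
import numpy as np

def iesds(U1, U2):
    """Iterated elimination of strictly dominated pure strategies in a bimatrix game.

    Returns the surviving row and column indices (original labels).
    """
    rows = list(range(U1.shape[0]))
    cols = list(range(U1.shape[1]))
    changed = True
    while changed:
        changed = False
        # Remove any row strictly dominated by another surviving row.
        for r in rows[:]:
            if any(all(U1[o, c] > U1[r, c] for c in cols) for o in rows if o != r):
                rows.remove(r)
                changed = True
        # Remove any column strictly dominated by another surviving column.
        for c in cols[:]:
            if any(all(U2[r, o] > U2[r, c] for r in rows) for o in cols if o != c):
                cols.remove(c)
                changed = True
    return rows, cols

# The 2x2 example above: A strictly dominates B, then Y strictly dominates X.
U1 = np.array([[3, 2], [1, 0]])
U2 = np.array([[0, 1], [2, 3]])
print(iesds(U1, U2))  # ([0], [1]) -> the surviving profile (A, Y)
```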

Nash Equilibrium

In a normal-form game, a Nash equilibrium is a strategy profile \sigma^* where no player can improve their expected payoff by unilaterally changing their strategy, assuming the other players' strategies remain fixed. Formally, \sigma^* is a Nash equilibrium if, for every player i, \sigma_i^* is a best response to \sigma_{-i}^*, i.e., u_i(\sigma_i^*, \sigma_{-i}^*) \geq u_i(s_i, \sigma_{-i}^*) for all pure strategies s_i \in S_i, and every pure strategy with positive probability under \sigma_i^* achieves this maximum payoff. John Nash proved the existence of at least one (possibly mixed) Nash equilibrium for every finite normal-form game with a finite number of players and actions, relying on a fixed-point theorem applied to a continuous best-response map over the simplex of mixed strategies. This result holds even for games without pure-strategy equilibria, such as matching pennies, where players must randomize to prevent exploitation. A pure-strategy Nash equilibrium occurs when each player assigns probability 1 to a single pure strategy in the profile, satisfying the best-response condition deterministically; in contrast, a mixed-strategy Nash equilibrium involves players randomizing over multiple pure strategies with positive probability. While pure equilibria are simpler to interpret, mixed ones are necessary for equilibrium in games with inherent conflict, and Nash's theorem guarantees their existence in finite games. To compute Nash equilibria, methods like best-response dynamics iteratively update each player's strategy to their current best response against the others' strategies, potentially converging to a pure Nash equilibrium in potential games or under certain other conditions, though cycles can occur in general. This approach highlights the equilibrium's role as a fixed point of such adjustment processes. Nash equilibria possess stability against unilateral deviations by construction, making them robust to individual perturbations in non-cooperative settings. However, they are not necessarily Pareto efficient, as outcomes may exist where all players are better off; for instance, in the Prisoner's Dilemma, the unique Nash equilibrium of mutual defection is Pareto dominated by mutual cooperation.
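Best-response dynamics can be illustrated with a short Python sketch (assuming NumPy; the function name best_response_dynamics and the alternating update scheme are simplifications chosen for demonstration), which converges in a coordination game but cycles in matching pennies:

```python
import numpy as np

def best_response_dynamics(U1, U2, start=(0, 0), max_steps=20):
    """Alternating best-response updates on pure strategies of a bimatrix game.

    Converges to a pure Nash equilibrium in potential games; may cycle otherwise.
    """
    r, c = start
    for _ in range(max_steps):
        r_new = int(np.argmax(U1[:, c]))       # player 1 best-responds to column c
        c_new = int(np.argmax(U2[r_new, :]))   # player 2 best-responds to the new row
        if (r_new, c_new) == (r, c):
            return (r, c)                      # fixed point: a pure Nash equilibrium
        r, c = r_new, c_new
    return None                                # no convergence within max_steps

# A pure coordination game converges; matching pennies cycles and returns None.
coord = np.array([[1, 0], [0, 1]])
print(best_response_dynamics(coord, coord))            # (0, 0)
pennies1 = np.array([[1, -1], [-1, 1]])
print(best_response_dynamics(pennies1, -pennies1))     # None
```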

Relation to Extensive-Form Games

Conversion Between Forms

The normal form of an extensive-form game is obtained by defining each player's strategy as a complete contingent plan that specifies an action for every information set controlled by that player, with payoffs calculated as the expected values derived from the outcomes of all possible strategy profiles leading to terminal histories. This reduction transforms the sequential structure into a simultaneous-move representation, where the strategy set for each player consists of all such contingent plans. In extensive-form games with imperfect information, information sets partition the decision nodes into groups where the player cannot distinguish between histories; strategies must therefore assign the same action to all nodes within each information set, which expands the normal-form strategy space relative to the extensive form's sequential choices and can lead to exponentially larger matrices. For instance, in games where a player observes partial signals, the contingent plans must cover all possible realizations consistent with the information set, ensuring consistency across indistinguishable paths. While the normal form fully captures the set of possible outcomes and expected payoffs from any strategy profile in the extensive form, it discards the temporal ordering of moves and subgame structure, potentially obscuring refinements like subgame perfection that rely on sequential rationality. Nash equilibria in this induced normal form identify strategy profiles that are stable under simultaneous play but may include non-credible threats when interpreted sequentially. A classic illustration is the entry deterrence game, where the potential entrant moves first to Enter or Stay Out; if Enter, the incumbent then chooses Accommodate (yielding duopoly payoffs of 15 for each) or Fight (yielding -1 for each), while Stay Out gives the entrant 0 and the incumbent the monopoly profit of 35. Converting to normal form yields the following payoff matrix, with the entrant's strategies as rows and the incumbent's contingent plans as columns:
Entrant \ Incumbent    Accommodate    Fight
Enter                  15, 15         -1, -1
Stay Out               0, 35          0, 35
This matrix admits Nash equilibria at (Enter, Accommodate) and (Stay Out, Fight), but the latter fails subgame perfection since, after entry, Accommodate strictly dominates Fight for the incumbent, highlighting how the normal form includes implausible deterrence strategies that exploit the loss of sequential structure.
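The two equilibria can be recovered computationally from the induced normal form. The Python sketch below (assuming NumPy; the helper name pure_nash is ad hoc) checks each cell of the matrix above for profitable unilateral deviations:

```python
import numpy as np

# Induced normal form of the entry deterrence game (rows: entrant, cols: incumbent).
# Entrant strategies: Enter, Stay Out; incumbent plans: Accommodate, Fight (if entry occurs).
U_entrant   = np.array([[15, -1], [0, 0]])
U_incumbent = np.array([[15, -1], [35, 35]])

def pure_nash(U1, U2):
    return [(r, c)
            for r in range(U1.shape[0]) for c in range(U1.shape[1])
            if U1[r, c] >= U1[:, c].max() and U2[r, c] >= U2[r, :].max()]

print(pure_nash(U_entrant, U_incumbent))
# [(0, 0), (1, 1)]: (Enter, Accommodate) and (Stay Out, Fight);
# only the first survives subgame perfection, since Fight is not credible after entry.
```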

Sequential Interpretation

Although normal-form games assume simultaneous moves by all players, they can capture sequential reasoning through strategy profiles in which players anticipate and condition on others' responses. In such representations, a player's strategy incorporates expectations of subsequent actions, effectively embedding a sequential structure into the simultaneous framework. For example, the Stackelberg equilibrium models a leader-follower dynamic, where the leader commits to a strategy that maximizes their payoff given the follower's rational best response to it, approximating sequential play as the limit of normal-form interactions with imperfect commitment. This approach allows normal-form games to capture forward-looking behavior without explicit timing, relying on the interdependence of payoff functions across strategy profiles. However, this interpretation has significant limitations, as the normal form abstracts away from the observability of moves and the possibility of binding commitments, often resulting in multiple equilibria that fail to distinguish credible sequential paths. In sequential contexts, players may not observe prior actions, leading the normal form to overlook information sets and produce equilibria supported by non-credible threats or promises that would unravel under sequential scrutiny. Consequently, normal-form equilibria can include outcomes incompatible with sequential rationality, where off-equilibrium behaviors do not align with optimal responses in hypothetical subgames. A key critique arises from subgame perfection, which refines Nash equilibria to ensure consistency across all subgames in an underlying extensive-form representation; normal-form Nash equilibria often fail this test, as they may prescribe irrational play in unreached subgames. To address this, refinements like trembling-hand perfection introduce small perturbations to strategies, modeling minor errors in execution and requiring equilibria to remain robust as these "trembles" approach zero, thereby enforcing sequential rationality even in the normal form. These refinements eliminate equilibria reliant on incredible contingencies, aligning the normal form more closely with dynamic interpretations. The ultimatum game exemplifies this sequential interpretation and its nuances. In its extensive form, the proposer offers a division of a fixed sum, and the responder accepts or rejects; backward induction yields the subgame-perfect equilibrium in which the proposer offers the minimal acceptable amount and the responder accepts any positive offer. When represented in normal form, the responder's strategies become complete plans (accept or reject for each possible offer), resulting in a larger strategy space, but the unique subgame-perfect outcome persists, demonstrating equivalence between the forms under rational anticipation, though the normal form obscures the sequential unraveling of unfair offers. This equivalence highlights how normal-form representations can encode sequential rationality, yet it also underscores the need for refinements to exclude non-credible equilibria in more complex sequential settings.

Applications

Economics and Social Sciences

In economics, normal-form games model strategic interactions among rational agents making simultaneous decisions, such as in oligopoly competition where firms choose outputs or prices without observing rivals' choices. The Cournot model, introduced by Antoine Augustin Cournot in 1838 and later formalized in game-theoretic terms, represents quantity competition among firms producing homogeneous goods. Each firm selects an output quantity q_i to maximize profit \pi_i(q) = p(Q) q_i - c_i(q_i), where Q = \sum_j q_j is total output, p(Q) is inverse demand, and c_i is cost; the Cournot-Nash equilibrium occurs where each firm's best-response function \psi_i(q_{-i}), derived from the first-order condition p(Q) + p'(Q)\, q_i - c_i'(q_i) = 0 evaluated at q_i = \psi_i(q_{-i}), intersects with the others'. This equilibrium yields prices above the perfectly competitive level but below the monopoly level, illustrating strategic interdependence. The Bertrand model extends this to price competition, where firms simultaneously set prices p_i for homogeneous goods with constant marginal cost c. Payoffs are zero if p_i > \min_j p_j, shared demand if p_i equals the minimum, or full demand if p_i is strictly lower; the unique Nash equilibrium has all firms pricing at c, resulting in zero profits despite positive demand at higher prices, a result known as the Bertrand paradox. Best responses involve undercutting rivals slightly whenever price exceeds c, leading to this competitive outcome under assumptions of no capacity constraints and full information. Auction theory applies normal-form games to simultaneous sealed-bid settings, particularly independent private value (IPV) auctions where bidders' valuations v_i are privately drawn from a common distribution (e.g., uniform on [0,1]) and independent. In a first-price IPV auction with n bidders and uniform values, the symmetric Bayesian Nash equilibrium bidding strategy is b(v_i) = \frac{n-1}{n} v_i, where the highest bidder pays their bid; this shades bids below true values to balance winning probability against surplus extraction. Seminal work by Vickrey (1961) showed truth-telling to be a dominant strategy in second-price IPV auctions, where the winner pays the second-highest bid, ensuring efficiency; revenue equivalence holds across standard formats under IPV assumptions. Social dilemmas in economics use normal-form games to analyze collective action failures, such as public goods provision where players simultaneously decide contributions to a common pool. In a linear public goods game, each of n players chooses to contribute g_i \in [0, e] from an endowment e, yielding payoff \pi_i = e - g_i + \frac{\alpha}{n} \sum_j g_j with multiplier 1 < \alpha < n; the dominant strategy is g_i = 0, leading to underprovision despite Pareto-superior full contribution. Voting paradoxes, modeled as normal-form coordination games over preference profiles, reveal cycles in which no Condorcet winner exists (e.g., majority preferences rank A > B, B > C, C > A), preventing stable equilibria in simultaneous voting and highlighting aggregation inconsistencies. Bargaining situations employ normal-form representations of non-cooperative models, such as the Nash demand game, where players simultaneously announce demands for shares of a pie \pi. Each player i demands d_i \in [0, \pi]; if \sum_i d_i \leq \pi, each receives d_i, otherwise all get 0. Nash equilibria consist of any demands where \sum_i d_i = \pi, supporting the axiomatic Nash bargaining solution under additional refinements. Laboratory experiments provide empirical tests of these normal-form predictions; the results broadly support the models' predictive power in controlled settings, though behavioral deviations persist.
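As a worked illustration of the Cournot logic (not drawn from the cited sources), the Python sketch below iterates best responses for a symmetric duopoly with linear inverse demand p(Q) = a - bQ and constant marginal cost c; the parameter values and the helper name best_response are hypothetical choices for demonstration.

```python
# Cournot duopoly with linear inverse demand p(Q) = a - b*Q and constant marginal cost c
# (illustrative parameter values, not from the sources).
a, b, c = 100.0, 1.0, 10.0

def best_response(q_other):
    """Quantity solving the first-order condition a - b*q_other - 2*b*q_i - c = 0."""
    return max(0.0, (a - c - b * q_other) / (2 * b))

# Iterate best responses until the quantities settle (a fixed point is a Cournot-Nash equilibrium).
q1 = q2 = 0.0
for _ in range(200):
    q1, q2 = best_response(q2), best_response(q1)

print(round(q1, 3), round(q2, 3))               # both approach (a - c) / (3b) = 30
print("equilibrium price:", a - b * (q1 + q2))  # 40: above marginal cost, below the monopoly price
```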

Biology and Evolutionary Game Theory

In evolutionary game theory, normal-form games model interactions among organisms where payoffs represent relative fitness rather than economic utility, allowing analysis of how heritable strategies spread through natural selection in populations. A central concept is the evolutionarily stable strategy (ESS), developed by John Maynard Smith and George Price in the 1970s as a refinement of the Nash equilibrium for biological systems. An ESS is a strategy that, when prevalent in a population, resists invasion by rare alternative (mutant) strategies. Formally, for a resident strategy E and invader I, E is an ESS if, for sufficiently small \epsilon > 0, u(E, (1-\epsilon)E + \epsilon I) > u(I, (1-\epsilon)E + \epsilon I), where u denotes the expected payoff, interpreted as fitness, and the population state is a mixture of mostly E with a small fraction \epsilon of I. Equivalently, for every alternative I \neq E, either u(E, E) > u(I, E), or u(E, E) = u(I, E) and u(E, I) > u(I, I). This ensures that mutants with lower fitness in the resident environment decline in frequency. ESS analysis applies to symmetric normal-form games, predicting stable behavioral outcomes like aggression levels in contests. The Hawk-Dove game exemplifies ESS in normal-form representations of animal conflicts over limited resources, such as territory or mates. In this two-strategy game, "Hawk" involves escalated fighting (high risk of injury but a chance at the full reward), while "Dove" entails non-aggressive displays (low risk but a shared or forfeited reward). With resource value V and injury cost C > V, the payoff matrix gives two Hawks an expected fitness of (V - C)/2 < 0 because injury costs exceed benefits, a Hawk meeting a Dove the full value V, two Doves the shared value V/2, and a Dove meeting a Hawk zero. No pure strategy is an ESS; instead, a mixed ESS emerges where the population frequency of Hawks is p = V/C, balancing aggression to maximize average fitness. This predicts observed ritualized behaviors in nature, where fully escalated fights are rare. Normal-form games have illuminated biological applications, including sex ratio evolution, where parental strategies for offspring sex allocation follow Fisher's principle recast in game-theoretic terms. In this model, parents "play" by investing in sons or daughters, with fitness depending on the population sex ratio; any deviation from 1:1 makes the rarer sex more valuable, favoring counter-strategies that overproduce it until equilibrium is restored at equal investment (adjusted for sex-specific costs). This holds under random mating and equal parental expenditure potential, explaining near-universal 1:1 ratios despite local variations. Mutualism models similarly employ normal-form games to explore mutualistic interactions, such as between plants and pollinators or hosts and microbes, where payoff structures (e.g., harmony games with mutual benefits for cooperation) yield equilibria favoring partner fidelity when exploitation risks are low and benefits high. These frameworks reveal conditions for stable mutualism, like repeated interactions or sanctions against cheaters. Replicator dynamics provide a continuous-time model of how strategy frequencies evolve in normal-form games under ESS criteria, modeling growth proportional to relative fitness. The equations are \dot{x}_i = x_i (f_i(\mathbf{x}) - \bar{f}(\mathbf{x})), where x_i is the frequency of strategy i, f_i(\mathbf{x}) its payoff against the population state \mathbf{x}, and \bar{f}(\mathbf{x}) = \sum_j x_j f_j(\mathbf{x}) the population-average payoff. Strategies with above-average fitness increase in frequency, driving convergence to an ESS when it is stable; interior equilibria may oscillate or diverge otherwise. Originating in analyses of symmetric games, these dynamics highlight long-term evolutionary outcomes without assuming rationality.
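The replicator equation can be simulated directly. The Python sketch below (assuming NumPy; the values of V and C and the Euler step size are illustrative choices) evolves Hawk-Dove strategy frequencies toward the mixed ESS at Hawk frequency V/C:

```python
import numpy as np

# Hawk-Dove payoff matrix with resource value V and injury cost C > V
# (rows/cols: Hawk, Dove). The mixed ESS has Hawk frequency V / C.
V, C = 2.0, 4.0
A = np.array([[(V - C) / 2, V],
              [0.0,         V / 2]])

def replicator_step(x, dt=0.01):
    """One Euler step of the replicator dynamics x_i' = x_i (f_i(x) - f_bar(x))."""
    f = A @ x                 # fitness of each strategy against population state x
    f_bar = x @ f             # population-average fitness
    return x + dt * x * (f - f_bar)

x = np.array([0.9, 0.1])      # start with 90% Hawks
for _ in range(20000):
    x = replicator_step(x)

print(x)  # converges toward [V/C, 1 - V/C] = [0.5, 0.5], the mixed ESS
```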
Unlike economic applications, biological normal-form games treat payoffs as fitness contributions to reproductive success, accruing to genes rather than individuals, and omit individual rationality: organisms do not choose optimally but inherit and vary strategies via mutation and selection. This population-level focus emphasizes invasion resistance over one-shot rational decisions, enabling predictions of polymorphic equilibria absent in rational-choice models.

References

1. Chapter 3: Representation of Games. MIT OpenCourseWare.
2. Levet, Michael. Game Theory: Normal Form Games. June 23, 2016.
3. Game Theory. Stanford Encyclopedia of Philosophy. January 25, 1997.
4. Normal-form Games. Brown University, Department of Computer Science. January 22, 2020.
5. Émile Borel and the Foundations of Game Theory.
6. von Neumann, John. On the Theory of Games of Strategy.
7. Theory of Games of Strategy. RAND Corporation (citing J. von Neumann, "Zur Theorie der Gesellschaftsspiele," Mathematische Annalen 100: 295-320, 1928).
8. Nash, John. Non-Cooperative Games.
9. Strategic Form Games. MIT OpenCourseWare.
10. Maskin, Eric. Game Theory Notes.
11. Examples of Normal Form Games.
12. Normal-Form Games. Brown University, Department of Computer Science. September 3, 2025.
13. Common Knowledge and Payoffs: Introduction to Game Theory III.
14. Ec2010a: Game Theory Section Notes. Harvard University. December 1, 2021.
15. Game Theory (notes on the bimatrix representation of finite two-person games in strategic form).
16. Bimatrix Games.
17. Normal Form Game: An Overview. ScienceDirect Topics.
18. Epistemic Foundations of Game Theory. March 13, 2015.
19. Prisoner's Dilemma. Stanford Encyclopedia of Philosophy. September 4, 1997.
20. Axelrod, Robert. Effective Choice in the Prisoner's Dilemma. 1980.
21. Chapter 6: Games. Cornell University, Department of Computer Science.
22. Bernheim, B. Douglas. Rationalizable Strategic Behavior.
23. Pearce, David G. Rationalizable Strategic Behavior and the Problem of Perfection.
24. Nash, John. Non-Cooperative Games. JSTOR.
25. Algorithmic Game Theory, Lecture 16: Best-Response Dynamics. November 13, 2013.
26. Preplay Contracting in the Prisoners' Dilemma.
27. Osborne, Martin J., and Ariel Rubinstein. A Course in Game Theory. ISBN 0-262-15041-7.
28. Game Theory Basics II: Extensive Form Games. September 30, 2019.
29. Stackelberg Equilibrium and Security Games. Columbia University. February 29, 2020.
30. Selten, Reinhard. Prize Lecture.
31. Niederle, Muriel. Game Theory Refresher. Stanford University. February 3, 2009.
32. Cournot Competition. Vanderbilt University.
33. Introduction to Game Theory & The Bertrand Trap.
34. Auction Theory.
35. Democratic decisions establish stable authorities that overcome the ... December 23, 2013.
36. Paradoxes of Voting. GMU. March 4, 2015.
37. Maynard Smith, J., and G. R. Price. The Logic of Animal Conflict. Nature 246 (November 2, 1973).
38. Extraordinary Sex Ratios. Science.
39. The Evolution of Interspecific Mutualisms. PNAS.
40. Evolutionary Stable Strategies and Game Dynamics. ScienceDirect.