
Extensive-form game

An extensive-form game is a fundamental representation in game theory that models strategic interactions as a sequential process, specifying the players, the order and timing of their moves, the choices available at each decision point, the information each player possesses when acting, and the payoffs resulting from all possible outcomes, typically illustrated via a directed game tree with nodes and branches. This representation was pioneered by John von Neumann and Oskar Morgenstern in their 1944 book Theory of Games and Economic Behavior, where they formalized games with sequential moves to capture dynamic decision-making beyond simultaneous choices. Unlike the normal-form game, which abstracts interactions into a payoff matrix assuming simultaneous strategy selection and hides sequential structure, the extensive form explicitly reveals timing and information flow, allowing for the analysis of concepts like credible commitment and foresight. Any extensive-form game can be converted to an equivalent normal form by enumerating all possible strategies, but the reverse conversion often loses critical details about move order. Key elements of an extensive-form game include a finite set of players (possibly including "Nature" for chance moves), a game tree with non-terminal decision nodes assigned to players and terminal payoff nodes, information sets partitioning nodes to reflect what players know (singleton sets for perfect information, multiple nodes for imperfect information), and payoff functions at terminal nodes convertible to utilities. Games may feature perfect information, as in chess where every prior move is known, or imperfect information, as in poker with hidden cards. Strategies in extensive-form games are complete plans specifying actions for every information set, often analyzed via behavioral strategies for simplicity in imperfect-information cases.
Central to solving extensive-form games is the subgame perfect equilibrium (SPE), a refinement of Nash equilibrium introduced by Reinhard Selten in 1965, which requires strategies to form a Nash equilibrium in every subgame—a portion of the tree rooted at a singleton decision node and containing all of its successors—thus eliminating non-credible threats or promises. SPEs are computed via backward induction in finite games of perfect information, starting from terminal nodes and working upward, and existence is guaranteed in finite settings. These concepts enable applications in economics, political science, and computer science, modeling phenomena like bargaining, auctions, and AI decision-making under uncertainty.

Fundamentals

Definition and Representation

An extensive-form game models strategic interactions where players make decisions sequentially over time, capturing the order of moves, available actions, and potential outcomes in a structured manner. This is particularly useful for analyzing games with a temporal dimension, such as negotiations or competitive entries into markets, by explicitly detailing who acts when and under what circumstances. The game is graphically depicted as a game tree, a directed graph where nodes and branches encode the game's progression. Non-terminal nodes represent either decision points for players or chance events, while branches emanating from these nodes denote possible actions or probabilistic outcomes. Terminal nodes, or leaves, specify the payoff profiles—vectors of numerical payoffs assigned to each player based on the path taken to reach that endpoint, reflecting their preferences or utilities. Players are the rational agents deciding at their respective nodes, with higher payoff values assumed preferred. Key components include information sets, which group nodes that a player cannot distinguish due to limited observability, allowing the model to capture imperfect information without revealing the full history. Chance nodes, often marked by open circles or similar symbols, introduce uncertainty via Nature's moves, assigning probabilities to outgoing branches to model exogenous events like random draws. These elements collectively provide a comprehensive framework for sequential decision-making, distinguishing extensive-form games from simultaneous-move representations like normal-form games. The concept originated with John von Neumann's 1928 work on the theory of games of strategy, where he first outlined a tree-like structure to analyze sequential play in zero-sum contexts like poker. This foundational approach was extended and formalized by Harold Kuhn in 1953, incorporating information sets to handle imperfect information and broader applicability beyond perfect-information scenarios.
A representative example is the entry deterrence game, a two-player game of perfect information. The tree begins at the root with the potential entrant (Player 1) choosing to Enter or Stay Out. If Stay Out, the game ends with payoffs (0 for the entrant, 2 for the incumbent). If Enter, control passes to the incumbent (Player 2), who decides to Accommodate (payoffs: 1 for entrant, 1 for incumbent) or Fight (payoffs: -1 for entrant, 0 for incumbent). This structure highlights sequential incentives, with branches labeled by actions and nodes by players, illustrating how early choices constrain later options.
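As a concrete sketch, the entry deterrence tree above can be encoded as nested dictionaries; this representation is illustrative, not a standard library format. Decision nodes hold the acting player and an action-to-successor map, and leaves hold payoff tuples.

```python
# Entry deterrence game as a nested dict: decision nodes carry the acting
# player and a map from action to successor; leaves carry payoff tuples
# (entrant, incumbent), mirroring the tree described in the text.
entry_game = {
    "player": 1,  # the potential entrant moves at the root
    "actions": {
        "Stay Out": (0, 2),                    # terminal: entrant 0, incumbent 2
        "Enter": {
            "player": 2,  # the incumbent moves after entry
            "actions": {
                "Accommodate": (1, 1),
                "Fight": (-1, 0),
            },
        },
    },
}

def terminal_paths(node, path=()):
    """Enumerate every (action sequence, payoff profile) pair in the tree."""
    if isinstance(node, tuple):                # a leaf is a payoff profile
        return [(path, node)]
    leaves = []
    for action, succ in node["actions"].items():
        leaves.extend(terminal_paths(succ, path + (action,)))
    return leaves

for history, payoffs in terminal_paths(entry_game):
    print(" -> ".join(history), payoffs)
```

Walking the structure recovers the three terminal histories and their payoff profiles, exactly the information a drawn game tree conveys.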

Relation to Normal-Form Games

Normal-form games represent situations of simultaneous strategic choice, where each player's strategy set is listed, and payoffs are specified for every combination of strategies as a matrix or table of utilities. In this representation, players choose actions without observing others' choices, and the focus is on the strategic interdependence captured by the payoff structure. To convert an extensive-form game to its normal-form equivalent, one identifies each player's pure strategies, which are complete contingent plans specifying an action for every possible history (or information set) reachable by that player. The strategy space expands accordingly: for a player with multiple decision points, the number of pure strategies is the product of the number of actions at each of their information sets. The resulting normal-form game has rows (or columns) corresponding to these pure strategies for each player, with payoffs computed as the expected utilities from the terminal nodes reached under each strategy profile. This process embeds the sequential structure into a simultaneous-choice framework, preserving all possible outcomes and equilibria of the original game. A simple example is the sequential Prisoner's Dilemma, where Player 1 chooses first to Cooperate (C) or Defect (D), observed by Player 2, who then chooses C or D. Standard payoffs are: (C, C) yields (3, 3); (D, D) yields (1, 1); (D, C) yields (5, 0); (C, D) yields (0, 5). Player 1 has two strategies: C or D. Player 2, facing two information sets (after P1's C or D), has four strategies: (C after C, C after D) denoted CC; (C after C, D after D) as CD; (D after C, C after D) as DC; (D after C, D after D) as DD. The normal-form payoff matrix is:
Player 1 \ Player 2 | CC    | CD    | DC    | DD
C                   | (3,3) | (3,3) | (0,5) | (0,5)
D                   | (5,0) | (1,1) | (5,0) | (1,1)
This matrix shows how Player 2's contingent responses determine outcomes, with the unique pure-strategy Nash equilibrium at (D, DD), reflecting defection regardless of Player 1's choice. The extensive form offers key advantages over the normal form by explicitly modeling the timing of moves, which reveals opportunities for commitment and the credibility of threats or promises that may not be apparent in the static matrix. For instance, sequential moves allow a first mover to influence later decisions in ways that simultaneous choices cannot. However, this representational richness comes at a cost: the strategy space in the normal form grows exponentially with the game's length and branching factor, rendering the matrix unwieldy for even moderately complex extensive-form games. Every finite extensive-form game possesses a strategically equivalent normal-form representation via this reduction, ensuring that solution concepts like Nash equilibrium apply identically across forms. The converse does not hold without additional structure: a normal-form game does not determine a unique extensive form unless a specific sequence of moves and information pattern is imposed. This equivalence was foundational to the development of game theory, as established in von Neumann and Morgenstern's original framework linking the two representations.
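The strategy enumeration behind this conversion can be reproduced in a few lines. The sketch below builds Player 2's four contingent plans and the induced normal-form payoffs for the sequential Prisoner's Dilemma; all variable names are illustrative.

```python
from itertools import product

# Sequential Prisoner's Dilemma payoffs (P1, P2), indexed by (P1 move, P2 move).
payoffs = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

# Player 2's pure strategies are contingent plans: one action after P1's C
# and one after P1's D, giving 2 x 2 = 4 strategies named CC, CD, DC, DD.
p2_strategies = {a + b: {"C": a, "D": b} for a, b in product("CD", repeat=2)}

# Induced normal form: the payoff of (P1 move, P2 plan) is the payoff of
# the terminal node actually reached under that strategy profile.
normal_form = {(m1, name): payoffs[(m1, plan[m1])]
               for m1 in "CD" for name, plan in p2_strategies.items()}

print(normal_form[("C", "CD")])   # -> (3, 3)
print(normal_form[("D", "DD")])   # -> (1, 1)
```

The resulting dictionary reproduces the 2x4 matrix above cell by cell, making the exponential growth of the contingent-plan space easy to see: a player with k binary information sets has 2^k plans.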

Finite Extensive-Form Games

Games with Perfect Information

In extensive-form games with perfect information, every information set consists of a single node, meaning that at each decision point, the player knows the exact history of all previous moves by all players. This structure ensures that players have complete knowledge of the game's progression up to their turn, allowing for sequential reasoning without ambiguity about the current state. Such games feature strictly alternating moves with no simultaneous actions or concealed choices, as all decisions are observable and occur one at a time. Classic examples include chess and checkers, where players alternate turns on a fully visible board, and each move updates the shared state without hidden elements or chance events beyond explicit random nodes if present. In these games, play follows deterministic paths determined by player choices, with uncertainty limited solely to any chance nodes (such as dice rolls in games like backgammon), and the finite length of the game tree guarantees a unique subgame rooted at every node. A prominent example is the centipede game, introduced by Rosenthal in 1981, which illustrates the tension between short-term defection and long-term cooperation in a perfect-information setting. In this two-player game, participants alternate decisions to either "stop" (defect and claim the current pot) or "continue" (pass and grow the pot), over a finite number of legs resembling a centipede's body. For a simple four-leg version, the game tree begins with Player 1 choosing to stop (payoffs: 1 for Player 1, 0 for Player 2) or continue, leading to Player 2's choice to stop (0 for Player 1, 2 for Player 2) or continue; if continued, Player 1 chooses stop (3 for Player 1, 1 for Player 2) or continue, after which the game ends automatically (2 for Player 1, 4 for Player 2).
This setup creates a temptation to defect early, as stopping at any point yields more for the current player than continuing and facing the opponent's likely defection, yet mutual continuation to the end maximizes joint payoffs, highlighting paradoxical incentives despite full observability. The analysis of perfect-information games traces back to Ernst Zermelo's 1913 paper on chess, where he proved that in finite two-player games of perfect information without draws, one player has a winning strategy under optimal play, establishing the foundational principle of determinacy for such structures. Zermelo's work demonstrated that the exhaustive enumeration of moves in chess's perfect-information structure allows determination of forced outcomes, influencing subsequent developments in game theory.
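A small recursive solver makes the unraveling concrete. The sketch below encodes a four-leg centipede with payoffs (1,0), (0,2), (3,1), (2,4), so every comparison is strict, and folds payoffs back from the leaves; node names are invented for bookkeeping.

```python
def solve(node):
    """Backward induction: return (payoff profile, action plan) for a node."""
    if "payoffs" in node:                      # leaf: nothing left to decide
        return node["payoffs"], {}
    mover = node["player"] - 1                 # index into the payoff tuple
    best_action, best_value, plan = None, None, {}
    for action, succ in node["actions"].items():
        value, subplan = solve(succ)
        plan.update(subplan)
        if best_value is None or value[mover] > best_value[mover]:
            best_action, best_value = action, value
    plan[node["name"]] = best_action
    return best_value, plan

# Four-leg centipede: each mover chooses Stop (terminal) or Continue;
# the final leg ends automatically at (2, 4).
centipede = {"name": "leg1", "player": 1, "actions": {
    "Stop": {"payoffs": (1, 0)},
    "Continue": {"name": "leg2", "player": 2, "actions": {
        "Stop": {"payoffs": (0, 2)},
        "Continue": {"name": "leg3", "player": 1, "actions": {
            "Stop": {"payoffs": (3, 1)},
            "Continue": {"payoffs": (2, 4)},
        }},
    }},
}}

value, plan = solve(centipede)
print(value, plan)   # value (1, 0): every mover stops at their first chance
```

The solver stops at every node, so rational play ends immediately at (1, 0) even though continuing to the end would give (2, 4), reproducing the paradox described above.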

Games with Imperfect or Incomplete Information

In extensive-form games with imperfect information, players do not observe the exact history of actions leading to their decision nodes, resulting in non-singleton information sets that group indistinguishable histories. An information set is a collection of nodes that the player cannot differentiate based on available observations, requiring the player to select the same action across all nodes in the set. This contrasts with perfect-information games, where each information set contains only a single node, allowing full observability of prior moves. A strategy in such games specifies an action for every information set belonging to the player, rather than for individual nodes, as the player must commit to consistent behavior under uncertainty about the current node. For example, the Battle of the Sexes game can be represented sequentially with imperfect information: Player 1 chooses to go to the opera or the football match, but Player 2 does not observe this choice and faces an information set linking the two possible nodes after Player 1's move. Player 2 must then select an action applicable to both possibilities, such as coordinating on the opera or the football match without knowing Player 1's intent. This setup models simultaneous-move games like rock-paper-scissors in a sequential form, where the second mover's information set bundles all potential first-mover actions, capturing the hidden nature of the opponent's choice. Imperfect information primarily affects players' beliefs about past actions, leading to uncertainty over the game's path. In contrast, incomplete information involves uncertainty about opponents' types or payoff structures, often modeled by introducing Nature as a player who randomly assigns types with known probabilities, transforming the game into a Bayesian framework. Here, players hold prior beliefs over these types and update them based on observed signals, as formalized in Harsanyi's model of games played by Bayesian players. A classic example of incomplete information is the beer-quiche signaling game introduced by In-Koo Cho and David M.
Kreps in 1987, where a strong (surly) or weak (wimp) type of sender (Player 1) chooses to consume beer or quiche in the morning, observed by a receiver (Player 2) who then decides whether to duel or not duel in the afternoon. Nature assigns the strong type with probability 0.9; the strong type derives an incremental payoff of 1 from beer (0 from quiche), the weak type derives 1 from quiche (0 from beer), and both types gain an additional 2 from avoiding a duel. The receiver earns -1 from dueling the strong type, +2 from dueling the weak type, and 0 from not dueling. Thus the strong type ranks the outcomes beer+no duel (3), quiche+no duel (2), beer+duel (1), quiche+duel (0), while the weak type ranks them quiche+no duel (3), beer+no duel (2), quiche+duel (1), beer+duel (0). The receiver, unaware of the type, infers from the breakfast choice whether to duel. This illustrates how incomplete information about types influences signaling strategies and beliefs, distinct from imperfect information's focus on hidden actions.
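The pooling-on-beer profile can be verified directly. The sketch below uses the payoff numbers above; the belief that off-path quiche comes from the weak type is one admissible supporting belief, assumed here for illustration.

```python
# Pooling-on-beer check in the beer-quiche game. Each type gets 1 from its
# preferred breakfast plus 2 if no duel occurs; the receiver nets -1 for
# dueling the strong type and +2 for dueling the weak type.
p_strong = 0.9

sender = {  # (type, breakfast, receiver action) -> sender payoff
    ("strong", "beer", "no_duel"): 3, ("strong", "quiche", "no_duel"): 2,
    ("strong", "beer", "duel"): 1,    ("strong", "quiche", "duel"): 0,
    ("weak", "quiche", "no_duel"): 3, ("weak", "beer", "no_duel"): 2,
    ("weak", "quiche", "duel"): 1,    ("weak", "beer", "duel"): 0,
}

# On the equilibrium path (both types drink beer) Bayes' rule leaves the
# prior intact, so the receiver's expected payoff from dueling is:
duel_value = p_strong * (-1) + (1 - p_strong) * 2
response_to_beer = "duel" if duel_value > 0 else "no_duel"

# Supporting belief: off-path quiche is attributed to the weak type, so
# quiche is met with a duel. Neither type then profits by deviating.
assert sender[("strong", "beer", response_to_beer)] >= sender[("strong", "quiche", "duel")]
assert sender[("weak", "beer", response_to_beer)] >= sender[("weak", "quiche", "duel")]
print(round(duel_value, 2), response_to_beer)   # -0.7 no_duel
```

With a 0.9 prior on the strong type, dueling after beer has negative expected value, so the receiver declines, and neither sender type gains by revealing itself through quiche.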

Formal Mathematical Definition

A finite extensive-form game is formally defined as a tuple (N, H, P, A, \tau, u, I), where N is a finite set of players (possibly including a chance player c to model incomplete information); H is the set of all possible histories, consisting of finite sequences of actions that represent paths from the root of the game tree to either a decision point or the end of the game; P: H \setminus Z \to N is the player function that assigns to each nonterminal history h \in H \setminus Z (where Z \subseteq H denotes the terminal histories) the player who moves at that history; A: H \setminus Z \to 2^{\mathcal{A}} (with \mathcal{A} a finite universal action set) specifies the nonempty finite set of actions A(h) available at each nonterminal history h; \tau: Z \to \mathbb{R}^{|N|} is the terminal payoff vector function that assigns payoffs to each terminal history; u: \mathbb{R}^{|N|} \to \mathbb{R} is the utility normalization for players (often the identity for von Neumann-Morgenstern utilities); and I = \{I_i\}_{i \in N} collects, for each player i, a partition of that player's nonterminal histories into information sets, where each information set I_i(h) groups histories that player i cannot distinguish. The set H is closed under prefixes, meaning that if a sequence h' is a prefix of h \in H, then h' \in H, which structures the game as a tree without cycles, with the empty sequence \emptyset as the root. The player function P ensures sequential moves, while A(h) defines the branching at each decision point. The terminal payoff function \tau captures outcomes at leaves Z, and utilities u reflect player preferences over these payoffs, typically assuming complete and transitive preferences representable by expected utility. Information sets I model the players' observational limitations: at any history h where player i = P(h) moves, the action chosen applies uniformly to all histories in the same information set I_i(h), enforcing consistency in behavior under imperfect information.
In the case of perfect information, each information set is a singleton (|I_i(h)| = 1 for all nonterminal h with P(h) = i), meaning players observe the full history of prior actions. Imperfect information arises when |I_i(h)| > 1, grouping histories that look identical to the moving player. Incomplete information is unified with this framework by including the chance player c \in N, where moves at histories with P(h) = c are resolved probabilistically according to a fixed distribution over A(h), simulating nature's uncertainty without separate formalism. The payoff to player i \in N upon reaching a terminal history h \in Z is given by u_i(\tau(h)), where \tau(h) is the vector of payoffs at the terminal history, and u_i extracts and normalizes the i-th component (often simply u_i(\tau(h)) = \tau_i(h) under direct assignment). This structure extends to expected utilities over chance moves via integration over probabilistic paths. Finiteness of the game—implied by finite N, finite-length histories in H, and finite A(h) for all h—guarantees no cycles in the tree (as sequences are prefix-closed and terminate), ensuring well-defined sequential play and compact strategy spaces: pure strategies are finite products of choices over information sets, yielding finite-dimensional simplices for mixed strategies.
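Prefix closure of H is easy to check mechanically. The sketch below represents histories as action tuples with the empty tuple as the root, using the entry deterrence histories from earlier as an example set.

```python
# Histories as action tuples; the empty tuple () is the root \emptyset.
H = {
    (),
    ("Enter",), ("Stay Out",),
    ("Enter", "Accommodate"), ("Enter", "Fight"),
}

def prefix_closed(histories):
    """True iff every prefix of every history is itself a history."""
    return all(h[:k] in histories
               for h in histories
               for k in range(len(h)))

print(prefix_closed(H))               # True
print(prefix_closed({("a", "b")}))    # False: prefixes () and ("a",) missing
```

The second check fails because the set contains a length-two sequence without its prefixes, violating the tree structure the definition requires.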

Infinite Extensive-Form Games

Games with Infinite Action Spaces

In extensive-form games with infinite action spaces, the underlying game tree has infinitely many nodes due to infinite branching but maintains a finite horizon, while the action set A(h) at one or more decision nodes h is infinite. This can manifest as a countable infinity, such as discrete choices over the natural numbers, or an uncountable infinity, like continuous choices over an interval such as [0, \bar{v}] for bids in an auction. Such structures extend the standard finite-action framework by allowing more realistic modeling of economic scenarios where agents face unbounded or continuous decision options, while preserving the sequential nature of play through histories and information sets. A canonical example is the first-price sealed-bid auction with independent private values. Here, Nature first draws a private value v_i for each bidder i from a known distribution, creating singleton information sets for each bidder's type. Bidders then simultaneously select bids from the continuous interval [0, \bar{v}], unaware of others' values or bids. The highest bidder wins the object and pays their bid, with payoffs u_i = v_i - b_i if they win and 0 otherwise. This representation captures the incomplete information via the initial move by Nature, and the infinite action set captures the availability of arbitrary bid amounts. The primary challenge in these games arises from the infinite strategy spaces, which preclude the finite compactness that guarantees equilibrium existence in standard finite-action cases. Without additional structure, pure-strategy Nash equilibria may fail to exist, necessitating the use of behavioral strategies—probability distributions over actions at each information set—or mixed strategies over continuous spaces. To ensure equilibrium existence, topological assumptions are imposed: action sets must be compact (e.g., closed and bounded intervals), and payoff functions must be continuous in strategies and outcomes.
Under these conditions, a behavioral strategy equilibrium exists, extending results from normal-form games via fixed-point theorems. This reliance on continuity and compactness, as formalized in seminal existence theorems, underscores the distinction from finite-action extensive-form games, where pure strategies suffice and no such topological requirements are needed. For instance, Glicksberg's theorem for continuous normal-form games provides the foundational fixed-point argument, adapted to extensive forms through behavioral strategies that respect information sets and perfect recall. These assumptions enable the application of equilibrium refinements while avoiding the non-existence issues prevalent in discontinuous or non-compact settings.
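A Monte Carlo sketch makes the continuous-bid setting tangible. Against rivals playing the standard symmetric equilibrium bid b(v) = (n-1)v/n for values i.i.d. uniform on [0, 1], a grid search over own bids recovers approximately that same bid as a best response; the parameters below (n = 3, own value 0.8, grid, trial count) are illustrative choices.

```python
import random

# First-price auction best-response check: n bidders, values uniform on
# [0, 1], rivals bid b(v) = (n - 1) / n * v (the standard equilibrium
# for this setting). We estimate the payoff of each candidate own bid.
random.seed(0)
n, my_value, trials = 3, 0.8, 50_000

def expected_payoff(my_bid):
    """Estimate E[(v - b) * 1{b wins}] against equilibrium rivals."""
    total = 0.0
    for _ in range(trials):
        best_rival = max((n - 1) / n * random.random() for _ in range(n - 1))
        if my_bid > best_rival:            # highest bid wins and pays its bid
            total += my_value - my_bid
    return total / trials

grid = [round(0.10 + 0.05 * k, 2) for k in range(14)]   # bids 0.10 .. 0.75
best_bid = max(grid, key=expected_payoff)
print(best_bid)   # near the equilibrium bid (n - 1) / n * 0.8 = 0.533...
```

The estimated best response lands near 0.533, the bid that trades off a higher winning probability against a smaller margin v - b, consistent with the continuity and compactness conditions the existence results rely on.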

Games with Infinite Time Horizons

In extensive-form games with infinite time horizons, histories consist of infinite sequences of actions, enabling the representation of games that may proceed indefinitely without a predetermined endpoint. This structure contrasts with finite-horizon games by allowing potentially unbounded play lengths, where decision nodes recur without termination. Such games are formally defined with an infinite set of histories H, comprising all possible infinite paths through the game tree, and strategies that specify actions at every information set along these paths. Player utilities in these games are typically computed using a discount factor \delta \in (0,1) to account for time preference, yielding u_i(h) = \sum_{t=0}^{\infty} \delta^t r_{i,t}(h) for player i, where r_{i,t}(h) denotes the reward at time t along history h. This ensures convergence of the infinite sum, reflecting the diminished value of future payoffs. A prominent class of such games is stochastic games, where the game state evolves according to transition probabilities after each joint action, potentially returning to prior states indefinitely. Introduced by Shapley in 1953, these games model dynamic interactions such as pursuit-evasion scenarios, with existence of equilibria established for zero-sum cases via value iteration. An illustrative example is the infinitely repeated Prisoner's Dilemma in extensive form, where players alternate moves in a tree that unfolds without bound, and cooperation can be sustained as an equilibrium outcome through strategies like tit-for-tat, unlike in finite repetitions. Equilibrium analysis in these games often relies on contraction mapping principles for the discounted case, where the Bellman operator on value functions satisfies a contraction condition with modulus \delta < 1, guaranteeing a unique fixed point and thus existence.
For non-zero-sum settings, folk theorems characterize the set of sustainable payoffs, stating that any feasible and individually rational payoff profile can be achieved as a subgame-perfect equilibrium if the discount factor is sufficiently close to 1, provided monitoring is perfect or players are sufficiently patient. These results require conditions like full dimensionality of the feasible payoff set to avoid unraveling. A canonical example is the infinite-horizon bargaining game, where two players alternate offers and accept/reject decisions over a fixed pie, with the game tree branching indefinitely upon rejection and payoffs discounted per round. In Rubinstein's 1982 model, the unique subgame-perfect equilibrium involves immediate agreement on a division that splits the surplus according to players' discount factors, with player 1 receiving the share (1 - \delta)/(1 - \delta^2) = 1/(1 + \delta) when \delta_1 = \delta_2 = \delta. Historically, Shapley's 1953 framework laid the foundation for stochastic games in zero-sum contexts, using a positive stopping probability that plays the role of discounting. Subsequent extensions incorporated explicit discounting for convergence and explored continuous-time analogs, such as differential games, to model real-time dynamics in economics and control theory.
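Rubinstein's split can be recovered numerically. In the alternating-offers game the proposer's share x solves x = 1 - \delta x, because the responder must be offered \delta x, the discounted value of becoming proposer next round. A fixed-point iteration (with an illustrative \delta = 0.9) converges to the closed form:

```python
# Rubinstein alternating-offers split by fixed-point iteration: the
# proposer keeps x where x = 1 - delta * x, offering the responder the
# discounted value delta * x of proposing next round.
delta = 0.9
x = 0.5                       # arbitrary starting guess
for _ in range(500):
    x = 1 - delta * x         # contraction with modulus delta < 1

closed_form = (1 - delta) / (1 - delta**2)   # equals 1 / (1 + delta)
print(round(x, 6), round(closed_form, 6))    # 0.526316 0.526316
```

Because the update is a contraction with modulus \delta < 1, the iteration converges to the unique fixed point 1/(1 + \delta) from any starting guess, mirroring the contraction-mapping argument used for the discounted case generally.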

Solution Concepts

Backward Induction and Subgame Perfection

Backward induction is a solution method for finite extensive-form games with perfect information, involving the recursive elimination of suboptimal actions starting from the terminal nodes and working backward to the root of the game tree. This approach assumes rational players who maximize their payoffs, allowing the determination of optimal strategies by evaluating choices at each decision node based on anticipated future outcomes. In such games, backward induction yields a subgame perfect equilibrium, which is unique whenever no player is ever indifferent between continuation payoffs, as it resolves the game by selecting best responses sequentially from the end. The algorithm proceeds as follows: at each terminal history h, assign the payoff vector u(h). For a non-terminal history h where player i moves, the continuation value v(h) is the payoff vector such that v(h)_i = \max_{a \in A(h)} v(ha)_i, with the maximizing action selected, and the other components determined by subsequent optimal play. The formal maximization for player i's value is v_i(h) = \max_{a \in A(h)} v_i(ha). Optimal actions are then those achieving this maximum at each node. This process propagates backward through the tree, ensuring that strategies are optimal conditional on reaching any point; the resulting strategy profile forms a Nash equilibrium of the whole game. Subgame perfection extends to general extensive-form games, including those with imperfect information, by requiring that the strategy profile induces a Nash equilibrium in every subgame. A subgame is a subtree rooted at a singleton decision node whose information sets remain intact (each information set lies entirely inside or entirely outside the subtree), and the restriction of strategies to this subtree must be mutually best responses. Introduced by Reinhard Selten, this refinement eliminates non-credible threats or promises that might sustain Nash equilibria in the overall game but fail in isolated subgames. In perfect-information games, subgame perfect equilibria coincide exactly with those found via backward induction.
A prominent example illustrating backward induction and its counterintuitive implications is the centipede game, introduced by Robert Rosenthal (1981). In this two-player game, players alternate turns over multiple rounds, with each able to "continue" (extending the game and potentially increasing joint payoffs) or "terminate" (taking a larger immediate share while leaving the opponent with less). Backward induction predicts immediate termination by the first player, as rationality unravels from the end: at the final node, the last player takes the larger share; working backward, each prior player prefers to terminate rather than risk a worse outcome. This yields payoffs of (2,0) for a simple two-move version, despite continuation leading to (3,3), highlighting a paradox where rational play ends the game prematurely, contrary to intuition. Despite its power, backward induction has limitations beyond finite perfect-information settings. In games with imperfect information, where players cannot distinguish certain histories, the method cannot be directly applied, as non-singleton information sets prevent unique identification of the node reached, necessitating refinements like subgame perfection. Similarly, in infinite-horizon or infinite-action games, convergence issues arise without additional assumptions, such as discounting or continuity of payoffs, to ensure well-defined limits. These constraints underscore the need for generalized concepts to handle broader classes of extensive-form games.
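The v_i(h) recursion above can be written almost verbatim over histories. The sketch below applies it to the entry deterrence game from earlier, with the player function P, action correspondence A, and terminal payoffs u given as plain dictionaries keyed by action tuples.

```python
# The v(h) recursion over histories for the entry deterrence game:
# P maps nonterminal histories to the mover, A to available actions,
# and u gives payoff vectors at terminal histories.
P = {(): 1, ("Enter",): 2}
A = {(): ["Enter", "Stay Out"], ("Enter",): ["Accommodate", "Fight"]}
u = {("Stay Out",): (0, 2),
     ("Enter", "Accommodate"): (1, 1),
     ("Enter", "Fight"): (-1, 0)}

def v(h):
    """Continuation value: payoff vector under optimal play from history h."""
    if h in u:                         # terminal history: payoffs are fixed
        return u[h]
    i = P[h] - 1                       # index of the mover's payoff component
    # v_i(h) = max over a in A(h) of v_i(ha); return the maximizing vector.
    return max((v(h + (a,)) for a in A[h]), key=lambda payoff: payoff[i])

print(v(()))   # (1, 1): the entrant enters and the incumbent accommodates
```

The recursion first resolves the incumbent's node in favor of Accommodate, which makes the Fight threat non-credible, and the entrant therefore enters: exactly the subgame perfect outcome that backward induction selects.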

Behavioral Strategies and Equilibrium Refinements

In extensive-form games with imperfect information, behavioral strategies provide a compact way to represent randomization directly at information sets rather than across the entire strategy space. A behavioral strategy for player i is a function \sigma_i: \mathcal{I}_i \to \Delta(A(I)), where \mathcal{I}_i denotes the set of player i's information sets, A(I) is the set of actions available at information set I \in \mathcal{I}_i, and \Delta(A(I)) is the set of probability distributions over those actions. This formulation contrasts with pure strategies, which specify a single action at each decision node, or mixed strategies, which assign probabilities to complete plans of action across all nodes; behavioral strategies allow independent randomization at each information set, capturing realistic decision-making under uncertainty without enumerating exponentially many pure strategy mixtures. In finite extensive-form games with perfect recall—where players remember all prior actions they have taken and everything they previously knew—behavioral strategies are fully equivalent to mixed strategies in terms of induced outcome distributions. Kuhn's theorem establishes that every mixed strategy can be replicated by a behavioral strategy that yields identical probabilities over terminal histories, and conversely, every behavioral strategy corresponds to a mixed strategy with the same outcomes. This equivalence simplifies analysis, as behavioral strategies reduce the dimensionality of the strategy space from mixtures over complete plans to independent distributions at each information set, facilitating computation in large games. To refine Nash equilibria in these settings, sequential equilibrium imposes sequential rationality and belief consistency.
A sequential equilibrium consists of a behavioral strategy profile \sigma and a system of beliefs \mu over histories (or nodes) such that: (i) for every information set I, the strategy \sigma_i maximizes player i's expected payoff given beliefs \mu at I and strategies elsewhere; and (ii) beliefs \mu are derived from \sigma via Bayes' rule whenever possible, with arbitrary but consistent extensions otherwise. This refinement ensures that players optimize not just overall but at every information set, addressing implausible equilibria where off-path behaviors are unsupported. Trembling-hand perfect equilibrium further strengthens this by requiring robustness to small perturbations: an equilibrium is trembling-hand perfect if it is the limit (as \epsilon \to 0) of equilibria in perturbed games where players independently "tremble" and play each action at an information set with positive probability at least \epsilon. Introduced by Selten, this concept eliminates equilibria sensitive to errors, promoting stability in dynamic settings. A classic illustration arises in simplified poker games, where imperfect information about opponents' cards creates information sets grouping multiple nodes. For instance, consider a two-player poker variant where Player 1 receives a high or low card (equally likely) and decides to bet or check, while Player 2, observing the bet but not the card, responds with call or fold. Player 1's behavioral strategy might specify a 30% probability of betting (bluffing) at the information set with the low card to induce folds, and 80% betting with the high card, while Player 2's strategy assigns a 40% call probability at the betting information set; these probabilities directly model bluffing and calling frequencies without expanding to full mixed strategies over card realizations. 
The expected payoff under a behavioral strategy profile \sigma in an extensive-form game is given by E[u_i(\sigma)] = \sum_{h \in Z} \pi(h \mid \sigma) \, u_i(\tau(h)), where Z is the set of terminal histories, \pi(h \mid \sigma) is the probability of reaching history h induced by \sigma and chance moves, and u_i(\tau(h)) is player i's payoff at terminal history h. Equilibrium refinements like sequential and trembling-hand perfection require that strategies maximize this expectation sequentially, conditional on beliefs at each information set.
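This formula can be evaluated directly for the simplified poker example: each reach probability \pi(h \mid \sigma) multiplies the chance move with the behavioral probabilities along the history. The terminal payoffs below (a one-unit ante and a one-unit bet) are illustrative assumptions, not part of the original example; the betting and calling frequencies follow the text.

```python
# E[u_1] = sum over terminal histories of reach probability times payoff.
# Behavioral strategies from the text: P1 bets 0.8 with the high card and
# 0.3 (a bluff) with the low card; P2 calls a bet with probability 0.4.
p_high = 0.5
bet = {"high": 0.8, "low": 0.3}
call = 0.4

u1 = {  # (card, P1 action, P2 response) -> Player 1's payoff (assumed stakes)
    ("high", "check", None): 1,  ("low", "check", None): -1,
    ("high", "bet", "fold"): 1,  ("low", "bet", "fold"): 1,
    ("high", "bet", "call"): 2,  ("low", "bet", "call"): -2,
}

def reach(card, action, response):
    """pi(h | sigma): product of the chance and behavioral probabilities."""
    p = p_high if card == "high" else 1 - p_high
    if action == "check":
        return p * (1 - bet[card])
    return p * bet[card] * (call if response == "call" else 1 - call)

expected = sum(reach(*h) * payoff for h, payoff in u1.items())
print(round(expected, 3))   # 0.28
```

The six reach probabilities sum to one, as they must for a probability distribution over terminal histories, and the bluffing frequency enters the expectation only through the two low-card betting branches.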

    Feb 6, 2022 · An example is shown in Figure 3. Figure 3: A (rather weird) poker game where P1 is dealt Ace or King with equal probability. “r,” “f,” and “c” ...