Cooperative game theory
Cooperative game theory is a branch of game theory that models strategic interactions among rational agents who can form binding coalitions to achieve collective outcomes, with a primary focus on the fair allocation of the resulting payoffs or costs among coalition members.[1] Unlike non-cooperative game theory, which assumes no enforceable contracts and emphasizes individual strategies, cooperative game theory assumes binding agreements that enable coalition formation and group stability.[1] The foundational representation of cooperative games is the characteristic function form, which assigns a value v(S) to every possible coalition S of players, indicating the total utility or worth that coalition can generate on its own, with v(\emptyset) = 0.[2] Games are broadly classified into transferable utility (TU) games, where the coalition's payoff can be arbitrarily divided among members (e.g., as money), and non-transferable utility (NTU) games, where payoffs are specified as sets of feasible utility vectors without free transferability.[2] Key assumptions include the absence of externalities (coalition outcomes do not affect others' utilities) and the possibility of side payments in TU settings.[1] Central to cooperative game theory are solution concepts that identify stable or equitable payoff distributions for the grand coalition (all players). 
The core comprises allocations where the total payoff equals v(N) and no subcoalition S receives less than v(S), ensuring no group has an incentive to deviate; it is non-empty precisely for balanced games, per the Bondareva-Shapley theorem.[1] The Shapley value provides a unique allocation by averaging each player's marginal contribution across all coalition orderings, satisfying axioms of efficiency, symmetry, a zero payoff for dummy players, and additivity.[1] Other concepts include the nucleolus, which lexicographically minimizes the maximum excess (dissatisfaction) of coalitions, and bargaining sets, in which every objection by a coalition can be met with a counter-objection.[1] Cooperative game theory originated in the 1944 seminal work Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern, which laid the groundwork for analyzing n-person games and coalition stability.[3] It has since been extended by contributions like the Shapley value (1953) and applied across fields, including economics for cost-sharing and market design, political science for voting and alliance formation, operations research for resource allocation, and computer science for network cooperation and machine learning interpretability.[4][5]
Fundamentals
Mathematical Definition
In cooperative game theory, a game is formally defined as a pair (N, v), where N is a finite set of players, typically denoted as N = \{1, 2, \dots, n\} for some positive integer n, and v is the characteristic function that assigns to each possible coalition of players a real-valued worth representing the total payoff or value that coalition can achieve on its own. This structure abstracts the strategic interactions among players by focusing on the collective outcomes of subsets, assuming that players can form binding agreements to cooperate without external enforcement issues being modeled explicitly. The theory primarily considers transferable utility (TU) games, in which the worth v(S) for a coalition S \subseteq N is assumed to be divisible in any manner among the members of S, allowing for flexible sharing of gains through side payments. The characteristic function is mathematically specified as v: 2^N \to \mathbb{R}, where 2^N denotes the power set of N (the collection of all subsets of N), and it satisfies the normalization condition v(\emptyset) = 0, indicating that the empty coalition generates no value.[1] This setup emphasizes coalition formation as the central mechanism, with the worth v(S) capturing the maximum joint utility attainable by S independently of the remaining players.[6] A simple illustrative example is the three-player majority game with N = \{1, 2, 3\}, where v(\{1\}) = v(\{2\}) = v(\{3\}) = 0, v(\{1,2\}) = v(\{1,3\}) = v(\{2,3\}) = 1, and v(\{1,2,3\}) = 1; here, any pair of players can generate a unit of value (e.g., by achieving a majority to produce output), but no single player can, and the full coalition yields the same as a pair.[1]
Characteristic Function
In cooperative game theory, the characteristic function v plays a central role by assigning to each possible coalition S \subseteq N, where N is the finite set of players, a value v(S) that represents the maximum total payoff the members of S can jointly achieve through cooperation, assuming the coalition acts as a unified entity while outsiders may act adversely to minimize this payoff. This value encapsulates the coalition's standalone bargaining power or productive capacity, regardless of how the remaining players N \setminus S organize themselves.[7] Formally, in transferable utility (TU) games, v: 2^N \to \mathbb{R} with v(\emptyset) = 0, where the payoff can be freely redistributed among coalition members via side payments. The interpretation of v(S) underscores the assumption that coalition S operates cohesively as a single decision-maker, optimizing its collective outcome against potential opposition from non-members, which simplifies the analysis of coalition formation and stability in n-person games.[7] This formulation, introduced by von Neumann and Morgenstern, allows the game's strategic structure to be reduced to coalition worths, facilitating the study of cooperative solutions without specifying full strategic interactions. Cooperative games extend beyond TU settings to non-transferable utility (NTU) games, where payoffs cannot be arbitrarily reallocated due to differing individual utilities or indivisibilities; here, the characteristic function v assigns to each S a set v(S) \subseteq \mathbb{R}^{|S|} of feasible payoff vectors for the players in S, often assumed to be comprehensive (closed under coordinatewise reduction) and bounded above. In NTU games, v(S) thus captures the set of all possible utility allocations achievable by S acting together, reflecting the coalition's joint opportunities without assuming a common numeraire.
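A TU characteristic function is straightforward to represent computationally as a mapping from coalitions to worths. The sketch below is illustrative only (the helper name `majority_value` is not from the literature); it tabulates v over the power set for the three-player majority game described earlier.

```python
from itertools import combinations

def majority_value(coalition):
    """Worth of a coalition in the three-player majority game:
    any coalition with at least two players is worth 1, others 0."""
    return 1 if len(coalition) >= 2 else 0

# Tabulate v(S) for every S in the power set 2^N of N = {1, 2, 3},
# using frozensets as dictionary keys so coalitions are hashable.
players = (1, 2, 3)
v = {frozenset(c): majority_value(c)
     for r in range(len(players) + 1)
     for c in combinations(players, r)}

assert v[frozenset()] == 0          # normalization: v(empty) = 0
assert v[frozenset({1, 2})] == 1    # any pair can win
assert v[frozenset(players)] == 1   # the grand coalition
```

Storing v exhaustively is only feasible for small n, since the power set has 2^n entries; larger games usually evaluate v(S) on demand instead.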
For illustration, consider a simple two-player bargaining game where players 1 and 2 must divide a unit resource; in the NTU formulation, v(\{1,2\}) = \{ (x,y) \in \mathbb{R}^2 \mid x \geq 0, y \geq 0, x + y \leq 1 \}, representing all non-negative divisions summing to at most 1, while singletons have v(\{i\}) = \{0\}. This example highlights how NTU structures preserve the heterogeneity of player preferences, generalizing TU games where such sets would collapse to scalar maxima like v(\{1,2\}) = 1.
Core Concepts
Coalitions and Imputations
In cooperative game theory, the set of players is denoted by N = \{1, 2, \dots, n\}. A coalition S \subseteq N is a subset of players who choose to cooperate, pooling their efforts to achieve a collective outcome that may exceed what they could obtain individually.[3] The grand coalition N represents the scenario in which all players join forces, often serving as the benchmark for analyzing full cooperation in the game.[6] The value attainable by any coalition S is given by the characteristic function v(S), which quantifies the coalition's productive potential. An imputation provides a feasible way to distribute the grand coalition's value v(N) among all players. Formally, an imputation x = (x_1, x_2, \dots, x_n) \in \mathbb{R}^N is a payoff vector satisfying group rationality, ensuring the total payoff equals the grand coalition's value (\sum_{i \in N} x_i = v(N)), and individual rationality, guaranteeing each player receives at least as much as they would alone (x_i \geq v(\{i\}) for all i \in N).[6] These conditions ensure the distribution is both collectively efficient and personally acceptable to each participant.[3] The set of all imputations for a given game is mathematically defined as A = \left\{ x \in \mathbb{R}^N \;\middle|\; \sum_{i \in N} x_i = v(N),\; x_i \geq v(\{i\}) \;\forall\; i \in N \right\}. [6] For instance, consider a three-player game where v(N) = 1 and v(\{i\}) = 0 for each player i; the imputations then consist of all nonnegative payoff vectors (x_1, x_2, x_3) such that x_1 + x_2 + x_3 = 1.[8]
Subgames
In cooperative game theory, a subgame is formed by restricting the original game to a subset of players, known as a coalition S \subseteq N, where N is the full player set. This subgame, denoted (S, v_S), retains the structure of a cooperative game but focuses exclusively on the players in S, with the characteristic function v_S defined such that v_S(T) = v(T) for every subset T \subseteq S. This restriction preserves the original payoffs achievable by coalitions within S, effectively excluding the influence of players outside S.[9] Subgames facilitate the decomposition of complex cooperative games into manageable components, enabling independent analysis of coalition dynamics within smaller groups. By isolating subsets of players, researchers can examine how internal cooperation functions without interference from the broader player set, which is particularly useful for understanding incremental coalition formation processes. This approach highlights the self-contained nature of subgroup interactions in larger strategic environments.[1] For instance, in a four-player game with N = \{1, 2, 3, 4\} and characteristic function v, the subgame on coalition S = \{1, 2\} is (\{1, 2\}, v_S), where v_S(\{1\}) = v(\{1\}), v_S(\{2\}) = v(\{2\}), and v_S(\{1, 2\}) = v(\{1, 2\}). Here, players 3 and 4 are disregarded, allowing focused study of the potential cooperation between players 1 and 2 based solely on their joint and individual values from the original game.Efficiency and Individual Rationality
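The restriction operation that defines a subgame can be sketched directly; in this minimal example the helper name `subgame` is hypothetical, and the characteristic function (quadratic in coalition size) is chosen only for illustration.

```python
from itertools import combinations

def subgame(v, S):
    """Restrict a TU characteristic function v (a dict keyed by
    frozensets) to coalition S, keeping only the subsets T of S."""
    S = frozenset(S)
    return {T: worth for T, worth in v.items() if T <= S}

# Four-player example: v(T) = |T|**2 (quadratic returns to scale).
N = (1, 2, 3, 4)
v = {frozenset(c): len(c) ** 2
     for r in range(len(N) + 1)
     for c in combinations(N, r)}

# Subgame on S = {1, 2}: players 3 and 4 are disregarded entirely.
v_S = subgame(v, {1, 2})
assert set(v_S) == {frozenset(), frozenset({1}),
                    frozenset({2}), frozenset({1, 2})}
assert v_S[frozenset({1, 2})] == v[frozenset({1, 2})] == 4
```

Because v_S(T) = v(T) for every T \subseteq S, the restriction is a pure filter over the original table; no worths are recomputed.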
In cooperative game theory, the efficiency axiom requires that any payoff distribution, or imputation, fully utilizes the value created by the grand coalition, meaning the sum of payoffs to all players equals the coalition's total value: \sum_{i \in N} x_i = v(N). This condition ensures no resources are left undistributed and no player receives more than the collective output allows, embodying Pareto optimality where no player can gain without another losing. Individual rationality complements efficiency by stipulating that each player's payoff must be at least as much as they could secure independently, formalized as x_i \geq v(\{i\}) for every player i \in N. This axiom safeguards against exploitative allocations, guaranteeing that no participant is worse off by joining the coalition than acting alone, thus promoting voluntary cooperation. Together, efficiency and individual rationality delineate the set of imputations, which consists of all payoff vectors x \in \mathbb{R}^N that satisfy both conditions and are thus feasible and acceptable to all players. Imputations serve as the foundational domain for further solution concepts in cooperative games. For instance, in a symmetric three-player game where v(\{i\}) = 0 for each i and v(N) = 3, efficient imputations that equalize payoffs—such as (1, 1, 1)—distribute the total value evenly while respecting individual rationality.[10] These axioms originated in the seminal work of John von Neumann and Oskar Morgenstern, who introduced them in their 1944 book Theory of Games and Economic Behavior as core principles for analyzing stable coalition outcomes.
Properties
Superadditivity
In cooperative game theory, superadditivity is a key property of the characteristic function v that captures the benefits of coalition merging. A transferable utility (TU) game (N, v) is superadditive if, for any two disjoint coalitions S, T \subseteq N with S \cap T = \emptyset, the value of their union satisfies v(S \cup T) \geq v(S) + v(T).[6] This inequality indicates that the joint coalition can achieve at least as much worth as the separate coalitions, reflecting synergies or economies of scale in cooperation. The property has significant implications for coalition formation and stability. Superadditivity encourages players to build larger coalitions, as merging disjoint groups never reduces total value, often leading to the prediction that the grand coalition N will form in equilibrium.[11] It is also essential for the non-emptiness of the core in certain classes of games, such as stochastic cooperative games, where superadditivity ensures stable allocations exist under specific probabilistic structures.[12] A representative example is a production game where the characteristic function is defined as v(S) = |S|^2 for any coalition S \subseteq N, modeling quadratic returns to scale from group size. To verify superadditivity, consider disjoint coalitions S and T with sizes a = |S| and b = |T|. Then, v(S \cup T) = (a + b)^2 = a^2 + 2ab + b^2 \geq a^2 + b^2 = v(S) + v(T), since 2ab \geq 0. This basic inequality confirms the property holds, illustrating how merging amplifies output beyond additive summation.[13]
Monotonicity
In cooperative game theory, a transferable utility game (N, v) is said to be monotone if the characteristic function v satisfies v(S \cup \{i\}) \geq v(S) for every coalition S \subseteq N and every player i \in N \setminus S.[6] This condition is equivalent to v(S) \leq v(T) whenever S \subseteq T \subseteq N, ensuring that the value of a coalition does not decrease upon enlargement. The monotonicity property implies that the marginal contribution of any player i to a coalition S, given by v(S \cup \{i\}) - v(S), is non-negative, reflecting positive complementarity among players.[6] This feature is common in economic models of production or resource allocation, where additional participants enhance overall efficiency or output without diminishing returns in the basic setup. A representative example arises in cost-sharing games, such as public goods provision, where the coalition value is v(S) = f(|S|) and f is an increasing function capturing total benefits or cost savings that grow with group size; the game is monotone provided f is non-decreasing.[6] Monotonicity represents a weak form of convexity in cooperative games: while a convex game requires supermodularity—i.e., non-decreasing marginal contributions, v(S \cup \{i\}) - v(S) \leq v(T \cup \{i\}) - v(T) for S \subseteq T and i \notin T—monotonicity merely demands these contributions to be non-negative, making it a less restrictive assumption.[14]
Properties of Simple Games
A simple game is defined as a pair (N, W), where N is a finite set of players and W \subseteq 2^N \setminus \{\emptyset\} is the collection of winning coalitions, with the grand coalition N \in W, and typically no singleton \{i\} \in W for any i \in N (excluding dictators).[15] This structure abstracts situations where coalitions either succeed or fail without intermediate payoffs, such as in voting systems or committee decisions.[8] A fundamental property of simple games is monotonicity, which states that if S \in W and S \subseteq T \subseteq N, then T \in W.[15] This ensures that enlarging a winning coalition cannot make it losing, reflecting the non-decreasing nature of coalition power in such models. Minimal winning coalitions form another key property, defined as the sets S \in W such that no proper subset of S is in W.[15] These minimal sets capture the irreducible combinations needed for success and serve as the foundational elements for analyzing the game's structure, often used to represent the game compactly via their blocker or dual.[16] A representative example is the majority voting game, where |N| is odd and W consists of all coalitions S \subseteq N with |S| \geq (|N| + 1)/2.[15] Here, minimal winning coalitions are precisely the sets of size (|N| + 1)/2, illustrating how simple games model parliamentary or electoral procedures.[16]
Connections
Relation to Non-Cooperative Game Theory
Cooperative game theory and non-cooperative game theory represent two foundational branches of game theory, distinguished primarily by their assumptions about player interactions and enforceable commitments. In cooperative game theory, players are assumed to form coalitions with binding agreements that enforce cooperation, allowing the analysis to focus on the distribution of joint gains among coalition members rather than individual strategic choices. This approach models scenarios where external mechanisms, such as contracts or social norms, ensure that agreements are upheld, emphasizing coalition values and fair divisions. In contrast, non-cooperative game theory examines situations where players cannot make enforceable commitments and must anticipate others' actions through individual strategies, leading to solution concepts like the Nash equilibrium, where no player benefits from unilaterally deviating given others' strategies.[17] These differences highlight cooperative theory's normative focus on achievable joint outcomes versus non-cooperative theory's positive analysis of strategic incentives without binding enforcement. Cooperative game theory was formalized in the 1944 book Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern, building on von Neumann's 1928 minimax theorem for two-person zero-sum games. Non-cooperative game theory was advanced by John Nash in his 1950 paper on equilibrium points in n-person games, which established the Nash equilibrium as a key tool for non-binding strategic interactions.[3] Bridges between the two theories exist through extensions of non-cooperative models that incorporate limited communication or correlation, such as correlated equilibria, where players can coordinate actions via external signals without binding contracts, potentially replicating cooperative payoffs. 
For instance, in the Prisoner's Dilemma—a canonical example—non-cooperative analysis predicts mutual defection as the unique Nash equilibrium, yielding suboptimal payoffs for both players, whereas cooperative theory permits binding agreements that enforce mutual cooperation and achieve the Pareto-superior outcome. This contrast illustrates how cooperative assumptions enable outcomes unattainable in purely non-cooperative settings, though real-world applications often blend elements of both to model partial enforceability.
Links to Combinatorial Optimization
Cooperative game theory exhibits strong connections to combinatorial optimization through the structure of characteristic functions and solution concepts like the core. In combinatorial optimization games, the value v(S) of a coalition S is defined as the optimal value of a combinatorial optimization problem restricted to the players in S, such as maximum matching or minimum covering in a graph induced by S.[18] This formulation allows the game's payoffs to be derived directly from optimization objectives, bridging the two fields. The core of such a game, which consists of imputations x satisfying \sum_{i \in S} x_i \geq v(S) for all coalitions S \subseteq N and \sum_{i \in N} x_i = v(N), can be interpreted as the feasible region of a linear program (LP) where the inequalities correspond to coalition constraints and the characteristic values v(S) serve as right-hand sides. This LP perspective highlights similarities, as verifying core membership or emptiness reduces to solving LPs augmented with oracles for evaluating v(S), much like dual optimization in combinatorial problems.[19] Despite these analogies, key differences persist. Combinatorial optimization primarily seeks to maximize or minimize an objective over feasible actions, focusing on efficiency, whereas cooperative games emphasize equitable payoff distribution among agents while ensuring stability against deviations. In optimization, the emphasis is on algorithmic solvability for the grand coalition's value v(N), but in games, the challenge lies in imputing shares that respect all subcoalition values, often requiring separation over exponentially many constraints. A prominent example is the assignment game, where two sets of agents (e.g., buyers and sellers) form bilateral matches with given profits, and v(S) is the maximum-weight bipartite matching value for coalition S. 
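Assuming nonnegative match profits, the coalition value of an assignment game can be computed by brute force over matchings. The sketch below is illustrative (the function name and the profit data are invented for the example) and is exponential in coalition size, unlike the polynomial-time matching algorithms used in practice.

```python
from itertools import permutations

def assignment_value(buyers, sellers, profit):
    """v(S) for an assignment game: the maximum total profit over all
    bipartite matchings between the buyers and sellers present in S.
    `profit` maps (buyer, seller) pairs to the surplus of that match;
    unmatched agents and missing pairs contribute 0 (profits >= 0)."""
    small, large = sorted((list(buyers), list(sellers)), key=len)
    best = 0
    for perm in permutations(large, len(small)):
        total = 0
        for a, b in zip(small, perm):
            pair = (a, b) if (a, b) in profit else (b, a)
            total += profit.get(pair, 0)
        best = max(best, total)
    return best

# Hypothetical two-buyer, two-seller market.
profit = {("b1", "s1"): 5, ("b1", "s2"): 4,
          ("b2", "s1"): 3, ("b2", "s2"): 1}
assert assignment_value(["b1", "b2"], ["s1", "s2"], profit) == 7  # b1-s2, b2-s1
assert assignment_value(["b1"], ["s1"], profit) == 5
assert assignment_value(["b1"], [], profit) == 0  # no partner, no surplus
```

Evaluating v on every coalition this way yields the full characteristic function, from which the core inequalities \sum_{i \in S} x_i \geq v(S) can be checked directly.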
In the assignment game, the core allocations correspond precisely to the competitive equilibria of the associated market, where payoffs represent equilibrium prices and rents, ensuring no blocking pairs. Post-2000 developments have advanced computational tractability for cores in combinatorial games. For instance, integer programming formulations have enabled improved algorithms for checking core nonemptiness and finding core elements in classes like matching and covering games, leveraging the underlying optimization structure.[18] In specific subclasses, such as b-matching games on bipartite graphs, polynomial-time separation oracles exist for the core LP, allowing efficient computation via the ellipsoid method or direct combinatorial algorithms.[20] These advances underscore how optimization techniques facilitate solving cooperative solution concepts, particularly when the game's values admit efficient evaluation.
Solution Concepts
Stable Set
The stable set, introduced by John von Neumann and Oskar Morgenstern as a solution concept for cooperative games, identifies a collection of imputations that cannot be internally undermined and that collectively blocks all alternative imputations. This set-valued solution aims to capture stable social outcomes in which coalitions cannot improve upon the selected allocations without facing counter-blockades from within the set. Imputations, as defined in the context of coalitions, form the universe from which stable sets are drawn, ensuring efficiency and individual rationality. Formally, for a transferable utility game (N, v), a stable set \phi \subseteq A—where A denotes the set of imputations—satisfies internal stability and external stability. Internal stability requires that for all x, y \in \phi, x does not dominate y. External stability requires that for every y \notin \phi, there exists some x \in \phi such that x dominates y.[21] Domination is defined via coalitions: an imputation x dominates y if there exists a nonempty coalition S \subseteq N such that \sum_{i \in S} x_i \leq v(S) (feasibility for S under x) and x_i > y_i for all i \in S (strict improvement for every member of S).[22] This ensures the coalition S can credibly enforce x over y by reallocating its attainable worth internally while making all of its members strictly better off.[22] Stable sets do not always exist; for instance, William F. Lucas constructed a 10-player game in which no stable set satisfies both stability conditions.
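For a small game, the domination relation is easy to check computationally. The sketch below (function names are illustrative) verifies internal stability for the symmetric candidate set of the three-player majority game and spot-checks external stability against one outside imputation; a full external-stability proof would have to cover the whole continuum of imputations, which code cannot enumerate.

```python
from itertools import combinations

def dominates(x, y, v, players):
    """x dominates y if some nonempty coalition S is effective for x
    (the sum of x_i over S is at most v(S)) and every member of S
    strictly prefers x to y."""
    for r in range(1, len(players) + 1):
        for S in combinations(players, r):
            if (sum(x[i] for i in S) <= v[frozenset(S)]
                    and all(x[i] > y[i] for i in S)):
                return True
    return False

# Three-player majority game: any pair (or the grand coalition) is worth 1.
N = (0, 1, 2)
v = {frozenset(c): (1 if len(c) >= 2 else 0)
     for r in range(len(N) + 1)
     for c in combinations(N, r)}

# Candidate stable set: the three symmetric pair splits.
stable = [(0.5, 0.5, 0.0), (0.5, 0.0, 0.5), (0.0, 0.5, 0.5)]

# Internal stability: no element of the set dominates another.
assert not any(dominates(x, y, v, N)
               for x in stable for y in stable if x != y)

# External stability (spot check): (0.4, 0.4, 0.2) is dominated by a member.
assert any(dominates(x, (0.4, 0.4, 0.2), v, N) for x in stable)
```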
Moreover, when stable sets exist, a game may admit multiple such sets, reflecting different possible standards of behavior.[21] In simple majority voting games, symmetric stable sets consist of allocations in which a minimal winning coalition—such as a pair in a three-player game—shares the value equally while excluding the losing players. For example, in the three-player majority game with v(\{1,2\}) = v(\{1,3\}) = v(\{2,3\}) = v(N) = 1 and v zero otherwise, one stable set is \{(0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)\}, capturing symmetric power distribution among the voters.[23]
Core
In cooperative game theory, the core represents a solution concept that identifies allocations stable against deviations by any coalition of players. For a transferable utility (TU) game defined by a player set N and characteristic function v: 2^N \to \mathbb{R}, the core consists of all imputations—feasible and individually rational payoff vectors x \in \mathbb{R}^N satisfying \sum_{i \in N} x_i = v(N) and x_i \geq v(\{i\}) for all i \in N—such that no coalition can improve its total payoff by acting independently. Formally, the core C(v) is the set C(v) = \left\{ x \in A \ \middle|\ \sum_{i \in S} x_i \geq v(S) \ \forall S \subseteq N \right\}, where A denotes the set of imputations.[24] This definition, introduced by Gillies, emphasizes group rationality by ensuring that the allocation to any coalition S meets or exceeds what S could achieve alone, thereby preventing blocking by deviations.[25] The core possesses key geometric properties: it is a closed and convex set, as it arises from the intersection of half-spaces defined by the coalition inequalities and the efficiency constraint.[1] An empty core signals inherent instability, where no allocation can satisfy all coalition demands simultaneously; for instance, in essential constant-sum games—where v(S) + v(N \setminus S) = v(N) for all S \subseteq N and \sum_{i \in N} v(\{i\}) < v(N)—the core is always empty, as the fixed total payoff cannot accommodate all blocking threats without violation.[10] In market games modeling exchange economies with TU, the core coincides with the set of competitive equilibria under appropriate conditions, such as when utilities are transferable and agents have quasi-concave preferences; here, core allocations represent prices and payoffs where no group of traders can renegotiate for mutual gain outside the market clearing.[26] The Bondareva-Shapley theorem provides a precise condition for non-emptiness: the core C(v) is nonempty if and only if the game is balanced, meaning there exists no collection of
coalitions with weights \{\beta_S\}_{S \subseteq N, S \neq \emptyset} such that 0 \leq \beta_S \leq 1 for all S, \sum_{S \ni i} \beta_S = 1 for all i \in N, and \sum_{S \subseteq N} \beta_S v(S) > v(N). This linear programming duality-based characterization, proven independently by Bondareva and Shapley, highlights that superadditivity—where v(S \cup T) \geq v(S) + v(T) for disjoint S, T—is not by itself sufficient for a nonempty core.[27][28]
Epsilon-Core Variants
In cooperative game theory, the epsilon-core variants relax the strict core concept, allowing for approximate stability in situations where the exact core may be empty. These variants introduce a parameter \epsilon to tolerate bounded deviations from coalition values, making them particularly useful for analyzing games with limited side payments or indivisibilities that lead to empty cores. The strong epsilon-core of a transferable utility game (N, v) with imputation set A is, for a given \epsilon, the set of imputations x \in A such that \sum_{i \in S} x_i \geq v(S) - \epsilon for all coalitions S \subseteq N. The least core represents the tightest such relaxation, defined as the strong epsilon-core for the minimal value \epsilon^* at which the set is non-empty, formally \epsilon^* = \min \{ \epsilon \mid \exists x \in A \text{ with } \sum_{i \in S} x_i \geq v(S) - \epsilon \ \forall S \subseteq N \}. This minimal \epsilon^* quantifies the degree of instability in the game, and the least core itself is always non-empty for any finite cooperative game with a non-empty imputation set, as it can be found by solving a linear program that minimizes the maximum excess over coalitions. As \epsilon approaches 0, the strong epsilon-core contracts to the core when the latter is non-empty, providing a continuity property that links approximate solutions to exact ones. A key property of these variants is their guaranteed existence even in unbalanced games, contrasting with the core's potential emptiness; for instance, in a three-player game where v(\{1,2\}) = v(\{1,3\}) = v(\{2,3\}) = 2, v(\{i\}) = 0, and v(N) = 2, the core is empty due to conflicting coalition demands (summing the pair inequalities implies \sum_{i} x_i \geq 3 > 2 = v(N)), but the least core contains imputations like (2/3, 2/3, 2/3) with \epsilon^* = 2/3. In exchange economies, the least core allocations approximate competitive equilibria, especially in large economies where small \epsilon values capture near-equilibrium outcomes without requiring perfect balance.
Shapley Value
The Shapley value, introduced by Lloyd Shapley in 1953, is a fundamental solution concept in cooperative game theory that fairly allocates the total value generated by a coalition among its members based on their average marginal contributions. It provides a unique imputation for transferable utility games by considering all possible orders in which players might join a coalition, rewarding each player for the incremental value they add at each step. This approach ensures an equitable distribution that respects the game's structure without requiring negotiations over specific coalition formations.[29] Formally, for a transferable utility game (N, v) with player set N and characteristic function v, the Shapley value \phi_i(v) for player i \in N is defined as the average marginal contribution of i across all permutations of the player set: \phi_i(v) = \frac{1}{|N|!} \sum_{S \ni i} (|S|-1)! \cdot (|N| - |S|)! \cdot [v(S) - v(S \setminus \{i\})], where the sum is over all coalitions S \subseteq N containing i, and S \setminus \{i\} denotes the coalition without i. This formula equivalently averages the marginal contributions over all possible orders of coalition formation, weighting each equally.[29] The Shapley value is uniquely characterized by four axioms:
- Efficiency: The sum of the values allocated to all players equals the value of the grand coalition, \sum_{i \in N} \phi_i(v) = v(N). This ensures the total payoff is fully distributed without surplus or deficit.[29]
- Symmetry: If two players i and j make identical marginal contributions to every coalition, then \phi_i(v) = \phi_j(v). This treats indistinguishable players equally.[29]
- Dummy player: If a player i contributes nothing to any coalition, meaning v(S \cup \{i\}) - v(S) = 0 for all S \subseteq N \setminus \{i\}, then \phi_i(v) = 0. This assigns zero value to non-contributors.[29]
- Additivity: For two games (N, v) and (N, w), the value of the sum game (N, v + w) is the sum of the values, \phi_i(v + w) = \phi_i(v) + \phi_i(w) for all i. This allows decomposition of complex games into simpler ones.[29]
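The permutation-based formula above can be implemented directly for small games. The sketch below uses naive enumeration over all |N|! join orders (illustrative rather than efficient); it recovers the symmetric 1/3 split in the three-player majority game and satisfies the efficiency axiom by construction.

```python
from itertools import combinations, permutations
from math import factorial

def shapley_value(players, v):
    """Shapley value by direct enumeration: average each player's
    marginal contribution v(S | {i}) - v(S) over all join orders."""
    phi = {i: 0.0 for i in players}
    for order in permutations(players):
        coalition = frozenset()
        for i in order:
            phi[i] += v[coalition | {i}] - v[coalition]
            coalition |= {i}
    n_fact = factorial(len(players))  # number of orders averaged over
    return {i: contrib / n_fact for i, contrib in phi.items()}

# Three-player majority game: any coalition of two or more is worth 1.
N = (1, 2, 3)
v = {frozenset(c): (1 if len(c) >= 2 else 0)
     for r in range(len(N) + 1)
     for c in combinations(N, r)}

phi = shapley_value(N, v)
assert all(abs(phi[i] - 1/3) < 1e-9 for i in N)        # symmetry
assert abs(sum(phi.values()) - v[frozenset(N)]) < 1e-9  # efficiency
```

Enumeration costs O(|N|!), so practical computation for large games relies on the coalition-weighted formula or on sampling-based approximation instead.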