Game complexity

Game complexity is the study of the computational resources required to determine optimal strategies, winning outcomes, or other decision problems in combinatorial games: two-player, perfect-information games without elements of chance or hidden information. These games, such as chess, checkers, and Go, are analyzed under rules like normal play (where the last player to move wins) or misère play (where the last player to move loses), and their complexity often falls into standard computational classes such as P, NP-complete, PSPACE-complete, or EXPTIME-complete. The field draws on combinatorial game theory (CGT), which provides tools like the Sprague–Grundy theorem for impartial games (where both players have identical moves) and surreal numbers for valuing positions in partizan games (with asymmetric moves).

In CGT, the complexity of a game is typically measured by the time or space needed to solve its decision problem, which can grow exponentially with board size or game length. For impartial games, the Sprague–Grundy theorem assigns a nimber (Grundy number) to each position, reducing the problem to Nim-heap equivalents solvable via XOR operations, though computing these values can still be hard for complex rules. Partizan games require more advanced analysis, often using recursive definitions of game values as {left options | right options}, leading to challenges in automation and evaluation.

Notable results highlight the hardness of many classic games: generalized chess on an n×n board is EXPTIME-complete, meaning it requires exponential time in the worst case, while connection games like Hex are PSPACE-complete, solvable in polynomial space but potentially requiring exponential time. Single-player puzzle variants, such as Sudoku, are often NP-complete: verifying a solution is efficient but finding one is hard. These classifications inform AI development, algorithm design, and theoretical bounds, with ongoing research exploring quantum-inspired variants and approximations for intractable cases.
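The Sprague–Grundy reduction mentioned above can be sketched in a few lines. This is a minimal illustration for Nim under normal play, where the mex (minimum excludant) recursion recovers the heap size as the Grundy number and XOR decides the position:

```python
from functools import lru_cache

# Sketch of Sprague-Grundy values for Nim under normal play. The Grundy
# number of a position is the mex of the Grundy numbers of its options;
# for a Nim heap this works out to the heap size itself, and a multi-heap
# position is lost for the player to move exactly when the XOR is zero.

@lru_cache(maxsize=None)
def grundy_nim_heap(n: int) -> int:
    # Options from a heap of n: remove 1..n stones, leaving 0..n-1.
    reachable = {grundy_nim_heap(k) for k in range(n)}
    g = 0
    while g in reachable:   # mex: smallest non-negative int not reachable
        g += 1
    return g

def nim_losing(heaps) -> bool:
    """True if the player to move loses the Nim position under perfect play."""
    x = 0
    for h in heaps:
        x ^= grundy_nim_heap(h)
    return x == 0
```

For heaps (1, 2, 3) the XOR is 1 ^ 2 ^ 3 = 0, so the position is a loss for the player to move.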

Measures of Game Complexity

State-Space Complexity

State-space complexity refers to the total number of distinct legal positions or configurations that can be reached in a game from its initial state, often denoted |S| for the cardinality of the state space. This measure captures the breadth of possible game states, excluding duplicates and configurations unreachable under the game's rules. The concept was introduced in the context of computer chess by Claude Shannon in his seminal 1950 analysis, where he estimated the state-space complexity to highlight the challenges of computational simulation; Shannon's work laid the foundation for evaluating game scale in artificial intelligence and game analysis.

For board games like chess, state-space complexity is typically calculated through combinatorial enumeration, approximating the product of possible placements for each piece type while subtracting illegal positions (e.g., those violating rules on pawn placement or king exposure). Shannon's chess estimate yielded roughly 10^43 positions, derived from arrangements of up to 32 pieces on 64 squares, adjusted for symmetries and constraints such as castling rights. Exact counts often require sophisticated algorithms to enumerate valid states, as naive products overestimate due to rule-based invalidations.

This metric quantifies the "breadth" of the game world and serves as a prerequisite for assessing search spaces in solvers, where exhaustive exploration becomes infeasible beyond small |S|. A larger state space generally implies a larger game tree, though the latter counts sequential paths rather than unique positions. For rough estimation in a game with average branching factor b and maximum depth d, |S| is sometimes approximated as b^d, but precise enumeration is preferred when computationally viable to avoid under- or overestimation.
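The b^d shortcut is easy to compute in log space, which also shows why it overestimates |S|. The chess figures used here (b = 30 moves, d = 80 plies) are illustrative assumptions, not exact values:

```python
import math

# Rough state-space estimate |S| ~ b^d, computed in log10 so the huge
# numbers stay manageable. b = 30 and d = 80 plies are illustrative
# chess figures, not exact values.
def log10_states(b: float, d: int) -> float:
    return d * math.log10(b)

estimate = log10_states(30, 80)   # about 118, i.e. roughly 10^118
# Compare with Shannon's ~10^43 distinct positions: the b^d figure
# counts move sequences (paths), not unique states, hence the huge gap.
```

The two-orders-of-magnitude-squared discrepancy against 10^43 is exactly the path-versus-state distinction drawn in the text.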

Branching Factor

The branching factor, denoted b, is the average number of legal moves available from a given position in a game, serving as a key metric for the breadth of the search space in game trees. Formally, it can be computed as the total number of edges in the game tree divided by the total number of non-leaf nodes, where edges correspond to legal transitions between positions. This effective branching factor, which counts only valid legal moves under the game's rules, is distinguished from a raw count that considers all conceivable actions without constraints; the former is standard in game analysis because of its relevance to actual play.

For instance, in chess, Claude Shannon estimated the average at approximately 30 legal moves per position, drawing on empirical data from typical games. This figure, sometimes cited as 30–35 to account for variation, underscores the rapid expansion of possibilities in complex games. Branching factors also vary across a game's phases: in chess the value typically rises in the middlegame, when pieces are most mobile, and falls toward the endgame as fewer moves remain viable. These variations drive the exponential growth of the game tree, where the number of positions at depth d scales roughly as b^d, amplifying the cost of deeper searches.

The branching factor is a critical parameter in algorithms like minimax search, whose time complexity is O(b^d) for search depth d, highlighting the computational effort required to evaluate moves. The metric, introduced in early AI literature during the 1950s, enabled foundational analyses of game solvability. However, analyses often assume a uniform branching factor across the tree, an idealization that rarely holds in practice: actual values fluctuate by position, game stage, and rules, potentially leading to over- or underestimates of search effort.
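The edges-per-internal-node definition can be applied directly to a small explicit tree. The toy tree below is an illustrative assumption, not a real game:

```python
# Average branching factor of an explicit game tree: total edges divided
# by the number of internal (non-leaf) nodes. The tree here is a toy
# example mapping each node to its list of children.
tree = {
    "root": ["a", "b", "c"],
    "a": ["a1", "a2"],
    "b": ["b1"],
    "c": [],                       # leaf
    "a1": [], "a2": [], "b1": [],  # leaves
}

def average_branching_factor(tree) -> float:
    edges = sum(len(children) for children in tree.values())
    internal = sum(1 for children in tree.values() if children)
    return edges / internal

# 6 edges over 3 internal nodes: average branching factor 2.0
```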

Game-Tree Size

In game theory and artificial intelligence, the game-tree size is the total number of nodes in the full game tree, which encompasses all possible sequences of moves from the initial state to terminal states in a finite, perfect-information game. This measure captures the exhaustive structure of decision paths: each node represents a position reached by a particular sequence of moves, and the tree branches according to the legal actions available to the players. For finite games without cycles the game tree is acyclic and its size is exact; in practice it is usually approximated, for example by b^d for the number of leaf nodes, where b is the average branching factor and d is the maximum depth (or average game length in plies).

Unlike state-space complexity, which counts distinct positions regardless of how they are reached, game-tree size accounts for every path through the tree, including duplicates arising from transpositions: situations where different move sequences lead to the same board position. The distinction arises because the game tree models the sequential nature of play, preserving move order, whereas the state space abstracts away path history to focus on positional configurations. As a result, game-tree size can be exponentially larger than the state space, emphasizing the volume of possible playouts rather than the number of distinct configurations.

The size of the game tree can be computed exactly for small games but is typically approximated for larger ones assuming a uniform branching factor. For a uniform tree with branching factor b and depth d, the total number of nodes is given by the geometric series

\sum_{k=0}^{d} b^k = \frac{b^{d+1} - 1}{b - 1}

This formula gives the full tree volume, internal and leaf nodes included; the leaf nodes alone, representing complete games, number approximately b^d. A seminal example is chess, where Claude Shannon estimated the number of possible games at approximately 10^120, known as the Shannon number, based on an average branching factor of about 30 and a typical game length of 40 moves per player.

This approximation connects directly to the branching factor and depth, as higher b or d dramatically increases the size. In theoretical analyses of perfect-information games, game-tree size serves as a key proxy for the "volume" of decision paths, quantifying the cost of the exhaustive search required for perfect play and highlighting the infeasibility of brute-force solving for complex games. It underscores the challenge facing adversarial search algorithms like minimax, for which exploring the full tree is computationally prohibitive. The concept gained prominence in AI research during the 1950s and 1960s, as researchers evaluated the feasibility of game-solving programs amid growing computational power, influencing the development of pruning techniques and heuristic search to manage this vast search space.
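The geometric-series node count can be checked against direct summation; a small sketch with arbitrary b and d:

```python
# Node count of a uniform tree with branching factor b and depth d:
# sum_{k=0}^{d} b^k = (b^(d+1) - 1) / (b - 1), checked against a
# direct summation over the levels.
def tree_nodes_closed_form(b: int, d: int) -> int:
    return (b ** (d + 1) - 1) // (b - 1)

def tree_nodes_direct(b: int, d: int) -> int:
    return sum(b ** k for k in range(d + 1))

# e.g. b=3, d=4: 1 + 3 + 9 + 27 + 81 = 121 nodes, of which 3^4 = 81
# are leaves (complete games).
```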

Computational Complexity

Computational complexity in game analysis refers to the computational resources, specifically time and space, required to determine an optimal strategy for a game, including computing the game's value under perfect play and the corresponding moves from any position. This analysis classifies the problem of solving a game within standard complexity classes, such as PSPACE or EXPTIME, relative to the input size (e.g., board dimensions). For trivial games with a fixed, small number of positions, solvable by exhaustive enumeration, the complexity is O(1). In contrast, many finite, perfect-information games without chance elements are EXPTIME-complete, requiring exponential time in the worst case relative to the input size. A landmark classification is the proof that generalized chess on an n \times n board is EXPTIME-complete, implying that computing a perfect strategy demands time at least exponential in n.

Retrograde analysis, a backward-induction method commonly used to solve endgames by propagating values from terminal positions, has time complexity O(|S| \cdot b), where |S| is the number of reachable states and b the average branching factor; since |S| grows exponentially with board size, the overall complexity remains exponential. Space complexity in such analyses is typically O(|S|), tied directly to storing a value for each state, though optimizations like bit-packing can reduce the constant factor. Because this space requirement scales with the state-space size, memory is a central challenge in large games.

Key factors influencing practical complexity include the choice of search strategy and pruning techniques. Depth-first algorithms, such as minimax, explore the game tree recursively and achieve a time complexity of O(b^d), where b is the effective branching factor and d the maximum depth to terminal positions; this contrasts with breadth-first search, which demands O(b^d) space to store all nodes at the frontier, rendering it infeasible for deep trees.

Transposition tables address redundancies by hashing positions to store and reuse previously computed values, potentially reducing both time and space by avoiding recomputation of equivalent subtrees, with space usage bounded by the table size (often a fraction of |S| via hashing). These complexities explain why most non-trivial games, including chess, remain unsolved despite advances in hardware: exhaustive evaluation of the game tree would require on the order of 10^120 operations, exceeding feasible computational limits even with optimizations like alpha-beta pruning. Parallel computing has provided practical speedups, enabling distributed evaluation of subtrees across multiple processors and reducing wall-clock time for partial solutions, as demonstrated in scalable algorithms for game-tree search. However, such parallelism yields at most linear speedups in the number of processors and does not alter the fundamental lower bounds for classes like EXPTIME.
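The backward-induction idea behind retrograde analysis can be sketched on a toy game. The subtraction game {1, 2, 3} used here is an illustrative stand-in for endgame databases: values propagate from the terminal position n = 0 (a loss for the player to move) upward:

```python
# Retrograde-style solve of the subtraction game {1,2,3}: with n tokens
# a move removes 1-3 of them, and the player who takes the last token
# wins. Outcomes are propagated upward from the terminal position n = 0.
def solve_subtraction_game(max_n: int):
    win = [False] * (max_n + 1)   # win[n]: player to move wins with n tokens
    for n in range(1, max_n + 1):
        # n is a win iff some move reaches a position lost for the opponent
        win[n] = any(not win[n - take] for take in (1, 2, 3) if take <= n)
    return win

values = solve_subtraction_game(20)
# The losing positions turn out to be exactly the multiples of 4.
```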

Decision Tree Measures

Decision Complexity

Decision complexity measures the minimal computational effort required to determine the optimal outcome from the initial position of a game under perfect play by both players. It is defined as the number of leaf nodes in the smallest proof tree that establishes the game-theoretic value (win, loss, or draw) of the starting position, representing only the evaluations that remain essential after optimal pruning. This metric is substantially smaller than the full game-tree size, as it eliminates branches that do not influence the final decision. The concept was introduced in the early 1990s to separate the resources needed for optimal play from those needed for exhaustive exploration, highlighting how strategic insight reduces search demands in two-player zero-sum games.

Pruning in this tree relies on the game's rules, such as terminal conditions for wins, losses, and draws, to cut off subtrees whose possible outcomes cannot improve on the current best, keeping the focus on decision-relevant paths. In practice, alpha-beta pruning approximates this structure by maintaining lower (alpha) and upper (beta) bounds during search and dynamically discarding branches that cannot affect the result. Calculating decision complexity exactly requires constructing the minimal proof tree, often via backward-induction methods such as retrograde analysis, which evaluates positions from the terminals backward in small games such as Connect Four or Awari. For larger games, approximations assume perfect move ordering in alpha-beta search, yielding a node count of roughly b^{d/2}, where b is the average branching factor and d the effective depth to a decision; this provides a theoretical lower bound on the evaluations needed.

In applications, decision complexity underpins the efficiency of game-playing agents, where static evaluation functions approximate leaf values in the pruned tree to enable deeper searches within computational limits, balancing accuracy against resource constraints in algorithms like minimax with alpha-beta pruning. The measure informs the scalability of perfect-information game solvers and guides algorithmic development for complex domains.

Game-Tree Complexity

Game-tree complexity measures the scale of the search space under the assumption that both players pursue optimal strategies, focusing on the effective number of nodes in the search tree rather than all possible move sequences. The concept arises in two-player zero-sum games, where the minimax algorithm evaluates positions by maximizing one player's score while assuming the opponent minimizes it, effectively discarding irrelevant branches when determining a position's value. The theoretical foundation is John von Neumann's minimax theorem, which proves that every finite two-player zero-sum game with perfect information has a value and optimal strategies for both players, justifying the construction of such decision trees.

In a naive search without optimizations, the game-tree size grows exponentially as b^d, where b is the average branching factor and d is the depth in plies (half-moves). Alpha-beta pruning, which eliminates branches that cannot influence the final decision, significantly reduces this under good move ordering: with perfect ordering, the number of leaf nodes examined is approximately b^{\lceil d/2 \rceil} + b^{\lfloor d/2 \rfloor} - 1, and with good but imperfect ordering the effective node count is bounded by roughly O(b^{3d/4}), a substantial reduction compared to the unpruned tree that still captures optimal play. This measure is particularly suited to games like chess, where Claude Shannon estimated the unpruned game-tree size at approximately 10^120 for a full game; pruning is what makes deeper searches feasible in practice. The approach inherently targets two-player zero-sum scenarios with perfect information, where each decision anticipates the opponent's best response. Transposition tables further shrink the effective tree by detecting and merging equivalent positions reached via different move sequences, avoiding redundant evaluations.

A limitation is the reliance on perfect information: in imperfect-information games like poker, the complexity escalates because values must be averaged over hidden states and information sets, often requiring methods beyond standard game trees.
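The perfect-ordering leaf count can be checked empirically. This sketch assumes that a uniform tree in which every leaf has the same value behaves like a perfectly ordered tree for alpha-beta with a cutoff at alpha >= beta; under that modeling assumption the visited-leaf count matches the b^⌈d/2⌉ + b^⌊d/2⌋ − 1 bound:

```python
import math

# Count the leaves alpha-beta evaluates on a uniform tree where every
# leaf has the same value (which realizes the perfectly ordered case).
def alphabeta_leaves(b, d, alpha=-math.inf, beta=math.inf, maximizing=True):
    if d == 0:
        return 0, 1                      # (leaf value, leaves evaluated)
    best = -math.inf if maximizing else math.inf
    leaves = 0
    for _ in range(b):
        v, n = alphabeta_leaves(b, d - 1, alpha, beta, not maximizing)
        leaves += n
        if maximizing:
            best = max(best, v)
            alpha = max(alpha, best)
        else:
            best = min(best, v)
            beta = min(beta, best)
        if alpha >= beta:                # cutoff: remaining siblings skipped
            break
    return best, leaves

for b, d in [(2, 2), (2, 3), (2, 4), (3, 2), (3, 3)]:
    _, leaves = alphabeta_leaves(b, d)
    predicted = b ** math.ceil(d / 2) + b ** math.floor(d / 2) - 1
    assert leaves == predicted
```

For b = 3, d = 3 this visits 11 leaves instead of the 27 a full minimax would evaluate.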

Examples of Game Complexity

Tic-Tac-Toe Analysis

Tic-tac-toe, also known as noughts and crosses, is a two-player game played on a 3×3 grid. Players alternate turns placing their symbols, X for the first player and O for the second, in empty cells, aiming to form an unbroken line of three horizontally, vertically, or diagonally. The game ends in a win for the player achieving this, a loss for the opponent, or a draw if the board fills without a winner; it is finite, has perfect information, and involves no element of chance. As a solved game whose outcome under optimal play is predetermined, it provides a foundational example for studying game complexity.

The state-space complexity of tic-tac-toe, the number of distinct legal board positions reachable from the start, totals 5,478. Accounting for symmetries such as rotations and reflections reduces this to 765 unique positions. These figures exclude invalid configurations, such as boards where the symbol counts differ by more than one or where both players have completed lines. The average branching factor, the mean number of legal moves per position over the course of a game, is approximately 5, reflecting the shrinking options as the board fills (starting at 9 and dropping to around 4 by mid-game). The full game-tree size, the total number of possible play sequences (leaf nodes), is 255,168. Decision complexity, which evaluates the reduced tree after applying symmetries and basic pruning to eliminate redundant or irrelevant branches, yields about 26,830 nodes essential for determining optimal decisions.

Retrograde analysis solves tic-tac-toe by starting from terminal positions and propagating outcomes backward: a position is a win if some move leads to a position lost for the opponent, a loss if every move leads to a position won for the opponent, and a draw otherwise. This analysis shows that perfect play forces a draw, as the first player cannot secure a win against optimal responses. A simplified diagram of the search branches the root into three symmetry classes of opening moves (center, corner, edge), each leading to subtrees that converge on drawn lines under best play.

Tic-tac-toe was solved by hand through manual enumeration of its possibilities, confirming the draw outcome long before computational aids. It also became one of the earliest games implemented on a computer: A. S. Douglas's OXO program of 1952 demonstrated automated play on the EDSAC machine, a milestone in computing and game-solving history.
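The position and game counts above can be reproduced by brute-force enumeration in a few lines. This sketch treats a position as a 9-character string and stops play at a win or a full board:

```python
# Exhaustive enumeration of tic-tac-toe, reproducing the counts in the
# text: 5,478 distinct legal positions and 255,168 complete games.
LINES = [(0,1,2), (3,4,5), (6,7,8),      # rows
         (0,3,6), (1,4,7), (2,5,8),      # columns
         (0,4,8), (2,4,6)]               # diagonals

def winner(board):
    for i, j, k in LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

positions, games = set(), 0

def play(board, player):
    global games
    positions.add(board)
    if winner(board) or ' ' not in board:   # terminal: win or full board
        games += 1                          # one complete game (tree leaf)
        return
    other = 'O' if player == 'X' else 'X'
    for i, cell in enumerate(board):
        if cell == ' ':
            play(board[:i] + player + board[i + 1:], other)

play(' ' * 9, 'X')
```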

Complexities of Well-Known Games

The complexities of well-known combinatorial games span orders of magnitude, reflecting differences in board size, rules, and move options that determine computational solvability. State-space complexity quantifies reachable positions, the branching factor the average moves per position, and game-tree size the total possible playouts, with these estimates serving as proxies for the search effort required. The metrics, derived from combinatorial bounds and simulations, reveal why some games like checkers and Othello have been solved while others like chess and Go resist full analysis.
| Game | State-Space Complexity | Branching Factor | Game-Tree Size | Computational Status |
| --- | --- | --- | --- | --- |
| Chess | 10^46 | 35 | 10^120 | Unsolved |
| Checkers | 10^20 | 8 | 10^31 | Solved (draw, 2007) |
| Go | 10^170 | 250 | 10^360 | Unsolved |
| Othello | 10^28 | 10 | 10^58 | Solved (draw, 2023) |
These figures are standard approximations: state-space complexity is often bounded by 3^c (where c is the number of board cells) adjusted for legality, branching factors are averaged over game phases, and game-tree sizes follow from b^d (with d the average game length in plies). Checkers' solution, achieved after nearly two decades of computation, confirmed that perfect play yields a draw. Othello's recent proof similarly established a draw under optimal play, making it the most complex game fully solved to date. Complexity escalates exponentially with board dimensions, as seen in variants like n×n Go, where the state space approximates 3^{n^2} owing to the three possible occupancies of each cell, rendering even modest enlargements intractable. Such trends emphasize how modest rule expansions amplify search spaces, informing AI development in these domains.

Modern Games and AI Implications

While traditional combinatorial game complexity focuses on perfect-information games, modern research in game solving extends to imperfect-information settings such as heads-up no-limit Texas hold'em poker, which features an estimated 10^160 decision points owing to hidden cards, chance in dealing, and betting strategies. In 2017, counterfactual regret minimization (CFR), an iterative algorithm that approximates equilibria by minimizing regret over billions of simulated hands, enabled the Libratus agent to outperform top human professionals in this game. Video games, particularly real-time strategy (RTS) titles, introduce even greater challenges through partial observability, continuous time, and multi-unit control, rendering classical exhaustive search infeasible. StarCraft II exemplifies this, with an approximate state space of 10^168 configurations arising from unit positions, resource allocations, and map interactions, alongside a branching factor of around 10^5 possible actions per decision cycle. DeepMind's AlphaStar, unveiled in 2019, achieved grandmaster-level performance by combining deep reinforcement learning with a transformer-based neural architecture that processes game observations and issues multi-unit commands, mastering much of the game through self-play in a league of evolving agents. Such systems rely on approximations like Monte Carlo tree search (MCTS), which samples promising paths in the vast tree rather than exploring all branches.

Advances in machine learning have profoundly changed how game complexity is navigated, particularly through techniques that prune unpromising exploration. In Go, a perfect-information game with a state-space complexity of approximately 10^170 legal positions, AlphaGo's 2016 victory over world champion Lee Sedol demonstrated how policy and value neural networks integrated with MCTS could evaluate positions and guide search, reducing the effective complexity from intractable to manageable within hardware constraints. This approach compresses the search space by prioritizing high-value moves, achieving strong play with far fewer simulations than brute-force methods would require.

To adapt traditional metrics like branching factor to these modern settings, researchers increasingly rely on empirical simulation, estimating effective branching from sampled trajectories rather than exact enumeration in hyper-large spaces. In the 2020s, neural enhancements to MCTS, such as learned priors from deep networks, have advanced this further, enabling scalable planning in domains like multi-agent environments by dynamically adjusting exploration based on neural predictions. Persistent challenges remain: multi-agent interaction introduces adversarial non-stationarity as opponents adapt, and real-time constraints demand decisions in milliseconds, amplifying complexity beyond static classical measures and necessitating efficient architectures for practical viability.

Theoretical and Practical Considerations

Factors Influencing Complexity

Rule variations significantly affect game complexity by altering the structure of the game tree and state space. In alternating-turn games players move sequentially, producing predictable branching in the game tree, whereas simultaneous-move games require players to select actions concurrently, yielding a product of action spaces at each turn and exponentially larger joint decision spaces. Algorithms for computing optimal strategies in two-player simultaneous-move games must account for Nash or correlated equilibria, increasing computational demands compared to minimax search in alternating setups. Introducing chance elements, such as dice rolls or random events, further inflates the effective state space with probabilistic transitions, transforming deterministic games into stochastic ones in which chance nodes expand the exploration required for value iteration or policy computation.

Board size and scaling parameters drive exponential growth in complexity measures, as larger dimensions multiply the number of possible configurations and moves. In chess variants, enlarging the board from 8×8 to n×n yields a state space that grows exponentially with n, owing to the combinatorics of piece placements and legal positions. Similarly, in the n-queens puzzle the number of candidate placements scales roughly as n! for one-queen-per-row-and-column arrangements, underscoring the rapid escalation of the search space as n grows.

Symmetries inherent in boards or rules enable complexity reductions by quotienting equivalent states. Rotational and reflectional symmetries, for example, form a group whose action can shrink the counted states severalfold in symmetric games like tic-tac-toe by identifying positions that are isomorphic under transformation. Applied in game solvers, symmetry detection prunes the game tree, directly lowering the effective state-space complexity without altering the game's outcomes.

Player asymmetry, particularly in non-zero-sum games, adds further complexity by decoupling individual utilities from a single zero-sum payoff, requiring computation of correlated or mixed equilibria rather than simple minimax values. Unlike zero-sum settings, where one player's gain equals the other's loss, non-zero-sum structures admit cooperative and competitive alignments that expand the solution space, and finding Nash equilibria is PPAD-hard even for concise representations. Finally, environmental factors such as time limits in practical play contrast with theoretical analyses that assume unlimited deliberation, forcing approximations that bound search depth and increasing vulnerability to suboptimal decisions under pressure. Under time constraints, players and algorithms shift to shallower exploration, amplifying the impact of branching factors on effective complexity compared to exhaustive theoretical solving.
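A concrete sketch of symmetry reduction: canonicalizing each reachable tic-tac-toe position over the board's eight rotations and reflections (the dihedral group of the square) collapses the 5,478 raw positions into the 765 equivalence classes cited earlier:

```python
# Symmetry reduction for tic-tac-toe: the canonical form of a position is
# the lexicographically smallest board among its 8 rotations/reflections.
ROT = (6, 3, 0, 7, 4, 1, 8, 5, 2)        # 90-degree clockwise rotation
MIR = (2, 1, 0, 5, 4, 3, 8, 7, 6)        # left-right mirror

def transform(board, perm):
    return ''.join(board[i] for i in perm)

def canonical(board):
    forms, b = [], board
    for _ in range(4):                    # 4 rotations, each with its mirror
        forms += [b, transform(b, MIR)]
        b = transform(b, ROT)
    return min(forms)

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6),
         (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(b):
    return next((b[i] for i, j, k in LINES
                 if b[i] != ' ' and b[i] == b[j] == b[k]), None)

reachable = set()

def play(board, player):
    reachable.add(board)
    if winner(board) or ' ' not in board:
        return
    for i, c in enumerate(board):
        if c == ' ':
            play(board[:i] + player + board[i+1:],
                 'O' if player == 'X' else 'X')

play(' ' * 9, 'X')
classes = {canonical(b) for b in reachable}   # 765 symmetry classes
```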

Methods for Reducing Complexity

One key approach to reducing the effective complexity of game trees involves search algorithms that prune unnecessary branches while preserving optimality. Alpha-beta pruning, an enhancement to the minimax algorithm, eliminates subtrees that cannot influence the final decision by maintaining lower (alpha) and upper (beta) bounds on the minimax value during the search. This technique can reduce the time complexity from the full minimax's O(b^d), where b is the branching factor and d is the depth, to approximately O(b^{d/2}) in the best case, assuming good move ordering. Iterative deepening combines the space efficiency of depth-first search with the completeness of breadth-first search by performing successive depth-limited searches with an incrementally increasing depth limit; it mitigates the overhead of uniform-depth searches and adapts well to time constraints in real-time play.

Heuristic methods further reduce computational demands by approximating values at internal nodes, allowing earlier cutoffs. Evaluation functions provide static estimates of a position's desirability, typically based on material balance, positional features, and king safety in games like chess, enabling the search to terminate before reaching the leaves and to focus on promising branches. Transposition tables avoid repeated evaluation of states reached along different paths by storing and reusing previously computed values for identical positions, often using Zobrist hashing to map board states to compact keys for efficient lookup and collision avoidance. This reuse can dramatically cut redundant computation, especially in games with frequent transpositions like chess, where the same position may arise from multiple move sequences.

Parallelization distributes the search workload across multiple processors or machines and is particularly effective for building exhaustive endgame databases. In the case of checkers, solving the full game required constructing databases for all positions with 10 or fewer pieces, some 39 trillion positions, demanding distributed computing on hundreds of processors and roughly 237 GB of storage for the compressed tables. Backward induction from these terminal positions determined perfect-play outcomes, confirming checkers as a draw under optimal play.

Exploiting symmetries in game states reduces the search space by identifying and merging positions equivalent under transformations such as rotations or reflections. Canonical forms represent states in a standardized way, eliminating isomorphic duplicates; in tic-tac-toe, for instance, canonicalization over the board's eight symmetries can prune the large majority of equivalent subtrees. The method is particularly valuable in abstract strategy games, where board symmetries otherwise cause redundant exploration.

More recent advances leverage machine learning to approximate optimal play, bypassing exhaustive search in high-complexity games. Post-2010 methods that combine deep neural networks with Monte Carlo tree search learn policy and value functions from self-play, achieving superhuman performance in Go by focusing on high-probability moves rather than full enumeration. These approximations scale to games with branching factors exceeding 200, like Go, by prioritizing informative simulations over complete trees.
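One standard hashing scheme for transposition tables is Zobrist hashing. In this minimal sketch (the piece set and board encoding are illustrative assumptions), each (square, piece) pair gets a random 64-bit key, a position's hash is the XOR of the keys present, and applying a move is a cheap incremental XOR update:

```python
import random

# Zobrist hashing sketch: one random 64-bit key per (square, piece) pair.
# XOR is its own inverse, so moving a piece means XOR-ing out its old
# square's key and XOR-ing in the new one, without rehashing the board.
random.seed(42)
PIECES = ['P', 'N', 'B', 'R', 'Q', 'K']          # illustrative piece set
ZOBRIST = {(sq, p): random.getrandbits(64)
           for sq in range(64) for p in PIECES}

def full_hash(board):
    """board: dict square -> piece. Recomputes the hash from scratch."""
    h = 0
    for sq, piece in board.items():
        h ^= ZOBRIST[(sq, piece)]
    return h

def move_hash(h, piece, frm, to):
    """Incremental update: XOR out the old square, XOR in the new one."""
    return h ^ ZOBRIST[(frm, piece)] ^ ZOBRIST[(to, piece)]

before = full_hash({12: 'P', 4: 'K'})
after = move_hash(before, 'P', 12, 20)            # pawn moves 12 -> 20
assert after == full_hash({20: 'P', 4: 'K'})      # incremental == recomputed
```

The incremental property is what makes the scheme attractive inside a search loop, where millions of positions are hashed per second.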

Open Problems in Game Complexity

One central open problem in game complexity is determining the exact game-tree size of major games such as chess and Go: current estimates rely on approximations like Claude Shannon's figure of roughly 10^120 possible chess games, but precise counts remain computationally infeasible and theoretically unresolved. Go's game-tree size is estimated at around 10^360, vastly exceeding chess's, and exact enumeration is out of reach given the variability of branching factors. These uncertainties mark the practical limits of exhaustive search, with direct implications for fully solving these games.

Generalizing complexity measures to broader game classes presents further challenges, particularly for games with more than two players and for variants incorporating chance. While many two-player perfect-information games, including Go under certain rule sets, are proven EXPTIME-complete, multiplayer noncooperative games of incomplete information exhibit lower bounds suggesting at least EXPTIME-hardness, and exact classifications for specific multiplayer settings remain open. For stochastic games, whether the values of simple stochastic games (two-player zero-sum games with probabilistic transitions) can be computed in polynomial time is unknown; the best algorithms still require subexponential but superpolynomial time, leaving the problem's precise complexity classification unresolved.

The potential role of quantum computing introduces speculative yet promising frontiers, with recent work demonstrating algorithmic quantum speedups for solving zero-sum games via improved dynamic programming techniques that reach near-optimal Nash equilibria faster than classical methods. Post-2020 research has shown quadratic speedups for quantum zero-sum games using multiplicative weight updates, hinting at larger advantages for game-tree search in quantum settings, though the lack of fault-tolerant quantum hardware remains a barrier to practical application. These developments raise the open question of whether quantum algorithms can reduce the effective complexity of EXPTIME-complete games like Go.

Measurement gaps persist in standardizing "effective" complexity, especially in integrating AI-driven approximations that prune search spaces without exhaustive enumeration. Traditional metrics like game-tree size overlook learned heuristics that achieve superhuman play in unsolved games, prompting calls for benchmarks that quantify practical difficulty, yet no unified framework exists as of 2025. Infinite games, such as chess on an unbounded board, pose unique challenges: the mate-in-n problem is decidable, and transfinite game values as high as ω^4 have been exhibited, but the full decision complexity of arbitrary winning strategies remains open, potentially linking to higher recursion-theoretic degrees. As of 2025, neither chess nor Go has been solved in the theoretical sense of identifying perfect play; AI systems provide strong empirical strategies but no guarantees of optimality, underscoring persistent frontiers in computational game theory.

  10. [10]
    Minimax Algorithm in Game Theory | Set 1 (Introduction)
    Jun 13, 2022 · Time complexity : O(b^d) b is the branching factor and d is count of depth or ply of graph or tree. Space Complexity : O(bd) where b is ...
  11. [11]
    [PDF] Searching for Solutions in Games and Artificial Intelligence - Free
    6.2.4 Complexity. The property complexity in relation to games is used to denote two di erent measures, which we name the state-space complexity and the game- ...<|control11|><|separator|>
  12. [12]
    [PDF] Depth in Strategic Games
    The difficulty is measured in terms of the amount of computational resources required: the time and/or (memory) space needed to find the solution. The result of ...
  13. [13]
    Game Tree Evaluation - UC Irvine
    Apr 17, 1997 · 1 + b + b^2 + b^3 + ... + b^d = b^d (1 - 1/b^d)/(1 - 1/b). The stuff in parentheses at the end of the formula is very close to one, so the ...Missing: size | Show results with:size
  14. [14]
    [PDF] XXII. Programming a Computer for Playing Chess1
    The thesis we will develop is that modern general purpose computers can be used to play a tolerably good game of chess by the use of suitable computing routine ...
  15. [15]
    [PDF] Lecture 1 1 Introduction 2 Game Trees and the Value of a Game
    May 14, 2008 · Games of Chess tend to last on the order of 100 moves, and so the size of the game tree is quite large – too large in fact for modern ...
  16. [16]
    Computational Complexity Theory
    Jul 27, 2015 · A complexity class can now be defined to be the set of problems for which there exists a decision procedure with a given running time or running ...
  17. [17]
    Computing a perfect strategy for n × n chess requires time ...
    It is proved that a natural generalization of chess to an n × n board is complete in exponential time. This implies that there exist chess positions on an n ...
  18. [18]
    Limits of quantum computation in solving chess
    Mar 4, 2014 · Despite shortcuts, for all practical purposes, chess remains an exponential time problem and amounts to examining 10^40+ positions. We are ...
  19. [19]
    [PDF] SCALABLE ALGORITHMS FOR PARALLEL TREE SEARCH
    [37] developed a distributed algorithm for searching game trees with massively parallel systems. They implemented a distributed chess program called ...
  20. [20]
    Which games will survive? - Tilburg University Research Portal
    Allis, L. V., van den Herik, H. J., & Herschberg, I. S. (1991). Which games will survive? In D. N. L. Levy, & D. F. Beal (Eds.), Heuristic programming in ...
  21. [21]
    An analysis of alpha-beta pruning - ScienceDirect.com
    An analysis of alpha-beta pruning☆. Author links open overlay panelDonald E ... 13. D.E. Knuth. Structured programming with go to statements. Comput ...
  22. [22]
    [PDF] John von Neumann's Conception of the Minimax Theorem
    The first proof of the minimax theorem: von Neumann's 1928 paper. John (Johann) von Neumann (1903–1957) published his first paper on what he called. “Theorie ...
  23. [23]
    [PDF] An Analysis of Alpha-Beta Priming'
    AN ANALYSIS OF ALPHA-BETA PRUNING. 303 some people have confused procedure F1 with the stronger procedure F2; therefore the following account is based on the ...
  24. [24]
    Tic-Tac-Toe - GamesCrafters :: Games
    History. Tic-tac-toe, also known as Naughts and Crosses, is one of the most widely known games. Found everywhere from the temples of ancient Egypt to the ...History · Variants
  25. [25]
    What are the symmetries of a tic tac toe game board?
    Oct 17, 2011 · @YanKingYin This is known to be 5478 positions of which 958 are end positions (consistent with the 255168 games ignoring symmetry), or 765 ...Taking into account symmetry, how many possible games of tic-tac ...Number Of Uncompleted Tic Tac Toe Games - Math Stack ExchangeMore results from math.stackexchange.com
  26. [26]
    [PDF] Cooperative and Adaptive Algorithms
    The minimax value of a node is the utility of being in the corresponding state, assuming that both players play optimally from there to the end of the game. • ...
  27. [27]
    Canonical and Reduced Game Trees for Tic-Tac-Toe - Justin Skycak
    Mar 1, 2022 · There are 255168 unique ways that a game of tic-tac-toe can play out, so you can check your tree by verifying that there are 255168 leaf nodes.
  28. [28]
    How to find the complexity of decision trees in Tic Tac Toe by hand?
    Nov 6, 2018 · So complexity of decision trees in 3X3 Tic Tac Toe is 5 which is the number of digit of the leaf nodes (26,830).
  29. [29]
    [PDF] Tic-tac-toe and retrograde analysis - Alain Brobecker
    Retrograde analysis is possible in the ultra simple tic-tac-toe game. ... The diagram aside shows an example, but everyone will admit it's very easy and giving ...
  30. [30]
    First videogame | Guinness World Records
    The first videogame was an implementation of the board game draughts (or checkers), programmed by British teacher and physicist Christopher Strachey.
  31. [31]
    Checkers Is Solved - Science
    This paper announces that checkers is now solved: Perfect play by both sides leads to a draw. This is the most challenging popular game to be solved to date.
  32. [32]
    [2310.19387] Othello is Solved - arXiv
    Oct 30, 2023 · Abstract:The game of Othello is one of the world's most complex and popular games that has yet to be computationally solved.
  33. [33]
    [PDF] Measuring the Size of Large No-Limit Poker Games
    Feb 26, 2013 · This number allows us to compare a game against other games such as chess or backgammon, which have 1047 and 1020 distinct game states ...
  34. [34]
    Algorithms for computing strategies in two-player simultaneous ...
    In this paper, we describe both novel and existing algorithms that compute strategies for the class of two-player zero-sum simultaneous move games.Missing: variations | Show results with:variations
  35. [35]
    [PDF] The Complexity of Simple Stochastic Games - arXiv
    Apr 20, 2007 · The optimal value of a vertex denotes the probability that the max player wins the game starting at that vertex, assuming both players employ.
  36. [36]
    [PDF] A Note on the Computational Complexity of Selfmate and ... - arXiv
    Aug 10, 2022 · Shitov [10] [11] showed that chess number is exponential in the board size, i.e. there exist two legal positions that require exponential many ...
  37. [37]
    [PDF] Symmetry Detection in General Game Playing
    Dec 28, 2020 · A transposition table is used to efficiently exploit the symmetries in many games. Introduction. Exploiting symmetries of the underlying domain ...
  38. [38]
    [PDF] Nash Equilibrium Solving for General Non-Zero-Sum Games from ...
    In fact, later research on computational complexity reveals that such difficulty is essential and intractable. Solving NE is generally PPAD- hard [8] for normal ...Missing: layers | Show results with:layers
  39. [39]
    Complexity, Attention, and Choice in Games Under Time Constraints
    Oct 9, 2025 · We test whether players adapt to tightening time constraints by reducing their information search and shifting to less computationally-demanding heuristics.
  40. [40]
    Depth-first iterative-deepening: An optimal admissible tree search
    In this paper, we study a family of heuristic search planners that are based on a simple and general heuristic that assumes that action preconditions are ...Missing: games original
  41. [41]
    Zobrist Hashing - Chessprogramming wiki
    Zobrist Hashing, a technique to transform a board position of arbitrary size into a number of a set length, with an equal distribution over all possible ...
  42. [42]
    State Space Complexity of Chess
    Apr 7, 2024 · Shannon's lower bound estimation of the game-tree complexity is around 10^120 possible chess games but the focus here is simply on the states ...
  43. [43]
    Complexity of Go, Chess and Checkers games - ResearchGate
    Go gaming automation is AI relevant because during a match the search space is enormous and the game tree size is around10360, versus the 10123 for Chess.
  44. [44]
    Lower bounds for multiplayer noncooperative games of incomplete ...
    These new games can be shown to be complete for their respective classes. We use these games to establish lower bounds on complexity of multiplayer games of ...
  45. [45]
    [PDF] Quantum Speedups for Zero-Sum Games via Improved Dynamic ...
    We consider this question for the fundamental optimization problem of computing -approximate Nash equilibrium in zero-sum games.
  46. [46]
    A Quadratic Speedup in Finding Nash Equilibria of Quantum Zero ...
    May 6, 2025 · This work proposes a new algorithm, Optimistic Matrix Multiplicative Weight Updates (OMMWU), for computing approximate Nash equilibria of quantum zero-sum ...
  47. [47]
    Important future directions in computational game theory - Medium
    Dec 10, 2018 · I will now describe four key open problems that I think addressing will play a pivotal role for the future of the field. Identifying new ...
  48. [48]
    [1201.5597] The mate-in-n problem of infinite chess is decidable
    Jan 26, 2012 · The mate-in-n problem of infinite chess is the problem of determining whether a designated player can force a win from a given finite position ...