Solved game

In game theory, particularly within the framework of combinatorial game theory, a solved game is one whose outcome—win for the first player, win for the second player, or draw—can be definitively determined assuming both players employ optimal strategies from any given position. This concept applies primarily to finite, two-player, zero-sum games with perfect information and no chance elements, such as those analyzed through exhaustive enumeration of game trees or retrograde analysis. Solutions are categorized by degree of completeness: an ultra-weak solution establishes the value (win, loss, or draw) of the initial position alone; a weak solution provides a strategy for the starting player to achieve that value against any opponent play; and a strong solution identifies the optimal move from every possible position in the game. Notable examples of solved games illustrate the progression of computational achievements in this area. Tic-tac-toe (also known as noughts and crosses) is trivially strongly solved as a draw with perfect play, owing to its small state space of approximately 255,168 possible game sequences. Connect Four was weakly solved in 1988 by James D. Allen and Victor Allis, revealing a first-player win achievable through a specific opening strategy, with a state-space complexity of about 4.5 × 10^12 positions. Checkers (English draughts) was weakly solved in 2007, confirming a draw under optimal play after an 18-year computational effort involving over 500 billion billion (5 × 10^20) positions, marking it as one of the most complex popular games to be fully analyzed. Other solved games include Qubic (solved in 1980 as a first-player win) and Awari (strongly solved in 2003 as a draw). Despite these advances, many iconic games remain unsolved due to their immense complexity; for instance, chess has an estimated game-tree complexity of 10^123, rendering exhaustive solving infeasible with current technology, though narrow AI evaluations approximate optimal play in limited contexts.
Similarly, Go, with around 10^170 possible positions, resists full solution but has seen superhuman performance via deep-learning methods such as AlphaGo. The pursuit of solving games not only advances artificial intelligence and algorithm design but also deepens understanding of strategic decision-making in deterministic environments.

Types of Solutions

Ultra-weak solution

An ultra-weak solution determines the game-theoretic value—whether the first player wins, the second player wins, or the game is a draw—from the initial position, assuming optimal play by both players, without requiring the computation or specification of strategies for any positions. This level of solution, the most minimal among game-solving categories, was first formalized by L. Victor Allis in his 1994 PhD thesis. Key characteristics of an ultra-weak solution include its modest computational requirements: the value of the root may be established by an existence proof or abstract argument, with analysis halting once the initial position's outcome is known. This approach avoids the exhaustive enumeration needed for deeper strategic insights, making it feasible for games where full exploration is intractable. In contrast to more comprehensive solutions, it provides no guidance on optimal play beyond predicting the end result under perfect conditions. A historical example is tic-tac-toe, which was solved as a draw via simple manual enumeration in the mid-20th century, demonstrating that perfect play by both players leads to a tie from the starting position. Another illustrative case is the game of Hex on an n × n board for arbitrary n, proven to be a first-player win in 1949 via John Nash's strategy-stealing argument, though explicit strategies remain unknown for large boards. This form of solution differs markedly from stronger variants, such as the strong solution, which requires determining optimal moves from every possible position; the ultra-weak variant prioritizes outcome prediction over strategic completeness.

Weak solution

In game theory and computer science, a weak solution to a combinatorial game determines the game's outcome under perfect play by both players, together with a strategy that achieves that outcome from the starting position, without requiring a complete analysis of all subsequent subgames or positions. This approach establishes whether the first player can force a win, the second player can force a win, or the game is a draw, and identifies opening moves that achieve this result, assuming optimal responses thereafter. Weak solutions are particularly valuable for impartial games under the normal play convention, where the last move wins, as they provide practical guidance for perfect play from the start without the exhaustive computation needed for fuller analyses. The computational feasibility of weak solutions stems from their focus on exploring the game tree from the root position to a sufficient depth, often leveraging search algorithms like alpha-beta pruning to evaluate branches efficiently and discard suboptimal paths. This method reduces the complexity compared to evaluating the entire state space, making it applicable to games with larger state spaces where strong solutions—requiring strategies for every position—would be prohibitive. For instance, endgame databases or transposition tables may be used selectively, but the emphasis remains on the initial position rather than retrograde analysis of all reachable states. A seminal example is Connect Four, weakly solved in 1988 by James D. Allen, who demonstrated that the first player can force a win by move 41 at the latest with optimal play, with the winning line beginning in the center column; openings in the other columns yield at best a draw for the first player. This solution relied on exhaustive computer search with alpha-beta pruning on the 7×6 board, focusing primarily on the opening phase without fully tabulating all of the roughly 4.5 × 10^12 possible game states.
Victor Allis independently arrived at a similar solution shortly thereafter, confirming the first-player win and optimal strategies using a knowledge-based approach combined with search. Unlike an ultra-weak solution, which merely proves the outcome without specifying moves, Allen's work provided an executable strategy for securing the win from the start.
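As a toy illustration of the search style used in weak solutions, the following sketch applies negamax with alpha-beta pruning to a simple subtraction game (take 1 or 2 tokens; taking the last token wins). The `moves`/`value` interface and the game itself are illustrative assumptions, not Allen's or Allis's actual programs.

```python
def alphabeta(state, alpha, beta, moves, value):
    """Game value of `state` for the player to move:
    +1 win, 0 draw, -1 loss, under optimal play by both sides."""
    ms = moves(state)
    if not ms:                      # terminal position: no legal moves
        return value(state)
    best = -1
    for m in ms:
        # Negamax: the opponent's value is the negation of ours.
        score = -alphabeta(m, -beta, -alpha, moves, value)
        if score > best:
            best = score
        if best > alpha:
            alpha = best
        if alpha >= beta:           # cutoff: opponent will avoid this branch
            break
    return best

# Toy game: remove 1 or 2 tokens from a pile; taking the last token wins.
def moves(n):
    return [n - t for t in (1, 2) if n >= t]

def value(n):
    return -1   # no moves left: player to move loses (normal play)

assert alphabeta(6, -1, 1, moves, value) == -1   # multiples of 3 are losses
assert alphabeta(7, -1, 1, moves, value) == 1
```

The pruning step is what makes weak solutions tractable: once one reply refutes a branch, the remaining replies in that branch are never examined.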

Strong solution

A strong solution to a game provides the game-theoretic value—win, loss, or draw—and an optimal move for every possible legal position, assuming perfect play by both players. This level of analysis ensures that, from any reachable state, the outcome is deterministically known regardless of how the game arrived there, distinguishing it from weaker forms of solving that cover only the initial position or limited lines of play. Achieving a strong solution typically involves constructing a complete game tree or an exhaustive database of all positions, which is computationally intensive due to the exponential growth in state space for most games. Retrograde analysis, a backward-induction method starting from terminal positions and propagating values upward through the tree, is commonly employed to build such databases efficiently. This process demands significant resources, as even modest games can have trillions of states; for instance, the 7×6 Connect Four board encompasses approximately 4.5 × 10¹² positions. A landmark example of a strong solution is Connect Four, which was strongly solved around 1995 by John Tromp using brute-force methods to compile a database of positions, confirming the first-player winning strategy under perfect play. Simpler games like tic-tac-toe also exemplify strong solutions, where manual or computational analysis confirms a draw under perfect play across all approximately 5,478 reachable positions. The implications of a strong solution are profound, as it eliminates all uncertainty in the game, rendering the outcome fully deterministic when perfect play is assumed. This not only aids theoretical understanding but also enables perfect automated play, though the databases' size often limits practical storage and consultation for larger games.
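On a toy scale, a strong solution can be computed directly. The sketch below—an illustrative Python implementation, not any published solver—memoizes the game-theoretic value of every reachable tic-tac-toe position, in effect building the complete value database a strong solution requires:

```python
from functools import lru_cache

# The eight winning lines of the 3x3 board, as index triples.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, player):
    """Value of `board` for `player` to move: +1 win, 0 draw, -1 loss."""
    if winner(board):
        return -1                   # previous player completed a line
    if '.' not in board:
        return 0                    # full board, no line: draw
    other = 'O' if player == 'X' else 'X'
    return max(-solve(board[:i] + player + board[i + 1:], other)
               for i, c in enumerate(board) if c == '.')

# Perfect play from the empty board yields a draw:
assert solve('.' * 9, 'X') == 0
```

Because every position's value is cached, the memo table doubles as the "exhaustive database" described above: after one call from the root, any reachable position can be looked up and played perfectly.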

Perfect Play

Definition of perfect play

In the context of solved games, perfect play refers to a strategy in which each player, at every decision point, selects the move that guarantees the best possible outcome for themselves—prioritizing a win over a draw, and a draw over a loss—under the assumption that the opponent is likewise employing perfect play. This recursive notion ensures that no player can improve their result by unilaterally deviating from the strategy, leading to a predetermined outcome when both adhere to it. The concept of perfect play applies specifically to finite, deterministic, two-player, zero-sum games with perfect information and no elements of chance, where the complete game tree can in principle be evaluated backward from terminal positions. In such games, every position has a definitive value (win, loss, or draw) under optimal responses, allowing players to navigate the tree by always choosing moves that align with this value. This foundational idea is rooted in game theory's minimax principle, which formalizes how players in zero-sum conflicts seek to maximize their minimum guaranteed payoff against an adversarial opponent, ensuring equilibrium through saddle-point strategies. The notion of perfect play was first formalized by Émile Borel in his 1921 papers on the theory of games, where he introduced strategies as complete methods of play and analyzed optimal responses in symmetric zero-sum settings. It was rigorously established by John von Neumann in 1928 through his proof of the minimax theorem for two-person zero-sum games, which guarantees the existence of optimal strategies.

Outcomes with perfect play

In solved games, the outcome under perfect play is fixed and falls into one of three categories: a win for the first player, a win for the second player, or a draw. This result arises from analyzing the complete game tree starting from the initial position, where each node represents a game state and branches denote legal moves, assuming both players select optimal strategies. The value at the root of the tree—computed via backward induction or equivalent methods—determines whether the starting player can force a win, the responding player can defend successfully, or the game inevitably ends in a draw. For impartial games under the normal play convention, where the player making the last move wins, the Sprague-Grundy theorem offers a systematic evaluation tool, particularly useful for sums of independent subgames. The theorem assigns a non-negative integer, known as the Grundy number or nimber, to each position, computed as the minimum excludant (mex) of the Grundy numbers of its successor positions. If the initial position's Grundy number is zero, it is a second-player win: any move by the first player leads to a position with nonzero Grundy number, allowing the second player to respond by restoring a zero-Grundy state; conversely, a nonzero Grundy number indicates a first-player win. In these finite, acyclic impartial games, draws do not occur, as the game must terminate with a last move. In contrast, partizan games—where players have distinct move options—or games permitting explicit draw conditions (such as board-filling without a decisive line) can yield draws with perfect play. For illustration, in tic-tac-toe, exhaustive analysis confirms that optimal moves by both players always result in a filled board without three in a row, leading to a draw.
Establishing the outcome in a solved game provides a foundational benchmark, enabling deeper theoretical exploration of variants, such as misère versions where the last move loses, by leveraging the normal-play solution to bound or predict altered results.
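The Sprague-Grundy evaluation described above can be sketched on a simple subtraction game with allowed moves S = {1, 2, 3} (an assumed example chosen for brevity, not a game discussed in the text):

```python
from functools import lru_cache

S = (1, 2, 3)   # legal moves: remove 1, 2, or 3 objects

def mex(values):
    """Minimum excludant: smallest non-negative integer not in `values`."""
    g = 0
    while g in values:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy(n):
    # Grundy number of a pile of n = mex of the successors' Grundy numbers.
    return mex({grundy(n - s) for s in S if s <= n})

# The sequence is periodic with period 4; positions with n % 4 == 0 have
# Grundy number 0 and are therefore second-player wins.
assert [grundy(n) for n in range(8)] == [0, 1, 2, 3, 0, 1, 2, 3]
```

A pile of 4 illustrates the zero-Grundy defense: whatever the first player removes (1, 2, or 3), the second player removes the complement to 4, restoring a zero-Grundy position.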

Fully Solved Games

Strongly solved games

A strongly solved game is one where the optimal outcome is determined for every possible position, allowing perfect play from any starting point. One prominent example is Qubic, a three-dimensional variant of tic-tac-toe played on a 4×4×4 grid where players aim to align four marks along any line. First solved as a first-player win by Oren Patashnik in 1980, Qubic was later re-solved by Victor Allis and colleagues in the 1990s using proof-number search, confirming the result despite the game's large tree, with symmetries reducing the search space. Tic-tac-toe, a simple 3×3 grid game, is trivially strongly solved as a draw with perfect play, owing to its small state space that allows complete manual or computational analysis of all positions. A computer implementation was demonstrated as early as 1952 by A. S. Douglas's program on the EDSAC computer. The game of Nim, an impartial subtraction game played with heaps of objects, was strongly solved in 1901 by Charles L. Bouton's nim-sum theorem, which determines the winning strategy and optimal moves from any position using the XOR of the heap sizes. Connect Four, played on a 7×6 grid, was strongly solved following the 1988 weak solutions by James D. Allen and Victor Allis, confirming a first-player win with perfect play from any position, despite a state-space complexity of about 4.5 × 10^12 positions. The abstract strategy game Hex, in which players race to connect opposite edges of a board of hexagonal cells, has been solved on small boards, confirming first-player wins in all cases; boards up to 9×9 have been solved through search techniques exploiting virtual-connection and bridge patterns. These achievements highlight early successes in applying computer science to combinatorial games, though larger boards remain unsolved due to exponential complexity.
Othello (also known as Reversi), an 8×8 game of capturing opponent discs, was solved in 2023 by Hiroki Takizawa using alpha-beta search and massive computation, proving that perfect play by both players leads to a draw; strictly speaking this is a weak solution, as the proof covers the standard initial position rather than every legal position. Othello's state space comprises roughly 10^28 possible positions, of which the proof needed to examine only a small fraction. As of November 2025, these results underscore both the reach and the limits of brute-force and search methods in combinatorial game theory, while spurring advances in algorithms such as alpha-beta pruning and proof-number search that have broader applications in optimization and decision-making systems.
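Bouton's nim-sum rule mentioned above lends itself to a compact sketch (illustrative code, not Bouton's original formulation): a position is losing for the player to move exactly when the XOR of the heap sizes is zero, and otherwise some heap can be shrunk to restore a zero nim-sum.

```python
from functools import reduce
from operator import xor

def nim_sum(heaps):
    return reduce(xor, heaps, 0)

def winning_move(heaps):
    """Return (heap_index, new_size) restoring nim-sum 0, or None if losing."""
    s = nim_sum(heaps)
    if s == 0:
        return None                  # every move hands the opponent the win
    for i, h in enumerate(heaps):
        target = h ^ s
        if target < h:               # legal move: strictly shrink this heap
            return i, target

move = winning_move([3, 4, 5])       # nim-sum 3 ^ 4 ^ 5 = 2
assert move == (0, 1)                # take 2 from the heap of 3
assert nim_sum([1, 4, 5]) == 0       # the resulting position is a loss to move
```

This is a strong solution in miniature: the rule yields an optimal move from any Nim position, with no stored database at all.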

Weakly and ultra-weakly solved games

Weakly and ultra-weakly solved games represent cases where the outcome from the initial position is determined without exhaustive strategies for all possible positions, often through mathematical proofs or limited computations. These solutions establish whether the first player can force a win, the second player can force a win, or the game ends in a draw under perfect play, providing valuable insights into optimal strategy without the full complexity of strong solutions. Gomoku, the five-in-a-row game, was weakly solved in its free-style form on a 15×15 board as a first-player win by Victor Allis in 1994 via a complete computer search; practical variants impose restrictions on the first player (as in Renju) to offset this advantage and promote balanced play. Checkers (also known as English draughts), played on an 8×8 board, was weakly solved in 2007 by Jonathan Schaeffer and his team after nearly two decades of computation. The result is a draw with perfect play from the starting position, achieved through a combination of retrograde endgame databases covering positions with 10 or fewer pieces (about 3.9 × 10^13 states) and forward proof-number searches that evaluated around 10^14 positions in the critical proof tree. The total game tree encompasses roughly 5 × 10^20 possible positions, pruned via parallel computing on up to 200 processors, transposition tables, and rule-based reductions such as the forced-capture rule; this remains one of the most computationally intensive game solves to date. Such solutions typically rely on manual enumeration for small games, mathematical theorems, or computer assistance—using minimax or proof-number methods—for medium-sized games, enabling the determination of initial-position values without full positional databases.
Despite their elegance, weakly and ultra-weakly solved games offer limited practical guidance, as they do not provide complete playbooks or optimal moves from arbitrary mid-game positions, making them primarily useful for educational purposes, casual analysis, or understanding theoretical outcomes in combinatorial games.

Partially Solved Games

Definition and examples

Partially solved games are those in which perfect-play outcomes are determined for substantial subsets of positions, such as endgames or opening lines, but the complete game tree remains unsolved due to prohibitive computational demands. These games typically rely on precomputed databases that catalog optimal moves for frequent or critical configurations, enabling perfect play in those scenarios while leaving the overall game value undetermined. Unlike strongly solved games, where the result is known from every position, partial solutions provide tactical precision in bounded domains but cannot guarantee a global strategy. A prominent example is chess endgames, where tablebases—exhaustive databases of endgame positions—have solved all configurations with up to seven pieces. The Lomonosov 7-piece tablebases, completed in 2012 by researchers at Moscow State University's Computer Science Department, encompass 423,836,835,667,331 unique legal positions, allowing engines to play flawlessly in these scenarios and revealing insights like winning sequences exceeding 500 moves to mate. As of 2025, progress continues on 8-piece tablebases, with partial computations uncovering winning lines of up to 584 moves, though full completion remains challenging given the database's estimated 10^16 positions; larger endgames, such as those with 20 or more pieces, are analyzed only for specific pawnless or simplified structures rather than comprehensively. In Go, artificial intelligence has enabled partial insight into openings. The 2016 AlphaGo program, developed by DeepMind, used deep neural networks and Monte Carlo tree search to evaluate opening moves with superhuman accuracy, influencing professional play by highlighting previously underexplored strategies such as early 3-3 point invasions.
However, Go's full 19×19 board, with approximately 10^170 possible positions, eludes complete solution, leaving endgame outcomes and midgame transitions reliant on approximations despite these advances.

Computational methods

Computational methods for partially solving games rely on algorithms that approximate optimal play within feasible computational limits, focusing on endgames, openings, and search heuristics to navigate vast game trees. Retrograde analysis serves as a foundational technique for constructing endgame databases: starting from terminal positions, it propagates evaluations backward through the game tree, determining the optimal outcome for each reachable state. This method builds exhaustive solutions for reduced-piece configurations, enabling perfect play in those subsets. For broader searches, particularly in openings, alpha-beta pruning enhances the minimax algorithm by eliminating branches that cannot influence the final decision, significantly reducing the number of nodes evaluated. Transposition tables complement this by hashing board positions to store and reuse prior evaluations, avoiding redundant computation when equivalent states are reached via different move sequences. Advanced approaches incorporate Monte Carlo tree search (MCTS), which builds an asymmetric search tree through random playouts and balances exploration against exploitation via upper confidence bounds, proving effective for high-branching-factor games like Go. In chess, neural networks provide sophisticated evaluation functions; for instance, Stockfish's NNUE (Efficiently Updatable Neural Network) uses a shallow network trained on vast datasets to estimate position values, replacing traditional handcrafted heuristics. Game-tree complexity poses the primary challenge, with the total number of nodes scaling exponentially as b^d, where b is the average branching factor and d is the search depth, rendering full exploration intractable for most games. Mitigation strategies include parallel search algorithms that distribute node evaluation across multiple processors, hashing techniques such as transposition tables for caching results, and precomputed endgame databases such as the Nalimov tablebases, which store distance-to-mate information for up to six pieces using compact indexing and compression.
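Retrograde analysis can be illustrated on a deliberately tiny state space—a counting game with moves {1, 3, 4} under normal play, a hypothetical stand-in for chess endgames. Values are seeded at the terminal state and propagated backward through earlier states, exactly as tablebase construction does at scale:

```python
def retrograde(n_max, takes=(1, 3, 4)):
    """value[n] is True iff the player to move wins with n tokens left.
    Computed backward from the terminal state, tablebase-style."""
    value = [False] * (n_max + 1)    # value[0]: no move available, a loss
    for n in range(1, n_max + 1):
        # n is a win iff some move reaches a position that loses for the
        # opponent; all successors are already solved when n is processed.
        value[n] = any(not value[n - t] for t in takes if t <= n)
    return value

v = retrograde(9)
# Losing (player-to-move) positions for the {1, 3, 4} subtraction game
# repeat with period 7: n congruent to 0 or 2 mod 7.
assert [n for n in range(10) if not v[n]] == [0, 2, 7, 9]
```

Real tablebase generators follow the same pattern but index billions of piece configurations instead of a single counter, and store distance-to-mate rather than a boolean.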
Looking ahead, quantum computing holds theoretical promise for accelerating game-tree search, for example through proposed quantum speedups of the random-playout step in Monte Carlo tree search, potentially offering gains in sampling or optimization. However, as of 2025, no practical breakthroughs have enabled quantum solutions for large-scale game solving beyond small combinatorial instances.
