
Evaluation function

An evaluation function in game-playing artificial intelligence is a heuristic function that estimates the expected utility or relative desirability of a state in a game, particularly in adversarial games like chess, where it approximates the value of non-terminal positions when full exploration of the game tree is computationally infeasible. Pioneered by Claude Shannon in his 1950 paper on programming computers to play chess, the function assesses long-term advantages and disadvantages of a position, such as material balance and positional control, to guide algorithms like minimax in selecting moves. It is typically formulated as a linear combination of board features, Eval(s) = w₁f₁(s) + w₂f₂(s) + … + wₙfₙ(s), where the fᵢ are quantifiable attributes (e.g., number of pieces or center control) and the wᵢ are weights derived from expert knowledge or machine learning; this form enables efficient depth-limited search by treating cutoff nodes as pseudo-terminals. In practice, a strong evaluation function must be both computationally efficient and highly correlated with actual winning probability, often incorporating domain-specific heuristics like piece values (pawn = 1, queen = 9) or mobility metrics to outperform opponents in complex games. Its effectiveness has been central to advances in game AI, from early chess programs to modern systems, where tuning via self-play or supervised learning refines weights to better predict outcomes.

Fundamentals

Definition

An evaluation function is a static heuristic function employed in game-playing programs to assign a numerical score to a given game state or position, approximating its desirability or strength from the perspective of a specific player. This score serves as an estimate of the expected outcome under optimal play, guiding move selection when exhaustive search of the game tree is infeasible due to computational limits. In essence, it provides a quick assessment of position quality without simulating all possible future moves, relying on domain-specific features to differentiate between favorable and unfavorable states.

Mathematically, an evaluation function can be represented as \text{eval}(s) \mapsto v, where s denotes the game state (or position) and v is a scalar indicating the estimated value. In two-player zero-sum games, v typically ranges from large negative values (certain loss for the current player) to large positive values (certain win), with values near 0 representing approximate equality, such as a draw or a balanced position. For terminal positions, \text{eval}(s) exactly matches the game's outcome: +∞ or +1 for a win, -∞ or -1 for a loss, and 0 for a draw, aligning with standard game-theoretic utilities.

Key assumptions underlying evaluation functions include their heuristic nature, which compensates for incomplete information about future developments by approximating the value of a position. They are designed to be independent of search depth, providing a consistent static assessment regardless of how deeply the game tree is explored, though their accuracy improves when applied closer to terminal states. This approach presupposes familiarity with basic game-theoretic concepts, such as the ternary outcomes of win, loss, and draw in perfect-information games.

The concept of the evaluation function originated in early game-playing programs of the 1950s, notably in the work of Herbert Simon, Allen Newell, and J. C. Shaw on chess programs at the RAND Corporation and the Carnegie Institute of Technology. Their 1958 paper described evaluation routines that numerically scored board positions based on material and positional factors, marking a foundational step in using heuristics for complex problem-solving in games. These early efforts integrated evaluation functions with search algorithms like minimax to enable practical gameplay on limited hardware.

In adversarial search algorithms for two-player zero-sum games, evaluation functions approximate the utility of positions at leaf nodes of the game tree, where exhaustive search to terminal states becomes infeasible owing to the exponential explosion caused by high branching factors (often exceeding 30 in complex games) and the need for depths of 10 or more plies to capture meaningful strategy. This depth-limited approach treats non-terminal positions at the cutoff horizon as pseudo-terminals, applying the evaluation to estimate long-term prospects and mitigate the "horizon effect," in which important developments just beyond the search depth are overlooked.

The integration with the minimax algorithm is foundational: the evaluation function provides static estimates for non-terminal cutoff positions, enabling recursive value propagation up the tree. Formally, the minimax value v(s) for a state s at depth limit d is defined as

v(s) = \begin{cases} \text{utility}(s) & \text{if } s \text{ is terminal} \\ \max_{a \in \text{actions}(s)} v(\text{result}(s, a)) & \text{if maximizing player and } d > 0 \\ \min_{a \in \text{actions}(s)} v(\text{result}(s, a)) & \text{if minimizing player and } d > 0 \\ \text{eval}(s) & \text{if depth limit reached} \end{cases}

with the optimal move selected as the action maximizing (or minimizing) this backed-up value at the root.
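As a minimal illustration (not drawn from any specific engine), the recurrence above translates directly into a depth-limited recursive search; State, Action, and the helper functions below are assumed game-specific placeholders:

#include <algorithm>
#include <cmath>
#include <vector>

struct State;                                // game-specific position type (assumed)
struct Action;                               // game-specific move type (assumed)
bool is_terminal(const State& s);
double utility(const State& s);              // exact outcome at true terminals
double eval(const State& s);                 // static heuristic estimate
std::vector<Action> actions(const State& s);
State result(const State& s, const Action& a);

double minimax(const State& s, int depth, bool maximizing) {
    if (is_terminal(s)) return utility(s);   // exact value at true terminals
    if (depth == 0) return eval(s);          // pseudo-terminal: static estimate
    double best = maximizing ? -INFINITY : INFINITY;
    for (const Action& a : actions(s)) {
        double v = minimax(result(s, a), depth - 1, !maximizing);
        best = maximizing ? std::max(best, v) : std::min(best, v);
    }
    return best;
}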
This substitution allows minimax to approximate optimal play despite incomplete search, as pioneered in early computational game analyses.

Evaluation functions synergize with alpha-beta pruning by supplying the scalar scores that tighten the alpha (best value guaranteed so far for the maximizer) and beta (best value for the minimizer) bounds during search, permitting early cutoff of subtrees proven irrelevant to the root decision; for instance, once a maximizing node's value reaches beta, its remaining children can be pruned. This reduces the effective branching factor from b to roughly \sqrt{b} in well-ordered trees, exponentially accelerating search while preserving the minimax result. A sketch of this interaction appears below.

A central trade-off arises between evaluation accuracy and computational cost: highly accurate functions, incorporating nuanced features like king safety or pawn structure, demand more processing time per leaf, constraining search depth on limited hardware, whereas simpler linear combinations of material and mobility enable deeper searches, often 2-4 plies more, for superior tactical play. Empirical studies confirm that modest accuracy gains can yield disproportionate strength improvements when paired with deeper search.

The role of evaluation functions has evolved from fixed-depth minimax implementations in 1970s programs, which rigidly halted at a preset horizon and relied heavily on the evaluation for all assessments, to modern iterative deepening frameworks starting in the 1980s. These incrementally deepen searches (e.g., 1 ply, then 2, up to the time allowance), reusing shallower results to refine move ordering and principal lines while applying evaluations primarily at the deepest leaves, thus mitigating time overruns and enhancing reliability. Beyond games, evaluation functions generalize to heuristic guidance in non-adversarial optimization, estimating solution quality to steer local search toward global optima in domains such as scheduling, though their primary impact remains in competitive, zero-sum settings.
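The following sketch shows the alpha-beta variant of the same depth-limited search, using the same assumed placeholders (State, Action, actions, result, eval) as the minimax sketch above; eval() still scores the horizon, while the [alpha, beta] window enables cutoffs:

double alphabeta(const State& s, int depth, double alpha, double beta, bool maximizing) {
    if (is_terminal(s)) return utility(s);
    if (depth == 0) return eval(s);          // static score at the horizon
    if (maximizing) {
        double best = -INFINITY;
        for (const Action& a : actions(s)) {
            best = std::max(best, alphabeta(result(s, a), depth - 1, alpha, beta, false));
            alpha = std::max(alpha, best);
            if (best >= beta) break;         // beta cutoff: minimizer will avoid this node
        }
        return best;
    } else {
        double best = INFINITY;
        for (const Action& a : actions(s)) {
            best = std::min(best, alphabeta(result(s, a), depth - 1, alpha, beta, true));
            beta = std::min(beta, best);
            if (best <= alpha) break;        // alpha cutoff: maximizer will avoid this node
        }
        return best;
    }
}

Called at the root as alphabeta(s, d, -INFINITY, INFINITY, true), it returns the same value as plain minimax while visiting far fewer nodes when moves are well ordered.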

Construction Methods

Handcrafted Functions

Handcrafted evaluation functions rely on domain expertise to manually define and weight key features of a position, forming a linear approximation of its value. Experts identify informative attributes, such as material balance (the relative value of captured and remaining pieces) and positional advantages (like control of the board center), then combine them linearly to produce a score that guides search algorithms toward promising moves. This approach typically employs a weighted sum of the form \text{eval}(s) = \sum_{i=1}^{n} w_i \cdot f_i(s), where s represents the game state, f_i(s) are normalized feature values (often ranging from -1 to 1 or scaled to piece units), and w_i are expert-assigned weights reflecting each feature's strategic importance. Early implementations, such as the 1967 Greenblatt chess program (also known as MacHack VI), exemplified this simplicity by primarily using material counts, assigning fixed values like 1 for pawns and 9 for queens, augmented with basic positional bonuses for piece development and king safety.

The design process begins with feature selection, where developers choose a set of measurable properties based on domain knowledge and expert analysis, followed by weighting to balance their contributions. Weights are refined iteratively through empirical testing: positions from game databases are evaluated against known outcomes (e.g., wins, losses, or draws), and adjustments are made manually or via automated optimization to minimize prediction errors. For instance, in chess, developers might test thousands of grandmaster games to tune penalties for weaknesses like isolated pawns. Common features include king safety (assessing exposure to attacks), pawn structure (evaluating chains or passed pawns for advancement potential), and mobility (counting legal moves for pieces to quantify activity). This manual process ensures the function captures strategic nuances but requires deep game knowledge; a minimal sketch of such a weighted feature sum appears below.

These functions offer key advantages in interpretability, since experts can inspect and justify individual terms, and in computational speed, as they avoid the overhead of data-driven models, enabling real-time evaluation in resource-constrained environments. However, they are inherently limited by human expertise, potentially overlooking subtle patterns or struggling with novel positions outside the designer's experience, making them brittle to evolving strategies. In modern game AI, handcrafted functions have been surpassed in strength by learned alternatives, such as those in AlphaZero, which achieved superhuman performance without manual feature engineering, though they persist as baselines or in hybrid systems for their transparency and efficiency.
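As an illustrative sketch (not taken from any particular engine), a handcrafted evaluation reduces to a dot product of feature extractors and hand-tuned weights; the feature set, weights, and Position type here are all assumptions:

#include <array>

struct Position;                         // game-specific state type (assumed)
constexpr int kNumFeatures = 3;

// Hypothetical feature extractors; in chess these might be material balance,
// center control, and mobility, each normalized to pawn units.
std::array<double, kNumFeatures> features(const Position& s);

// Expert-assigned weights reflecting each feature's strategic importance.
constexpr std::array<double, kNumFeatures> kWeights = {1.0, 0.3, 0.1};

// eval(s) = sum_i w_i * f_i(s)
double eval(const Position& s) {
    const auto f = features(s);
    double score = 0.0;
    for (int i = 0; i < kNumFeatures; ++i)
        score += kWeights[i] * f[i];
    return score;                        // positive favors the evaluating side
}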

Learned Functions

Learned evaluation functions in game AI employ machine learning techniques, primarily neural networks, to approximate the value of game states by learning patterns from large datasets of positions and outcomes. These functions are typically trained using supervised learning, where positions are labeled with win/loss/draw outcomes or estimated values from expert play or simulations, or through reinforcement learning methods that update estimates based on temporal differences in rewards. Common architectures include feedforward neural networks for simpler board representations and convolutional neural networks (CNNs) for spatial games like Go, which capture local patterns such as piece interactions and board control.

Training paradigms for these functions often involve self-play, where agents generate their own data by simulating games against themselves, producing policy-value networks that simultaneously learn move probabilities and state values. Alternatively, imitation learning from expert human games provides initial supervision, with loss functions combining mean squared error for value prediction and cross-entropy for policy matching, often augmented by regularization terms like L2 penalties to prevent overfitting. In reinforcement learning setups, temporal difference (TD) methods bootstrap value estimates from subsequent states, enabling learning without full game rollouts; a minimal TD update is sketched below.

Key advancements trace back to the early 1990s with TD-Gammon, which used TD(λ) learning on a neural network to master backgammon through self-play, achieving expert-level performance without human knowledge. Post-2010s developments shifted to deep learning, with CNN-based evaluators demonstrating superior accuracy in complex games, as seen in early Go applications predicting expert moves at human dan levels. Efficiency improvements came via quantized networks and specialized architectures like efficiently updatable neural networks (NNUE), which enable rapid incremental updates during search by representing inputs as sparse half king-piece (HalfKP) features, reducing computational overhead on CPUs.

These learned functions excel at modeling non-linear interactions among game elements, such as subtle positional trade-offs that handcrafted rules might overlook, leading to more accurate evaluations in high-branching-factor games. However, their black-box nature complicates interpretability and debugging, and training requires substantial computational resources, often involving millions of simulated games on GPU clusters. To mitigate these issues, hybrid approaches integrate learned components with handcrafted features, such as material counts or mobility scores, blending data-driven insights with domain-specific priors for enhanced robustness and faster convergence.

By 2025, learned evaluation functions have achieved widespread adoption in commercial and open-source game engines, powering top-performing systems in chess and shogi through NNUE integrations that rival or exceed traditional methods in strength while maintaining real-time efficiency. Ongoing research explores multimodal inputs, incorporating not just current board states but also move histories or auxiliary data like time controls, to further refine evaluations in dynamic environments.
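As a minimal, hedged illustration of temporal-difference training (a TD(0) simplification of the TD(λ) scheme TD-Gammon used, applied to a linear evaluator rather than a neural network), the sketch below nudges weights toward the bootstrapped value of the next state after each transition; all names are placeholders:

#include <array>

constexpr int kNumFeatures = 3;
using Features = std::array<double, kNumFeatures>;
using Weights  = std::array<double, kNumFeatures>;

// Linear value estimate V(s) = w · f(s).
double value(const Features& f, const Weights& w) {
    double v = 0.0;
    for (int i = 0; i < kNumFeatures; ++i) v += w[i] * f[i];
    return v;
}

// TD(0) update after observing a transition from f_cur to f_next.
// alpha is the learning rate; reward is nonzero only at game end (+1/-1/0).
void td_update(Weights& w, const Features& f_cur, const Features& f_next,
               double reward, double alpha) {
    double delta = reward + value(f_next, w) - value(f_cur, w);  // TD error
    for (int i = 0; i < kNumFeatures; ++i)
        w[i] += alpha * delta * f_cur[i];  // gradient of linear V wrt w[i] is f[i]
}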

Exact Evaluation via Tablebases

Exact evaluation via tablebases refers to the use of precomputed databases that provide perfect assessments of win, loss, or draw outcomes for endgame positions in games like chess, limited to configurations with a specific maximum number of pieces on the board. These tablebases are constructed through retrograde analysis, a backward-induction technique that exhaustively enumerates all reachable positions from terminal states, determining the optimal result and distance to it for each. For instance, in chess, 7-piece tablebases cover all positions involving up to seven pieces (including kings), storing outcomes such as distance-to-mate (DTM) or distance-to-zeroing (DTZ) metrics.

The construction process employs dynamic programming, building on Richard Bellman's foundational work of the 1950s, applying backward induction to solve subgames iteratively. Starting from terminal positions (e.g., checkmates or stalemates), the algorithm generates predecessor positions via "un-move" generation and classifies them based on the results of their successors, propagating values until all positions up to the piece limit are resolved; a sketch of this loop follows below. Storage is achieved through efficient indexing, such as bijective mappings of positions to Gödel numbers, combined with compression techniques like bit-packing to manage the enormous state space; for example, the Lomonosov 7-piece tablebases, computed in 2012, encompass over 500 trillion positions in approximately 140 TB after compression.

In practice, tablebases are integrated into game engines for lookup during adversarial search, replacing approximate evaluation at leaf nodes with exact values whenever a position falls within the database's scope, with probe results typically cached to avoid redundant lookups and ensure efficient access even in tournament play. This probing enhances search accuracy without computational overhead beyond the initial storage cost. Scalability is constrained by the combinatorial explosion of positions, making tablebases feasible primarily for endgames, where reduced material limits the state space compared to midgame or opening phases; the full 7-piece chess tablebases were completed in 2012 on the Lomonosov supercomputer at Moscow State University. Advantages include zero evaluation error for covered positions, enabling perfect play and theoretical insights, while disadvantages encompass massive storage requirements and applicability only to low piece counts, beyond which approximate methods must take over. Extensions involve real-time probing in modern engines, such as Stockfish and Leela Chess Zero, which access remote or local tablebases during analysis. Ongoing research pursues larger bases: partial 8-piece tablebases have been generated for specific configurations, with efforts by Marc Bourzutschky resolving significant subsets and uncovering new records; as of August 2025, progress includes the identification of a 400-move winning line in pawnless 8-piece endgames, though a complete set remains computationally prohibitive at estimated petabyte scales.
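The core retrograde loop can be sketched as follows for a simplified win/loss/draw game; the Position type, helper functions, and hashability are all assumptions, and none of this mirrors a real tablebase generator's API or compression machinery:

#include <queue>
#include <unordered_map>
#include <vector>

struct Position;                                      // assumed hashable (std::hash<Position>)
enum Value { UNKNOWN, WIN, LOSS };                    // from the side to move's perspective

std::vector<Position> enumerate_positions();          // all legal positions (assumed)
bool is_terminal_loss(const Position& p);             // e.g., side to move is checkmated
std::vector<Position> predecessors(const Position& p); // "un-move" generation
std::vector<Position> successors(const Position& p);

std::unordered_map<Position, Value> solve() {
    std::unordered_map<Position, Value> table;
    std::queue<Position> frontier;
    // Seed with positions already lost for the side to move.
    for (const Position& p : enumerate_positions())
        if (is_terminal_loss(p)) { table[p] = LOSS; frontier.push(p); }
    // Propagate values backward through un-moves.
    while (!frontier.empty()) {
        Position p = frontier.front(); frontier.pop();
        for (const Position& q : predecessors(p)) {
            if (table.count(q)) continue;             // already classified
            if (table[p] == LOSS) {
                table[q] = WIN;                       // q can move into a lost position for the opponent
            } else {
                bool all_win = true;                  // q is lost only if every move hands the opponent a win
                for (const Position& r : successors(q)) {
                    auto it = table.find(r);
                    if (it == table.end() || it->second != WIN) { all_win = false; break; }
                }
                if (!all_win) continue;
                table[q] = LOSS;
            }
            frontier.push(q);
        }
    }
    return table;                                     // positions left UNKNOWN are draws
}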

Applications in Games

In Chess

In chess, evaluation functions assess the relative strength of a position by quantifying advantages in material, positional elements, and strategic factors tailored to the game's tactical and piece-based nature. Standard material values assign 1 point to a pawn, 3 points each to a knight and a bishop, 5 points to a rook, and 9 points to a queen, forming the core of most evaluations, since these reflect average exchange equivalences derived from extensive analysis. Beyond material, chess-specific features include penalties for pawn-structure weaknesses, such as -0.3 to -0.5 points for isolated or doubled pawns that limit mobility and create vulnerabilities, and king-safety metrics that penalize positions where enemy pieces exert pressure near the king (e.g., -0.2 points per unprotected square in the king's vicinity). Piece mobility contributes positively, often adding about 0.1 points per legal move to reward active piece placement.

Piece-square tables enhance positional evaluation by using 2D arrays, one for each piece type and color, to assign bonuses or penalties based on board location, reflecting strategic ideals like central control or edge avoidance. For instance, a knight on c3 might receive +0.5 points for its central influence in the midgame, while the same knight on a1 incurs a -0.4 penalty for poor activity; these values are scaled in centipawns (1/100 of a pawn unit) for precision. Modern implementations, such as those in early Stockfish versions, employ tapered tables with separate midgame and endgame variants, interpolating between them according to a game-phase measure derived from the material remaining on the board, to adapt to phase-specific priorities like pawn-promotion potential.

Handcrafted evaluation functions in chess engines like pre-NNUE Stockfish combined these features into a linear formula summing material, piece-square table scores, and adjustments for pawn structure and king safety, tuned via automated testing on millions of positions. A basic sketch of such an evaluation might look like:
double evaluate(const Board& board) {
    double score = 0.0;  // positive favors White, in centipawns
    // Material balance
    score += (board.white_pawns   - board.black_pawns)   * 100;
    score += (board.white_knights - board.black_knights) * 300;
    // ... similar terms for bishops, rooks, and queens
    // Piece-square tables: location-dependent bonuses per piece type
    for (const Piece& piece : board.pieces()) {
        score += pst[piece.type][piece.square] * (piece.color == WHITE ? 1 : -1);
    }
    // Pawn-structure penalties (each side's isolated pawns count against it)
    score -= isolated_pawn_penalty(board.white_pawns) - isolated_pawn_penalty(board.black_pawns);
    // King safety and mobility differentials
    score += king_safety(board.white_king) - king_safety(board.black_king);
    score += mobility(board.white_pieces) - mobility(board.black_pieces);
    return score / 100.0;  // convert centipawns to pawn units
}
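The tapered piece-square scheme described above can be sketched as a linear interpolation between midgame and endgame scores, keyed to a material-based phase value; the constant, helper names, and weighting here are illustrative assumptions, not Stockfish's actual parameters:

// Tapered evaluation: blend midgame and endgame scores by game phase.
// game_phase() is assumed to map remaining material to [0, kMaxPhase],
// with kMaxPhase at the starting position and 0 with bare kings.
constexpr int kMaxPhase = 24;

double tapered_eval(const Board& board) {
    int phase  = game_phase(board);          // assumed helper
    double mg  = evaluate_midgame(board);    // midgame-weighted terms
    double eg  = evaluate_endgame(board);    // endgame-weighted terms
    return (mg * phase + eg * (kMaxPhase - phase)) / kMaxPhase;
}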
This approach, rooted in expert-designed heuristics, provided interpretable but limited accuracy compared to learned methods.

Neural-network integrations, particularly NNUE (Efficiently Updatable Neural Network), revolutionized chess evaluation by combining handcrafted speed with learned pattern recognition; Stockfish adopted NNUE in August 2020, using a shallow network with HalfKP input features (king-piece pairs) for efficient CPU updates during search. Trained on billions of positions generated from self-play or high-depth searches, NNUE evaluates via a forward pass yielding a score in pawn units, outperforming pure handcrafted evaluation by approximately 90 Elo points at standard time controls. By 2023, Stockfish had transitioned to fully neural evaluation. As of November 2025, Stockfish 17.1 achieves an Elo rating of approximately 3644 in CCRL benchmarks. A sketch of NNUE's incremental-accumulator idea appears at the end of this section.

For exact evaluation in endgames, chess engines integrate tablebases, precomputed databases providing perfect outcomes for positions with up to 7 pieces; Syzygy tablebases, whose 7-piece set was completed in 2018, cover all such configurations (over 423 trillion positions) and supply win/draw/loss outcomes together with distance metrics such as distance-to-zero (DTZ), enabling seamless probing during search to return exact results in place of heuristic scores. When applicable, these override heuristic evaluation, ensuring optimal play in late-game scenarios.

The evolution of chess evaluation functions traces from early hardware-limited systems like Belle in the 1970s, which used basic material and mobility heuristics on dedicated chess hardware, to modern neural hybrids. Belle's evaluation emphasized simple piece values and captures, achieving competitive play by 1978. The 2018 launch of Leela Chess Zero introduced AlphaZero-inspired search with pure neural evaluation trained via self-play, surpassing traditional engines in strategic depth. By 2025, NNUE-dominant engines like Stockfish (Elo ~3644) and Leela variants demonstrate 100+ Elo superiority over handcrafted predecessors through scalable neural architectures, blending historical heuristics with data-driven accuracy.
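As a heavily simplified illustration of the "efficiently updatable" idea mentioned above (a toy sketch, not Stockfish's actual architecture), the first-layer accumulator is a sum of weight columns for the active binary features, which a move updates incrementally rather than recomputing from scratch; all sizes and names are invented:

#include <array>
#include <vector>

// Toy NNUE-style accumulator: layer-1 output = sum of weight columns for
// active features (e.g., king-piece pairs).
constexpr int kHidden = 256;
using Accumulator = std::array<float, kHidden>;

// weight[f] is the trained hidden-layer column for feature f (assumed given).
extern std::vector<Accumulator> weight;

// Full refresh: used rarely (e.g., when a king move invalidates all features).
Accumulator refresh(const std::vector<int>& active_features) {
    Accumulator acc{};                           // zero-initialized
    for (int f : active_features)
        for (int i = 0; i < kHidden; ++i) acc[i] += weight[f][i];
    return acc;
}

// Incremental update on a move: subtract removed features, add new ones.
// This is what keeps per-node evaluation cheap during search.
void update(Accumulator& acc, const std::vector<int>& removed,
            const std::vector<int>& added) {
    for (int f : removed)
        for (int i = 0; i < kHidden; ++i) acc[i] -= weight[f][i];
    for (int f : added)
        for (int i = 0; i < kHidden; ++i) acc[i] += weight[f][i];
}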

In Go

Evaluation functions in Go must account for the game's unique strategic elements, including territory control, where players aim to enclose empty areas on the 19x19 board; influence, which measures potential control over surrounding points; eye formation, essential for securing group vitality by creating unfillable internal spaces; and ko threats, tactical sequences to recapture positions under superko rules that prevent repetition. These aspects interact with the scoring system, such as Japanese rules that tally enclosed territory plus captures, or Chinese rules that count area (stones plus enclosed territory), influencing how programs assess positions.

Traditional handcrafted evaluation functions in early Go programs emphasized local features like liberty counting, which tallies adjacent empty intersections for each group to gauge safety, and shape evaluation distinguishing thick (secure, influential) from thin (vulnerable) formations; a minimal liberty-counting sketch appears below. These were often linear combinations of weighted terms, manually tuned with input from strong players to approximate win probabilities. Programs of the 1980s and 1990s employed such handcrafted approaches, focusing on local tactical assessments.

Neural-network advances revolutionized Go evaluation with AlphaGo in 2016, introducing value networks: deep convolutional neural networks (CNNs) trained on the full 19x19 board to estimate win probability directly from the position. The value output approximates the sigmoid of the network's output, v(s) \approx \sigma(\text{NN}(s)), where s is the board state and \sigma is the logistic function, providing a scalar between 0 and 1 representing the current player's winning chance. This approach outperformed handcrafted methods by capturing global patterns and long-term strategy without explicit feature engineering. Modern engines like KataGo, developed from 2018 onward, build on this with enhanced architectures incorporating nested residual blocks for improved efficiency and accuracy in evaluating complex positions, enabling stronger long-term strategic assessment through reinforcement learning. Open-source efforts such as Leela Zero, an implementation of AlphaGo Zero principles, have democratized access, training distributed networks that rival proprietary systems while emphasizing efficiency for broader adoption. By 2025, ongoing development aims at lightweight models with real-time inference to facilitate deployment on mobile devices, and open-source engines continue to improve on superhuman baselines. These innovations highlight efficiency gains, allowing high-fidelity evaluations on resource-constrained hardware.

Go's immense state space, estimated at approximately 10^{170} possible positions, renders exact methods like tablebases infeasible and necessitates learned approximations over exhaustive handcrafted rules, as traditional approaches struggle with the game's holistic, pattern-driven complexity.
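As a small illustration of the liberty-counting feature described above (a generic flood fill, not taken from any specific Go program), the following counts the liberties of the group containing a given stone:

#include <set>
#include <utility>
#include <vector>

enum Cell { EMPTY, BLACK, WHITE };
constexpr int N = 19;
using Board = std::vector<std::vector<Cell>>;  // N x N grid (illustrative layout)

// Count liberties (adjacent empty points) of the group at (row, col),
// which is assumed to hold a stone.
int count_liberties(const Board& board, int row, int col) {
    Cell color = board[row][col];
    std::set<std::pair<int, int>> group_seen, liberties;
    std::vector<std::pair<int, int>> stack = {{row, col}};
    group_seen.insert({row, col});
    while (!stack.empty()) {
        auto [r, c] = stack.back(); stack.pop_back();
        const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= N || nc < 0 || nc >= N) continue;
            if (board[nr][nc] == EMPTY) {
                liberties.insert({nr, nc});                 // empty neighbor = liberty
            } else if (board[nr][nc] == color && group_seen.insert({nr, nc}).second) {
                stack.push_back({nr, nc});                  // same-color stone joins the group
            }
        }
    }
    return static_cast<int>(liberties.size());
}

A handcrafted evaluator might then score group safety as a weighted term over all groups, penalizing those with few liberties.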
