
Computer Go

Computer Go is a subfield of artificial intelligence focused on developing computer programs capable of playing the ancient game of Go, a strategic board game originating in China over 2,500 years ago that involves placing black and white stones on a 19x19 grid to control territory. Unlike chess, Go's immense complexity—featuring a branching factor of around 250 possible moves per turn and an estimated 10^170 possible game positions—posed unique challenges for early AI approaches, delaying significant progress until advances in machine learning and computing power. The field began in the late 1960s with rudimentary programs, evolved through decades of incremental improvements, and achieved a breakthrough in 2016 when DeepMind's AlphaGo defeated world champion Lee Sedol, marking the first time a computer bested a top human player in full-scale Go without handicaps. Early efforts in Computer Go emerged in the 1960s and 1970s, with the first playable program developed by Albert Zobrist in 1968 as part of his thesis on pattern recognition, capable of beating complete beginners at a level of 20-25 kyu. By the 1980s and 1990s, personal computers enabled the creation of stronger programs like Many Faces of Go, which reached approximately 5 kyu strength by the early 2000s, though still far below professional levels; competitions such as the Ing Cup and Computer Go Olympiad began in this era, fostering development but highlighting the limitations of traditional search algorithms like alpha-beta pruning. A pivotal innovation arrived in 2006 with the introduction of Monte Carlo tree search (MCTS) by Rémi Coulom and others, which dramatically improved program performance by simulating random playouts to evaluate positions, leading to bots like Crazy Stone and MoGo achieving dan-level play (around 1 dan) by 2008 and winning against professionals with handicaps. The modern era of Computer Go was transformed by deep learning and reinforcement learning techniques, culminating in AlphaGo's development by DeepMind in 2015-2016; this program combined convolutional neural networks for move prediction (policy network) and outcome evaluation (value network) with MCTS, trained initially on millions of human game positions and then through self-play, enabling it to defeat European champion Fan Hui 5-0 in 2015 and Lee Sedol 4-1 in 2016, an event viewed by over 200 million people worldwide. Subsequent iterations like AlphaGo Zero (2017), which learned solely from self-play without human data, surpassed its predecessor in just 40 days and defeated the original AlphaGo 100-0, while AlphaZero extended these methods to chess and shogi, demonstrating the generality of the approach. Today, open-source programs such as Leela Zero and KataGo, inspired by AlphaGo's architecture and runnable on consumer hardware, have far exceeded professional human strength, with models as of 2025 rated at over 14,000 Elo on internal self-play scales (far beyond any human professional), dominating competitions like the World Computer Go Championship and UEC Cup. By 2025, further advancements in network architectures and distributed training have pushed these AIs to even greater strengths. These advancements have not only revolutionized AI research in areas like reinforcement learning and decision-making under uncertainty but also influenced Go strategy among human players, with AlphaGo awarded an honorary 9-dan professional rank by the Korean Baduk Association.

Introduction and Historical Overview

Defining Computer Go

Computer Go is the subfield of artificial intelligence focused on developing algorithms and programs capable of playing the board game Go at human-mastery or superhuman levels, serving as a longstanding benchmark for evaluating machine intelligence due to the game's profound strategic demands. Go, originating in ancient China over 2,500 years ago, is played on a board typically consisting of 19 horizontal and 19 vertical lines, creating 361 intersections where players place stones. Two players alternate turns, with Black starting first; each places one stone of their color (black or white) on an empty intersection, aiming to surround territory and opponent's stones while securing their own positions. Capturing occurs when a player's stones fully surround an opponent's stone or group, depriving it of all adjacent empty intersections known as liberties; captured stones are removed from the board and become prisoners, which add to the capturer's score. A crucial rule, the ko prohibition, prevents immediate recapture of a single stone when that recapture would recreate the previous position, avoiding repetitive cycles that could loop indefinitely. At the game's end, both players pass in succession, and scoring counts the empty intersections fully enclosed by each player's stones (territory) plus the number of prisoners held, with the player controlling more points declared the winner; compensation for Black's first-move advantage, called komi, is often added to White's score in even games. Go poses unique challenges for AI compared to other perfect-information games like chess, primarily due to its enormous state space and average branching factor of around 250 legal moves per position—roughly seven times higher than chess's 35—rendering traditional brute-force search methods computationally infeasible without advanced heuristics or learning techniques. Performance in computer Go is evaluated using standardized metrics, including Elo ratings adapted for the game, traditional Go ranks (kyu levels for novices decreasing from 30 kyu to 1 kyu, and dan levels from 1 to 9 for experts), and empirical win rates against human opponents of verified rank. For example, a 1-dan amateur corresponds to an approximate Elo rating of 2000–2200, while top professionals exceed 2700 Elo equivalents. Since the emergence of AI research in the 1950s, Go has been viewed as a premier testbed for machine intelligence, emblematic of the field's aspirations to replicate human-like intuition and strategic reasoning in machines.
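
The scoring rules above reduce to simple arithmetic. The following Python sketch is illustrative only (the point counts in the example are made up) and computes a territory-style result from enclosed points, prisoners, and komi as described in this section.

```python
# Illustrative sketch of Japanese-style territory scoring: enclosed empty
# points plus prisoners for each side, with komi added to White's total.

def territory_score(black_territory, black_prisoners,
                    white_territory, white_prisoners, komi=6.5):
    """Return (black_points, white_points, result_string)."""
    black = black_territory + black_prisoners
    white = white_territory + white_prisoners + komi
    margin = black - white
    if margin > 0:
        result = f"B+{margin}"
    elif margin < 0:
        result = f"W+{-margin}"
    else:
        result = "Draw"  # cannot occur with a fractional komi such as 6.5
    return black, white, result

# Example: Black encloses 61 points and holds 3 prisoners; White encloses
# 58 points and holds 2 prisoners. With a 6.5 komi, White wins by 2.5.
print(territory_score(61, 3, 58, 2))   # (64, 66.5, 'W+2.5')
```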

Early Development (1950s–1990s)

The early development of computer Go began in the mid-20th century amid broader efforts in artificial intelligence to simulate board games, highlighting the game's immense computational challenges from the outset. In the 1950s and 1960s, foundational analyses underscored Go's complexity, with early estimates placing the average branching factor—the number of legal moves per position—at around 250, far exceeding chess's approximately 35 and rendering exhaustive search impractical even on emerging computers. The first known Go program appeared in 1960, developed by Lefkovitz as an exploratory research effort, though it was rudimentary and limited to basic move generation. By 1968–1969, Albert Zobrist created the first program capable of playing a complete game, incorporating shallow search with an influence function to evaluate board positions by estimating territorial control through potential propagation across the board. Zobrist's work also introduced hashing techniques for efficient position representation, a method that became foundational in later programs. These initial efforts relied on search limited to shallow depths, often evaluating only tactical aspects like captures, and achieved strengths equivalent to absolute beginners, around 25–30 kyu. During the 1970s and 1980s, developers shifted toward knowledge-based approaches, integrating hand-crafted heuristics to mimic human intuition and address the limitations of pure search in handling Go's strategic depth. Programs like Ryder's 1971 thesis implementation used abstracted representations of groups and eyes to evaluate midgame stability, while Bruce Wilcox's Interim series (starting in 1972) and later NEMESIS (early 1980s) employed sector lines and pattern knowledge for fuseki (opening) and joseki (corner sequences) advice. NEMESIS, one of the earliest knowledge-engineered systems, incorporated rules for shape evaluation and tactical reading, competing in human tournaments as well as early computer events such as the 1985 Ing Cup and marking a step toward practical play. These systems augmented pattern knowledge with alpha-beta pruning to reduce the effective branching factor in tactical subtrees, but performance remained weak, typically below 20 kyu, as heuristics struggled with global strategy and the game's interconnectedness. Developers prioritized pattern databases for local features, such as ladder shapes or snapback threats, yet the vast search space—estimated at 10^{170} possible positions—overwhelmed even optimized searches, confining deep reading to local subproblems such as yose (endgame) or life-and-death problems. By the 1990s, computer Go saw incremental advances with the emergence of commercial software and stronger engines, though still far from professional levels. Programs like Chen Zhixing's Handtalk (early 1990s) and its successor Goemate utilized extensive databases of joseki and tesuji (tactical moves), combined with improved evaluation functions based on influence and territory scoring, achieving typical strengths of 5–10 kyu on 19x19 boards. Several such engines evolved into commercial releases, becoming widely available Go software for personal computers, while others like Many Faces of Go by David Fotland incorporated adaptive heuristics and deeper tactical search. Alpha-beta pruning remained central, but its efficacy diminished in the midgame due to the high branching factor and lack of sharp material distinctions, often requiring domain-specific reductions like liberty counting for capturing sequences. Key limitations persisted: programs excelled in local tactics but faltered in global balance, such as sabaki (light, flexible play inside the opponent's sphere of influence) or thick-thin distinctions, with overall play handicapped by 20+ stones against dan-level humans. This era established computer Go as a benchmark for AI research challenges, paving the way for probabilistic methods in the following decades.

Rise of Monte Carlo Methods (2000s–2014)

The adoption of Monte Carlo methods marked a pivotal shift in computer Go during the mid-2000s, transitioning from knowledge-intensive approaches to simulation-based search that scaled effectively with computational power. Traditional search methods struggled with Go's vast branching factor and lack of a reliable evaluation function, but Monte Carlo tree search (MCTS) addressed these by building an asymmetric search tree guided by random playouts, or rollouts, to estimate position values without requiring deep domain heuristics. In this framework, rollouts involve simulating complete games from a given board position using simple random or lightly informed policies to approximate win probabilities, replacing exact evaluation with statistical sampling that improves accuracy through multiple iterations. A breakthrough came in 2006 with the introduction of Upper Confidence Bound applied to Trees (UCT), an enhancement to MCTS that balances exploration and exploitation by selecting moves based on an upper confidence bound formula, prioritizing promising branches while avoiding overcommitment to early favorites. The Go program MoGo, developed by Sylvain Gelly, Yizao Wang, and colleagues, was the first to apply UCT and rapidly demonstrated its potential by topping the 9x9 leaderboard on the Computer Go Server (CGOS) shortly after its July 2006 debut and winning multiple tournaments, including the 19x19 KGS Computer-Go Tournament in November 2006; Rémi Coulom's parallel work on Crazy Stone established the broader MCTS framework. This application of UCT to Go, building on prior bandit-based ideas, enabled programs to achieve stronger play by adaptively focusing simulations on uncertain positions. Between 2007 and 2010, UCT-based programs proliferated, reaching amateur dan levels on smaller boards and dominating competitions. The open-source Fuego framework, developed by Enzenberger and colleagues, incorporated UCT with enhancements like progressive widening and achieved top rankings in events such as the 2008 Computer Olympiad, where it played at approximately 2-3 kyu amateur strength on 19x19. Similarly, Pachi, an efficient UCT implementation by Petr Baudiš, emphasized modularity and reached around 4 dan on the KGS server by 2010 through optimized playouts and parallelization, often outperforming proprietary rivals in open tournaments. Crazy Stone, by Coulom, excelled in this era, securing victories like the 2007 UEC Cup in Tokyo, where it finished first ahead of entrants such as Katsunari, and a silver medal at the 12th Computer Olympiad, establishing Monte Carlo methods as the dominant paradigm. From 2010 to 2014, refinements further boosted performance, particularly through Rapid Action Value Estimation (RAVE), which augmented UCT by sharing value estimates across similar actions in different tree branches using all-moves-as-first (AMAF) heuristics, accelerating learning in Go's pattern-rich states. Introduced by Sylvain Gelly and David Silver, RAVE was integrated into programs like Pachi and improved rollout efficiency by up to 50% in early tests, enabling deeper searches. Hybrid approaches combined MCTS with pattern knowledge, where databases of expert game motifs guided playout policies or pruned low-value moves, as seen in Fuego's tactical search integrations that enhanced midgame evaluation without full neural components. By 2014, these advances yielded programs approaching 5-dan strength on 9x9 boards, as demonstrated by Zen's competitive but losing performance (close games, 0-4) against top human professionals in that format, but performance remained weaker on 19x19, typically at 2-3 dan amateur, due to the exponential search space demands.

Deep Learning Breakthroughs (2015–Present)

The advent of deep learning in Computer Go began with DeepMind's AlphaGo in 2015, which integrated convolutional neural networks (CNNs) as policy networks to suggest moves and value networks to evaluate board positions, combined with Monte Carlo tree search (MCTS) for decision-making. This architecture enabled AlphaGo to defeat Fan Hui, the European Go champion and a 2-dan professional, 5-0 in October 2015. In March 2016, an enhanced version beat Lee Sedol, a 9-dan world champion, 4-1 in a high-profile match in Seoul, marking the first time a computer program defeated a top human player in full Go under standard rules. These victories demonstrated how deep neural networks could approximate human-like intuition in a game with vast complexity, surpassing traditional search-based methods. By 2017, DeepMind advanced to AlphaGo Zero, which trained entirely through reinforcement learning without any human game data, starting from random moves and iteratively improving via self-play. After just three days of training on specialized hardware, AlphaGo Zero defeated the original AlphaGo (the Lee Sedol version) 100-0 in a private match, showcasing rapid learning and the discovery of novel strategies beyond human knowledge. An online variant, AlphaGo Master, achieved a 60-game winning streak against top professional players in early 2017, including victories over several 9-dan pros in rapid games. Later that year, AlphaZero extended this approach to multiple board games, learning Go, chess, and shogi from scratch using the same method and a single architecture, outperforming AlphaGo Zero in Go after approximately 13 hours of training on 5,000 TPUs by winning 60 games to 40 against it. These developments highlighted the generality of self-play reinforcement learning for strategic decision-making, with AlphaZero achieving superhuman performance across domains without rule-specific adjustments. From 2018 onward, open-source initiatives democratized these techniques, with Leela Zero released in October 2017 as a community-driven replication of AlphaGo Zero, relying on distributed volunteer computing for training via a deep residual network and MCTS. In 2019, KataGo emerged as another open-source engine, accelerating self-play learning by up to 50 times through optimizations like efficient search guidance and reduced computational overhead, reaching and then exceeding top professional strength with distributed training on modest hardware. By 2020, such programs had pushed estimated Elo ratings well beyond those of top human professionals (around 3500–3800 for leading 9-dan players), establishing superhuman benchmarks in both tactical precision and long-term strategy. As of 2025, KataGo and similar programs continue to improve through distributed training, achieving ratings exceeding 14,000 on internal self-play scales and dominating all computer Go competitions. Recent years (2023–2025) have emphasized more efficient network architectures and training, enabling stronger play on consumer hardware without proportional increases in compute. While no major superhuman leaps have occurred since AlphaZero, the emphasis has shifted to practical applications, such as human-AI collaboration tools for game analysis and teaching, fostering efficiency in training and real-time play.

Core Challenges in Computer Go

Strategic and Combinatorial Complexity

The game of Go presents immense combinatorial complexity due to its vast search space. On a standard 19×19 board, the average branching factor—the number of legal moves available per turn—is approximately 250, compared to about 35 in chess. This high branching factor arises from the open nature of the board, where stones can be placed almost anywhere without immediate capture, leading to an explosion in possible positions. The total number of legal board positions is estimated at roughly 2.08 × 10^{170}, far exceeding the analogous state-space complexity of chess (around 10^{46}) and even the number of atoms in the observable universe (about 10^{80}). Strategically, Go emphasizes global balance and long-term planning over the localized tactics dominant in chess. Players must manage influence (stones that exert pressure across the board to restrict opponent expansion), territory (secure enclosed areas for scoring), and the life or death of groups (clusters of connected stones that require at least two "eyes"—empty adjacent points—to survive capture). These elements demand evaluating interconnected threats and opportunities across the entire board, where a single move can influence distant regions, unlike chess's focus on piece trades and king safety. The opening phase, known as fuseki, exhibits high variability as players establish initial frameworks without fixed sequences, allowing diverse approaches to corner enclosures and central influence. Local corner patterns called joseki offer standard responses but branch into numerous variations depending on board context, often leading to fights over shape and efficiency. Advanced play involves ko fights—reciprocal captures where repeating a position is forbidden, requiring threats elsewhere to regain the ko—and concepts like sente (initiative, forcing the opponent to respond) versus gote (a responding move that cedes initiative), which dictate the tempo and sequencing of exchanges. These dynamics amplify strategic depth, as optimal play hinges on balancing local gains with global position. Human professional players typically accumulate thousands of games over their careers, drawing on intuition honed through selective study and experience, whereas AI systems like AlphaGo train on millions of self-play games and perform millions of Monte Carlo simulations during decision-making to explore the space by brute force. This disparity underscores Go's challenge for AI: capturing nuanced strategy requires not just computational power but approximations of human-like intuition to navigate the complexity efficiently.
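
To make the scale of these figures concrete, the short Python sketch below estimates game-tree sizes as b^d from the branching factors quoted above; the assumed game lengths (80 plies for chess, 150 for Go) are rough illustrative values, not precise measurements.

```python
# Back-of-the-envelope comparison of game-tree sizes implied by the branching
# factors above (b^d, computed in log10 to avoid overflow).
import math

def log10_tree_size(branching_factor, depth):
    return depth * math.log10(branching_factor)

chess = log10_tree_size(35, 80)    # ~35 moves per position, ~80 plies per game
go    = log10_tree_size(250, 150)  # ~250 moves per position, ~150 plies per game

print(f"chess game tree ~ 10^{chess:.0f}")   # roughly 10^123
print(f"go game tree    ~ 10^{go:.0f}")      # roughly 10^360
print("legal Go positions ~ 10^170; atoms in the observable universe ~ 10^80")
```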

Evaluation and Search Space Issues

In Computer Go, position evaluation presents unique challenges due to the game's emphasis on territorial control rather than capturing pieces. Unlike chess, where material balance provides a straightforward evaluation heuristic, Go lacks a simple material count, as stones do not have inherent values and their strength depends on interconnected groups and potential influence over board areas. Effective evaluation requires a holistic assessment of territorial potential, including subtle factors like stone connectivity, shape efficiency, and future influence, which traditional rule-based functions often fail to capture accurately without extensive search. This complexity leads to "greedy" decisions in early programs, where immediate territorial gains overlook long-term strategic vulnerabilities. The search space in Go exacerbates these evaluation issues, encompassing an estimated 10^{170} legal board positions and rendering full lookahead impossible even with advanced techniques. Search depth is severely limited, typically to a few moves in complex midgame positions, as the branching factor averages around 250 legal moves per turn—far higher than chess's 35. In the endgame, this manifests as the horizon effect, where fixed-depth searches fail to anticipate distant threats or opportunities, such as long ladder sequences that can capture groups just beyond the search horizon, leading to misjudged outcomes. While neural networks have mitigated some challenges, issues like high variance in rare, long-term scenarios and computational scaling for ultra-long games persist as of 2025. Early Monte Carlo tree search (MCTS) implementations amplified these problems through noise in simulations, where random playouts from leaf nodes produced inaccurate win rate estimates due to frequent blunders and lack of strategic guidance. Without informed policies, these playouts exhibited high variance in win rates, with success rates fluctuating significantly as simulation counts increased—for instance, dropping from 71% at 1,000 playouts to 61% at 256,000 on smaller boards. This noise often masked true position values, requiring millions of iterations to achieve reliable statistics. Scalability further compounds these hurdles on the standard 19x19 board, necessitating distributed frameworks to handle the computational demands of MCTS and neural network inference. Programs like AlphaGo relied on specialized hardware, such as multiple GPUs for parallel policy and value network inferences, with performance scaling sublinearly beyond two GPUs but enabling professional-level play through 1,920 CPUs and 280 GPUs in distributed setups. Later advancements, including tensor processing units (TPUs), have addressed neural inference bottlenecks, though the sheer volume of simulations still demands massive parallelization for professional-level analysis. Neural value functions have emerged as a partial solution to these challenges by approximating holistic position strengths without exhaustive search.
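
The sampling noise discussed above can be illustrated with a toy experiment: if playouts are modeled as independent coin flips with a fixed win probability (a simplifying assumption that ignores the systematic playout bias described in this section), the spread of the win-rate estimate shrinks only as the square root of the number of playouts, which is why reliable evaluations need very large simulation counts.

```python
# Illustrative sketch of Monte Carlo evaluation noise: the win-rate estimate
# from N playouts has standard error ~ sqrt(p*(1-p)/N), so separating a 52%
# move from a 50% move reliably requires thousands of playouts.
import random

def noisy_win_rate_estimate(true_win_prob, num_playouts, rng):
    wins = sum(rng.random() < true_win_prob for _ in range(num_playouts))
    return wins / num_playouts

rng = random.Random(0)
for n in (100, 1_000, 10_000, 100_000):
    estimates = [noisy_win_rate_estimate(0.52, n, rng) for _ in range(20)]
    spread = max(estimates) - min(estimates)
    print(f"{n:>7} playouts: 20 repeated estimates span roughly {spread:.3f}")
```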

Technical Components

Board State Representation

In computer Go, the standard Go board is represented as a 19×19 grid, where each intersection can be in one of three states: empty, occupied by a black stone, or occupied by a white stone. This basic structure is typically encoded using binary matrices or arrays, with separate 19×19 planes for each color and empty spaces to facilitate efficient updates during simulations. Additional channels may encode game-specific elements, such as the number of liberties (empty adjacent intersections) for groups of stones, which is crucial for capture detection, often using binned integer values across multiple planes to represent liberty counts from 1 to 8 or more. Ko status, which prevents immediate recapture in simple ko situations, is handled by tracking the most recent ko point as an additional flag or coordinate in the state data. For compact representations optimized for speed and memory, bitboards are employed in some implementations, where the board state is packed into 64-bit integers (or arrays thereof for the full 361 intersections), with each bit indicating the presence of a stone of a specific color. This allows bitwise operations for rapid neighbor detection, group connectivity via union-find structures, and simulation of moves, particularly useful in tactical reading or rollouts. Zobrist hashing provides another efficient method for transposition tables, generating a unique 64-bit (or larger) hash key by XORing precomputed random values for each stone position, color, and ko point; this enables quick detection of repeated positions without storing the full board. Bitboards and hashing are particularly valuable for handling the vast state space, reducing storage needs while supporting fast equality checks. In neural network-based systems, board states are input as multi-channel feature planes to capture richer contextual information beyond raw stone positions. For instance, the original AlphaGo used a 19×19×48 feature stack for the policy network, comprising planes for stone colors (3 planes), recent move history (8 planes for turns since last play), liberties (8 planes), potential captures (8 planes each for opponent capture and self-atari sizes), and specialized flags like ladder outcomes and legal-move sensibleness (5 planes total), all one-hot encoded relative to the current player. AlphaGo Zero simplified this to a 19×19×17 tensor, with 8 planes each encoding the current player's and opponent's stone positions over the eight most recent board states (zero-padded if necessary, to encode history and prevent repetitions), plus 1 plane for the player to move. These planes enable the network to process spatial patterns and temporal dynamics directly. Key challenges in board representation include handling the game's symmetries and enforcing rules like superko, which prohibits cycles beyond simple ko by banning any prior board position recurrence. Rotational (90°, 180°, 270°) and reflection symmetries (horizontal, vertical, diagonal) are often addressed by normalizing inputs or augmenting representations to reduce redundancy, though this increases computational overhead during evaluation. Superko enforcement typically relies on hashing the full state history or using stacked history planes to detect repeats, ensuring legal play without exhaustive storage of all past boards. These mechanisms are essential for maintaining game integrity in search algorithms like MCTS.
Modern open-source systems like KataGo extend these representations with additional feature planes, such as liberties of adjacent groups and potential capture indicators, totaling around 22 planes to capture more nuanced tactical information, as of 2023.
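
As an illustration of the hashing scheme described above, the following Python sketch (not taken from any particular engine) builds a Zobrist table for a 19×19 board and shows how placing or removing a stone updates the position hash with a single XOR.

```python
# Minimal sketch of Zobrist hashing for a 19x19 Go board: each
# (intersection, color) pair gets a fixed random 64-bit value, and the position
# hash is the XOR of the values for all occupied points, plus an optional value
# for the ko point. Names here are illustrative.
import random

SIZE = 19
EMPTY, BLACK, WHITE = 0, 1, 2

rng = random.Random(42)
ZOBRIST_STONE = [[rng.getrandbits(64) for _ in range(3)] for _ in range(SIZE * SIZE)]
ZOBRIST_KO = [rng.getrandbits(64) for _ in range(SIZE * SIZE)]

def position_hash(board, ko_point=None):
    """board: flat list of 361 values in {EMPTY, BLACK, WHITE}."""
    h = 0
    for point, color in enumerate(board):
        if color != EMPTY:
            h ^= ZOBRIST_STONE[point][color]
    if ko_point is not None:
        h ^= ZOBRIST_KO[ko_point]
    return h

def toggle_stone(h, point, color):
    # Placing or removing a stone is a single XOR, which is what makes Zobrist
    # hashing cheap to maintain incrementally inside playouts and tree search.
    return h ^ ZOBRIST_STONE[point][color]

board = [EMPTY] * (SIZE * SIZE)
h0 = position_hash(board)
h1 = toggle_stone(h0, 3 * SIZE + 3, BLACK)   # play a black stone at (3, 3)
h2 = toggle_stone(h1, 3 * SIZE + 3, BLACK)   # removing it restores the hash
assert h2 == h0
```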

Search and Decision Algorithms

Traditional search and decision algorithms in Computer Go have relied on deterministic tree search techniques adapted from classical game playing, focusing on exploring possible move sequences to select optimal actions. The foundational approach is the minimax algorithm, which recursively evaluates game positions by assuming the current player maximizes their score while the opponent minimizes it. In practice, searches are depth-limited due to computational constraints, terminating at a fixed depth where a static evaluation function assesses the board state based on factors like territorial control, influence, and connectivity. However, in Go, this results in shallow searches—typically 5-10 plies deep on standard hardware—failing to capture long-term strategic interactions, as the game's high branching factor (around 250 legal moves) leads to an enormous search space exceeding 10^170 positions. To enhance efficiency, alpha-beta pruning is integrated into minimax, maintaining lower (alpha) and upper (beta) bounds on position values to prune branches that cannot influence the root decision. This reduces the effective branching factor significantly in well-ordered trees, from b to approximately √b, where b is the branching factor, allowing deeper exploration in tactical subproblems like capturing groups or resolving ko fights. Despite these gains, alpha-beta remains inadequate for global Go strategy, as even optimized implementations in programs like GNU Go could only evaluate thousands of positions per second, limiting play to amateur levels around 10 kyu. Iterative deepening addresses time constraints by repeatedly performing depth-limited searches, incrementally increasing the depth limit until the allocated time expires, ensuring the best move at the deepest feasible level is always available. This method reuses computations from shallower iterations for move ordering, improving alpha-beta cutoff rates in subsequent deeper searches. Aspiration search, a variant, further refines this by using a narrow window around the previous best line (principal variation) to probe for better moves, widening only when necessary. In Computer Go, these techniques enable adaptive response to varying time controls in tournaments, though they still constrain overall depth due to Go's complexity. Transposition tables mitigate redundant computations by hashing board states to a table storing previously evaluated values, depths, and best moves, allowing reuse when the same position is reached via different move orders. Zobrist hashing, a standard method using random 64-bit keys for board features, ensures low collision rates and efficient updates for incremental changes like stone placements. In Go programs, these tables, often sized in gigabytes, prevent re-evaluating isomorphic positions during search, boosting speed by up to 50% in selective tactical reads, but memory demands and hashing collisions pose challenges for full-board global searches. The general value computation in these algorithms follows a recursive form: for the player to move, the value of a position is the maximum over legal moves of the negated value of the resulting position (the negamax formulation of minimax for zero-sum games), or, at leaf nodes, a static evaluation; formally, v(s) = \max_{a \in A(s)} -v(P(s, a)), where A(s) is the set of legal moves in state s and P(s, a) is the position reached by playing a. Some approximate formulations discount future values by a factor \beta < 1, though Go adaptations emphasize undiscounted terminal scoring. Alpha-beta bounds refine this recursion to prune suboptimal branches. These methods, while foundational, underscore Go's demand for selectivity and knowledge integration beyond brute-force exploration.
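
The following Python sketch combines the depth-limited negamax recursion, alpha-beta pruning, and Zobrist-keyed transposition table described above. The game-specific callables (legal_moves, play, evaluate, hash_of) are hypothetical placeholders a real engine would supply, and the table omits the exact/lower/upper-bound bookkeeping a production implementation would add.

```python
# Minimal sketch of depth-limited negamax with alpha-beta pruning and a
# transposition table. Only the search skeleton is shown; the Go-specific
# pieces are passed in as functions.

transposition_table = {}   # position hash -> (depth searched, value)

def negamax(state, depth, alpha, beta, evaluate, legal_moves, play, hash_of):
    key = hash_of(state)
    cached = transposition_table.get(key)
    if cached is not None and cached[0] >= depth:
        return cached[1]            # reuse an earlier, at-least-as-deep result

    moves = legal_moves(state)
    if depth == 0 or not moves:
        value = evaluate(state)     # static evaluation from the side to move
    else:
        value = float("-inf")
        for move in moves:
            child = play(state, move)                 # successor position
            score = -negamax(child, depth - 1, -beta, -alpha,
                             evaluate, legal_moves, play, hash_of)
            value = max(value, score)
            alpha = max(alpha, score)
            if alpha >= beta:       # beta cutoff: the opponent avoids this line
                break

    # A production table would also record whether the stored value is exact
    # or only an alpha/beta bound; this sketch stores it unconditionally.
    transposition_table[key] = (depth, value)
    return value
```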

Evolving AI Architectures

Traditional and Knowledge-Based Approaches

Traditional and knowledge-based approaches in computer Go dominated the field's early decades, relying on deterministic rules and expert-encoded heuristics to mimic human strategic intuition without probabilistic simulations or machine learning. These methods encoded Go-specific knowledge directly into programs through if-then rules and pattern matching, focusing on local tactics, positional evaluation, and predefined sequences to navigate the game's vast search space. By the 1980s and 1990s, such systems formed the core of competitive programs, emphasizing hand-crafted logic over broad search exploration. Rule-based evaluation was central to these approaches, assigning hand-coded scores to key board features like eyes (vital for group survival), thickness (for influence and connection), and cutting points (to sever opponent structures). Programs scanned the board for these elements using pattern templates, computing a static value by aggregating scores—e.g., positive points for secure eyes or thick shapes, negative for weak connections. Additionally, pattern databases stored thousands of joseki (standard opening sequences), enabling programs to recognize and suggest moves from memorized expert plays, often exceeding 1,000 entries to cover common corner and side developments. This scoring provided quick assessments but prioritized local safety over global strategy. Expert systems exemplified these techniques, with programs like NEMESIS (developed in the 1980s by Bruce Wilcox) employing extensive if-then rules for tactical decisions, such as capturing strings, defending links, or responding to threats. It integrated pattern recognition for shapes, dead groups, and joseki, using hierarchical rule application to prioritize urgent local maneuvers over long-term planning. Similar systems of the era encoded Go knowledge into production rules, drawing from expert analysis to handle tactics like ladder responses or simple ko fights. Despite their sophistication, these approaches proved limited by brittleness in novel positions, where unscripted configurations led to poor decisions due to the absence of matching rules. Achieving competence required an estimated 10^6 rules or more, as the combinatorial variety of Go positions demanded exhaustive coverage for reliable play, rendering maintenance and scaling impractical. Hybrid methods addressed some gaps by combining rule-based evaluation with tree search (alpha-beta pruning) for endgame solving, leveraging tsumego databases of precomputed life-and-death problems to resolve enclosed groups efficiently—e.g., GoTools integrated static rules with transposition tables for problems up to 14 empty points. These systems were eventually replaced by more adaptive methods in the mid-2000s.

Monte Carlo Tree Search

Monte Carlo Tree Search (MCTS) emerged as a pivotal algorithm in computer Go, enabling programs to navigate the game's vast search space through iterative simulations without relying on domain-specific heuristics for evaluation. Unlike traditional alpha-beta search, MCTS builds an asymmetric search tree incrementally, focusing computational effort on promising branches while using random playouts to estimate node values. This approach proved particularly effective for Go's high branching factor, typically around 200-300 legal moves per position, allowing programs to achieve strong performance on 19x19 boards by the late 2000s. The MCTS algorithm operates through four distinct phases repeated over multiple iterations until time expires for a move decision.
In the selection phase, starting from the root node representing the current board state, the algorithm traverses the existing tree by selecting child nodes according to a policy that balances exploitation of known good moves and exploration of uncertain ones. This is typically guided by the Upper Confidence bound applied to Trees (UCT) formula, which selects the action a maximizing Q(s,a) + C \sqrt{\frac{\ln N(s)}{N(s,a)}}, where Q(s,a) is the average value of simulations that played action a in state s, N(s,a) is the visit count for that state-action pair, N(s) is the total visits to state s, and C is an exploration constant. The traversal continues until reaching a leaf node that is either terminal or insufficiently explored. The expansion phase follows by adding one or more child nodes to the selected leaf, representing new possible moves from that position, thereby growing the search tree selectively toward areas of interest. In the simulation (or playout) phase, a random game is completed from the expanded node to a terminal state using lightweight, often uniform random move selection, though Go-specific heuristics like prioritizing captures can improve efficiency. The outcome—win (1) or loss (0) for the relevant player—is the raw value estimate. Finally, the backpropagation phase updates the statistics along the path from the simulated leaf back to the root, incrementing visit counts and accumulating the simulation value into the Q values for each node and action, typically using a simple average. After thousands to millions of iterations, the root's most-visited move is selected as the program's play. The UCT exploration constant C controls the trade-off between exploitation and exploration; a typical value of C ≈ 1.4 (derived from \sqrt{2} for rewards normalized to [0,1]) performs well in Go, though tuning via experimentation is common to adapt to specific hardware or time controls. To manage Go's high branching factor during selection and expansion, progressive widening limits the number of considered moves at a node, gradually increasing this limit (e.g., via k n(s)^d, where n(s) is the number of visits to s and k and d < 1 are parameters) as the node is visited more often, preventing the tree from expanding too broadly too soon. Key enhancements to basic MCTS address Go's strategic correlations across moves. All-Moves-As-First (AMAF) and its variant Rapid Action Value Estimation (RAVE) share statistics across simulations by tracking values for actions independently of their exact position in the move sequence, using a combined score like q(s, m) = (1 - \alpha) \hat{q}(s, m) + \alpha \hat{q}_{\text{RAVE}}(s, m), with \alpha decreasing as the direct visit count n(s, m) grows, accelerating learning for correlated moves in Go's global board interactions. Prior knowledge injection further refines MCTS by biasing initial Q values or simulation policies with expert-derived patterns, such as tactical shapes or opening books, integrated via weighted averages during node initialization to guide early tree growth without overriding simulation data. MCTS in Go incurs significant computational cost, with strong programs performing millions of simulations per move on multi-core hardware to achieve reliable evaluations, as each iteration involves a tree traversal and a full playout on a 19x19 board.
Parallelization strategies mitigate this: leaf parallelization runs independent simulations from the same leaf across threads; root parallelization maintains multiple independent trees merged at the end; and tree parallelization shares a single tree with locking mechanisms and virtual loss (temporarily penalizing branches already being explored) to diversify worker explorations and reduce contention. In modern Go AIs, neural network policy priors guide move selection within MCTS, enhancing efficiency beyond pure simulation-based methods.
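
The four phases and the UCT rule described above can be captured in a few dozen lines. The Python sketch below is illustrative only: it demonstrates the algorithm on a toy take-1-2-or-3 subtraction game rather than Go, since a Go version would additionally require legal-move generation, capture logic, and superko checks.

```python
# Minimal UCT-based MCTS sketch: selection, expansion, simulation,
# backpropagation. Toy game: players alternately take 1-3 stones from a pile;
# whoever takes the last stone wins.
import math
import random

def legal_moves(state):                      # state = stones remaining
    return [m for m in (1, 2, 3) if m <= state]

class Node:
    def __init__(self, state, to_move, parent=None, move=None):
        self.state, self.to_move = state, to_move
        self.parent, self.move = parent, move
        self.children = []
        self.untried = legal_moves(state)
        self.visits = 0
        self.wins = 0.0      # wins for the player who moved into this node

def uct_select(node, c=1.4):
    # Exploit average win rate, explore rarely visited children.
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(state, to_move, rng):
    # Random playout to the end of the game.
    while state > 0:
        state -= rng.choice(legal_moves(state))
        to_move = 1 - to_move
    return 1 - to_move       # the player who just moved took the last stone

def mcts(root_state, to_move, iterations=5000, rng=None):
    rng = rng or random.Random(0)
    root = Node(root_state, to_move)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes with UCT.
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. Expansion: add one child for an untried move.
        if node.untried:
            move = node.untried.pop(rng.randrange(len(node.untried)))
            node.children.append(Node(node.state - move, 1 - node.to_move, node, move))
            node = node.children[-1]
        # 3. Simulation: finish the game with random moves.
        winner = rollout(node.state, node.to_move, rng)
        # 4. Backpropagation: credit nodes whose incoming move was the winner's.
        while node is not None:
            node.visits += 1
            if winner != node.to_move:
                node.wins += 1
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts(10, to_move=0))   # expected: 2 (leaving a multiple of 4 is winning)
```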

Neural Network and Reinforcement Learning Systems

Modern architectures in computer Go primarily consist of policy networks and value networks, which together guide decision-making during gameplay. Policy networks, typically implemented using convolutional neural networks (CNNs) or residual networks (ResNets), take the current board state as input and output a probability distribution over possible moves. For a standard 19x19 Go board, this involves predicting probabilities for up to 361 legal positions (plus a pass), often via a softmax applied to the final layer. These networks enable the AI to approximate an optimal move-selection policy, improving upon earlier hand-crafted heuristics by learning directly from game data. Value networks complement policy networks by estimating the expected outcome of a position, outputting a scalar between -1 and 1 representing the probability of a win for the current player. In practice, this scalar win prediction is derived from the same shared trunk of convolutional layers as the policy network, followed by a dedicated value head. When integrated with search algorithms, these networks provide evaluations that prune unpromising branches and focus exploration on high-value moves, significantly enhancing overall performance. Reinforcement learning forms the core training paradigm for these networks, relying on self-play to generate training data without human supervision. The process involves policy iteration, where the AI plays games against versions of itself, using the outcomes to update both policy and value estimates. The training loss combines cross-entropy for policy improvement (matching improved move probabilities from search), mean squared error for value accuracy (comparing predicted win rates to actual game results), and L2 regularization to prevent overfitting: \mathcal{L} = (z - v)^2 - \pi^T \log p + c \|\theta\|^2. Here, z is the actual game outcome, v is the predicted value, \pi represents target policy probabilities from self-play search, p is the predicted policy, and \theta are the network parameters. This approach, exemplified by AlphaZero, achieves superhuman performance starting from random initialization, reaching levels competitive with top programs after approximately 9 hours of training on specialized hardware and fully surpassing them within 13 hours. In the 2020s, advancements have further refined these systems for greater efficiency and capability. Distributed actor-learner frameworks, as in KataGo, parallelize self-play across multiple GPUs to accelerate data generation and training, achieving competitive strength in 19 days on up to 28 V100 GPUs while incorporating auxiliary predictions like territory ownership for better generalization. Transformers have emerged to capture longer-range dependencies on the board, treating positions as sequences or image patches to model global context more effectively than pure CNNs, with vision transformer variants showing improved move prediction accuracy in experimental evaluations. Efficiency gains have also been pursued through techniques like model distillation, compressing larger networks into deployable versions while retaining much of their strength, though primarily explored in broader AI contexts adaptable to Go.
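
The sketch below, assuming PyTorch, shows a deliberately small shared-trunk policy/value network and the AlphaZero-style loss from the formula above. The layer sizes and the single training step on random tensors are illustrative stand-ins, not a production architecture or real self-play data.

```python
# Minimal PyTorch sketch of a shared-trunk policy/value network with an
# AlphaZero-style loss. Sizes are illustrative and far smaller than production
# networks such as AlphaGo Zero's or KataGo's.
import torch
import torch.nn as nn
import torch.nn.functional as F

BOARD = 19
PLANES = 17                                    # e.g. 8 + 8 history planes + side to move

class PolicyValueNet(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(PLANES, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        self.policy_head = nn.Sequential(
            nn.Conv2d(channels, 2, 1), nn.ReLU(), nn.Flatten(),
            nn.Linear(2 * BOARD * BOARD, BOARD * BOARD + 1),   # 361 moves + pass
        )
        self.value_head = nn.Sequential(
            nn.Conv2d(channels, 1, 1), nn.ReLU(), nn.Flatten(),
            nn.Linear(BOARD * BOARD, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh(),                       # scalar in [-1, 1]
        )

    def forward(self, x):
        h = self.trunk(x)
        return self.policy_head(h), self.value_head(h).squeeze(-1)

def alphazero_loss(policy_logits, value, target_pi, target_z):
    # (z - v)^2 - pi^T log p; L2 regularization is supplied via weight_decay.
    value_loss = F.mse_loss(value, target_z)
    policy_loss = -(target_pi * F.log_softmax(policy_logits, dim=1)).sum(dim=1).mean()
    return value_loss + policy_loss

# One illustrative training step on random data.
net = PolicyValueNet()
optim = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
x = torch.randn(8, PLANES, BOARD, BOARD)                        # encoded positions
pi = torch.softmax(torch.randn(8, BOARD * BOARD + 1), dim=1)    # search-derived targets
z = torch.empty(8).uniform_(-1, 1)                              # game outcomes
logits, v = net(x)
loss = alphazero_loss(logits, v, pi, z)
optim.zero_grad()
loss.backward()
optim.step()
```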

Notable Programs and Achievements

Pioneering and Modern Go AIs

One of the earliest notable open-source computer Go programs was GNU Go, an implementation developed under the GNU Project starting in 1999. It employed traditional search augmented with handcrafted pattern databases and tactical knowledge to evaluate board positions, making it accessible for hobbyists and researchers to study and modify. GNU Go's strengths lay in its portability across platforms and its role as a baseline for comparing later algorithms, though it remained at amateur kyu levels on full 19x19 boards due to the limitations of exhaustive search in Go's vast state space. In the 1980s and 1990s, commercial efforts advanced the field with programs like The Many Faces of Go, created by David Fotland beginning in 1981 and released commercially in the early 1990s. This Windows-based software integrated a robust playing engine with educational tools, including a joseki (opening patterns) tutor and fuseki (strategic openings) database, allowing users to learn while competing against the AI. Its unique features, such as selective search focusing on critical board areas and a database of over 20,000 professional games, positioned it as a versatile tool for both play and study, achieving strengths up to mid-dan amateur level in its later versions. The advent of Monte Carlo Tree Search (MCTS) in the mid-2000s marked a pivotal shift, with MoGo emerging in 2006 as the first program to apply Upper Confidence Bound for Trees (UCT), a bandit-based variant of MCTS, to Go. Developed by Sylvain Gelly and colleagues at INRIA, MoGo incorporated pattern-based modifications to UCT for better exploration, enabling it to reach 3-dan strength on 9x9 boards through efficient simulation of random playouts. Its strengths included rapid adaptation to opponents via randomized simulations and parallelization for faster computation, establishing MCTS as the dominant paradigm for computer Go. Building on this, Fuego, released in 2008 by an academic team at the University of Alberta led by Markus Enzenberger and Martin Müller, provided an open-source framework for board games with a focus on Go. Fuego's modular design separated search, evaluation, and game logic, facilitating experimentation with MCTS enhancements like Rapid Action Value Estimation (RAVE). Its key features encompassed support for the Go Text Protocol (GTP) and integration with external knowledge sources, yielding strong performance on smaller boards and serving as a foundation for subsequent research tools. The modern era began with DeepMind's proprietary AlphaGo in 2016, which combined deep neural networks for policy and value estimation with MCTS to achieve superhuman play on 19x19 boards. AlphaGo's innovative architecture allowed it to intuit strategic elements like territory control that eluded prior programs, relying on supervised learning from human games followed by reinforcement learning. Its successor, AlphaGo Zero (2017), eliminated human data entirely, learning solely through self-play to surpass the original version in efficiency and strength within days of training. Open-source alternatives proliferated post-AlphaGo, with Leela Zero launched in late 2017 by Gian-Carlo Pascutto as a faithful reimplementation of the AlphaGo Zero paradigm. This community-driven project uses distributed self-play to generate training data, featuring a deep residual network for move prediction without any encoded human knowledge. Leela Zero's strengths include its accessibility for global contributors and continuous improvement through crowdsourced games, reaching superhuman levels by 2019. Recent updates, such as enhanced network architectures in 2023–2024, have pushed its estimated rating above 3500 on benchmarks like the Fox Go Server, with larger networks exhibiting refined positional judgment.
As of 2025, Leela Zero continues to improve through ongoing distributed training. KataGo, introduced in 2019 by developer David J. Wu (lightvector), emphasizes computational efficiency in training and inference, allowing high-strength models to run on consumer hardware. Its unique features include advanced data-efficiency techniques, such as dynamic temperature scaling for diverse self-play and ownership-map predictions for better evaluation, enabling faster convergence than predecessors. KataGo's modular engine supports analysis tools like win-rate and score estimation, making it popular for online play and study. As of 2025, KataGo maintains its lead through continued distributed enhancements. Facebook AI's ELF OpenGo, released in 2018, offers an open-source reimplementation of AlphaGo Zero integrated into Facebook's ELF game-research platform. Developed by Yuandong Tian and team, it incorporates scalable distributed self-play training at massive scale, achieving superhuman performance verified by a 20–0 record against top professionals. ELF OpenGo's strengths lie in its reproducibility and extensibility for other games, providing utilities for parallel training and policy iteration. From 2023 to 2025, no major proprietary leaders have emerged, but community-driven bots based on Leela Zero and KataGo dominate online platforms like the Online Go Server (OGS) and KGS. These include customizable variants like KaTrain, which integrates KataGo for adjustable difficulty and real-time analysis, fostering widespread amateur and professional play without the resource demands of earlier proprietary systems.

Key Milestones and Matches

One of the earliest notable achievements in computer Go occurred in 2007, when the program MoGo, utilizing upper confidence tree (UCT) search, secured the first victories against professional human players on a 9x9 board. This milestone demonstrated early progress in handling smaller boards, where computational demands are lower than on the standard 19x19 grid, but still highlighted the gap to full-board mastery. The field advanced dramatically in 2016 with DeepMind's AlphaGo defeating the world champion Lee Sedol in a best-of-five match by a 4-1 score, marking the first time a computer program beat a top professional on a full 19x19 board without handicaps. The match, held in Seoul, captivated global attention; AlphaGo's innovative Move 37 in Game 2—a highly unconventional shoulder hit that commentators deemed unlikely (with a 1 in 10,000 probability under human play)—shifted the momentum and exemplified AI's capacity for creative, non-intuitive strategies beyond human patterns. This victory not only validated deep neural networks combined with tree search but also spurred widespread analysis of AI-played Go games, influencing professional training worldwide. In 2017, an upgraded version, AlphaGo Master, achieved an undefeated 60-0 record in online games against top professionals, including multiple wins over world number one Ke Jie. Later that year, DeepMind released AlphaGo Zero, which learned Go solely through self-play reinforcement learning without any human game data or domain-specific knowledge, surpassing the Lee Sedol version of AlphaGo after just three days of training on specialized TPU hardware and reaching its strongest form, beyond AlphaGo Master, after 40 days. AlphaGo Zero's Elo rating of 5,185 in internal evaluations underscored its transformative self-improvement, winning 100-0 against the prior AlphaGo version in a 100-game match. By 2019, open-source advancements like KataGo, an AlphaZero-inspired engine with enhanced training efficiency and larger neural networks, outperformed Leela Zero in key benchmarks such as win rates on standard test sets and computational resource utilization. KataGo's distributed training on volunteer hardware achieved higher playing strength, topping public server rankings like CGOS with ratings exceeding 3,800. In computer Go competitions, such programs continued to dominate; for instance, in the 2023 editions of major algorithmic tournaments, AI systems based on these engines secured top positions, reflecting the field's shift toward superhuman consistency. From 2024 onward, tools like Leela Zero and KataGo have become integral to professional training, with top players such as Shin Jinseo and Kim Jiseok crediting them for revealing strategic reasoning and improving decision-making in complex positions—Leela Zero's policy and value outputs, in particular, provided interpretable insights unlike earlier black-box AIs. No major formal human-AI challenges have occurred since 2017, as professionals now view such matches as unwinnable against modern engines, redirecting focus to collaborative analysis for skill enhancement.

Competitions and Evaluation

Tournament History

The earliest dedicated computer Go tournaments emerged in the 1980s, marking the transition from academic experiments to competitive events. The first North American Computer Go Championship took place in 1984, organized by Peter Langston, where Bruce Wilcox's program Nemesis emerged victorious on a 19x19 board. This event set a precedent for regional competitions, evolving into annual North American championships from 1988 to 2000, often held alongside the US Go Congress. Concurrently, the Ing Wei-Chi Educational Foundation sponsored the inaugural Ing Cup in 1985 in Taiwan, establishing the first international series with substantial prizes and drawing programs from Asia and the West; precursors included informal gatherings like the 1984 Acornsoft tournament in London. These early tournaments emphasized hand-crafted, knowledge-based engines, with events typically featuring 4-8 programs competing in round-robin formats under standard rules. By the 2000s, computer Go competitions standardized around global formats, coinciding with the rise of Monte Carlo Tree Search (MCTS) algorithms that revolutionized program performance. The World Computer Go Championship, integrated into the International Computer Games Association's Computer Olympiad, began for full 19x19 boards in 2000 in London, attracting top programs like Goemate and Many Faces of Go in its initial years. MCTS, introduced by Rémi Coulom in 2006, quickly dominated tournament strategies, enabling programs to simulate millions of random playouts for decision-making; by 2007, MCTS-based entries like Crazy Stone won the newly launched Computer Go UEC Cup in Tokyo, an annual event hosted by the University of Electro-Communications. KGS server tournaments also debuted in the mid-2000s, providing online platforms for frequent, accessible competitions that complemented in-person events and ran monthly until 2017. These developments shifted focus from hand-crafted heuristics to probabilistic search, with tournaments expanding to 10-20 entrants and incorporating time controls mimicking human play. The 2010s saw sustained growth in international events, with a post-AlphaGo pivot in 2016 toward evaluating neural network-enhanced AIs and contrasting open-source initiatives against proprietary systems. The Ing Cup series, which concluded around 2000 but influenced ongoing formats, gave way to specialized cups like the GLOBIS-AQZ program's participation in UEC events, highlighting Japanese advancements in hybrid MCTS-neural architectures. Major tournaments included the annual Computer Go UEC Cup and World Computer Go Championship, where programs like Zen secured multiple titles through 2015, but AlphaGo's victory over Lee Sedol prompted new emphases on transparency and accessibility. Post-AlphaGo, Chinese-hosted events such as the 2017 World AI Go Open in Ordos emphasized open-source replicas like Leela Zero (launched in late 2017), fostering community-driven progress over closed DeepMind technologies, while KGS tournaments continued as benchmarks for amateur and experimental bots until 2017. Formats remained handicap-free on standard boards, prioritizing raw strength over adjusted play. In the 2020s, computer Go tournaments have continued with a mix of in-person and online structures to test professional-level play without human intervention. The annual UEC Cup has persisted, with its 16th edition held in July 2024 and the 17th scheduled for November 2025, featuring leading open-source neural systems like KataGo.
Chinese competitions, including the 2024 World Artificial Intelligence Go Championship, have highlighted further advancements. This era reflects a shift toward handicap-free, full-board matches that mirror top human play, as seen in UEC Cup continuations and other events, where open-source neural systems have surpassed proprietary benchmarks by integrating deep reinforcement learning. Events now prioritize conceptual innovations, such as more efficient search and training techniques, over exhaustive listings of results, maintaining a focus on advancing general methods.

Standardization of Scoring and Benchmarks

In computer Go, scoring standardization addresses ambiguities inherent in human play to ensure consistent evaluation of AI performance. Traditional methods include area scoring, which counts a player's surrounded empty points plus their own stones on the board (common in Chinese rules), versus territory scoring, which counts surrounded empty points plus captured prisoners (used in Japanese rules). Area scoring simplifies automated computation by avoiding adjustments for stones placed in one's own territory, making it preferable for AI tournaments despite minor strategic differences from territory-based systems. Computers face unique challenges in rule enforcement, such as positional superko, which prohibits repeating any prior board position regardless of whose turn it is, to prevent infinite cycles that AIs might enter during search. This rule is strictly implemented via hash tables tracking board states, as loose enforcement could lead to non-terminating games. Pass-move handling is standardized: the game ends after two consecutive passes, after which programs must identify dead stones using protocols like GTP's final_status_list command to compute the final score accurately. Benchmarks for AI evaluation include standardized komi values, typically 6.5 points under Japanese rules (7.5 under Chinese rules) to compensate White for Black's first-move advantage, ensuring balanced win rates near 50% in self-play. Elo ratings, derived from win rates in tournaments, provide a relative strength measure; for instance, top programs like KataGo achieve ratings exceeding 5000 Elo against professional-level opponents on some scales. Test suites such as tsumego problems evaluate tactical reading, with benchmarks like the 119-problem set from TsumeGo Explorer assessing solvers' speed and accuracy on life-and-death scenarios. In the 2000s and 2010s, tournament formalization advanced with rulesets like those in the KGS Computer Go Tournaments (using area scoring and positional superko) and the UEC Cup (Japanese rules on 19x19 boards with 30-minute time controls). These ensured fair AI-vs-AI play without human intervention. By the 2020s, automated resignation thresholds became standard, where AIs resign if win probability drops below a set value (e.g., 20% in AlphaGo's matches), accelerating evaluations and mimicking human etiquette.
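
Two of the automation details mentioned above—positional-superko checking against a history of position hashes and an automated resignation threshold—are illustrated in the short Python sketch below; the hash value and threshold are illustrative placeholders rather than values from any particular ruleset or engine.

```python
# Illustrative sketch of positional-superko tracking and a resignation check.

class SuperkoTracker:
    def __init__(self):
        self.seen = set()

    def record(self, position_hash):
        self.seen.add(position_hash)

    def violates_superko(self, candidate_hash):
        # Under positional superko, a move is illegal if it recreates any
        # previously seen whole-board position, regardless of whose turn it is.
        return candidate_hash in self.seen

def should_resign(win_probability, threshold=0.20):
    # e.g. AlphaGo resigned when its estimated win probability fell below ~20%.
    return win_probability < threshold

tracker = SuperkoTracker()
tracker.record(0x1234ABCD)                    # hash of the current position
print(tracker.violates_superko(0x1234ABCD))   # True: the move would repeat it
print(should_resign(0.15))                    # True: below the 20% threshold
```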

Broader Impacts

Influence on General AI Research

Advancements in computer Go, particularly through DeepMind's AlphaZero, have profoundly shaped general artificial intelligence (AI) research by demonstrating the efficacy of self-play mechanisms, where agents improve by competing against versions of themselves without human data. This approach, starting from a blank slate with only the game's rules, enabled AlphaZero to achieve superhuman performance in Go, chess, and shogi through iterative self-play and policy updates. The method's success highlighted how self-play can generate diverse training data autonomously, accelerating learning in complex environments. This paradigm has been adapted to robotics, where it facilitates the acquisition of emergent skills in unstructured settings without predefined rewards or demonstrations. For instance, self-play frameworks enable robots to autonomously learn manipulation abilities, such as object grasping and stacking, by exploring physical interactions in simulation. In drug discovery, AlphaZero-style techniques inspire generative models that use self-play-like exploration to optimize molecular structures, iteratively refining candidates for properties like binding affinity through simulated evaluations and policy improvements. Monte Carlo Tree Search (MCTS), a cornerstone of early computer Go systems like Crazy Stone and MoGo, has extended beyond games to enhance planning in real-world domains requiring sequential decision-making under uncertainty. In logistics and scheduling, MCTS algorithms optimize routing and resource allocation by simulating multiple scenarios to evaluate trade-offs in operations. Similarly, in automated theorem proving, MCTS guides proof search by expanding promising proof paths in interactive systems, improving efficiency on complex mathematical problems such as those in the HOL Light prover. The neural architectures developed for Go, combining deep convolutional networks with value and policy heads, provided early evidence of scalable function approximation in high-dimensional state spaces, indirectly influencing the design of attention-based models. While AlphaGo relied on convolutions, its integration of neural guidance in search foreshadowed hybrid systems, and subsequent Go AIs incorporated attention mechanisms to better capture long-range board dependencies, paralleling developments in transformers for sequence modeling. This evolution underscored the value of attention-like focusing in neural search, contributing to broader adoption in architectures handling spatial and sequential data. Between 2023 and 2025, these Go-inspired techniques culminated in AlphaEvolve, a 2025 DeepMind system that extends self-improvement via evolutionary algorithms and large language models to autonomous algorithm discovery and optimization. AlphaEvolve iteratively generates, evaluates, and refines algorithms for tasks like matrix multiplication and scheduling problems, achieving novel solutions that outperform human-designed baselines in efficiency. By benchmarking self-improving agents on diverse domains, AlphaEvolve serves as a milestone for tracking progress toward artificial general intelligence (AGI), quantifying gains in autonomous problem-solving capabilities.

Applications Beyond Go

Techniques from computer Go, particularly deep neural networks and large-scale training pipelines, have been extended to protein structure prediction through AlphaFold, a system developed by DeepMind whose breakthrough second version was unveiled in 2020. AlphaFold employs deep neural networks—conceptually descended from the policy and value networks in AlphaGo—to model evolutionary relationships and spatial configurations of amino acids, achieving unprecedented accuracy in forecasting three-dimensional protein folds from sequences alone. This breakthrough earned its lead developers, Demis Hassabis and John Jumper, half of the 2024 Nobel Prize in Chemistry, recognizing the tool's transformative role in enabling rapid advancements in biology and medicine. By 2025, AlphaFold's database has predicted structures for over 200 million proteins, facilitating research in areas like disease mechanisms and drug design. In gaming domains beyond Go, AlphaZero demonstrated the versatility of self-play reinforcement learning by mastering chess and shogi from scratch in 2017, surpassing human champions and traditional engines through Monte Carlo tree search (MCTS) guided by a single neural network architecture. Building on this, MuZero extended the approach to environments with unknown dynamics such as Atari video games in 2019, learning optimal policies without prior knowledge of game rules by constructing latent models during planning. These systems achieved superhuman performance across diverse tasks, with MuZero scoring a mean normalized score of 124% across Atari benchmarks, highlighting the transferability of Go-inspired algorithms to strategic and exploratory decision-making. Real-world applications of MCTS from computer Go have emerged in optimization challenges, such as in smart grids and traffic control. In energy systems, MCTS optimizes real-time charging schedules for electric vehicles by simulating multiple future scenarios. For traffic management, MCTS-based controllers manage mixed human-autonomous flows at intersections, improving throughput by 3.5% and cutting fuel use by 6.5% in arterial networks. By 2025, these techniques have advanced quantum simulations, where MCTS samples quantum states in transformer-based models to solve the many-electron Schrödinger equation, enforcing conservation laws and enabling simulations beyond classical limits for molecular systems. Go AIs like KataGo serve as practical tutors for human players, including professionals, by providing detailed game analysis and strategic insights. KataGo's engine estimates win probabilities, identifies key moves, and reviews positions with high precision, allowing pros to refine opening theories and endgame tactics through tools like KaTrain, which integrates the engine for interactive training sessions. This has democratized access to elite-level feedback, with many top players incorporating reviews into daily practice to adapt to evolving styles post-AlphaGo.
