
KataGo

KataGo is a free and open-source computer Go program that employs techniques inspired by AlphaZero to play the strategic board game Go at superhuman levels. Developed by David J. Wu under the GitHub username lightvector, it was initially released in early 2019 following a seven-day training run on high-performance GPUs, marking a significant advancement in open-source Go AI. KataGo's core innovation lies in its accelerated training process, which incorporates enhancements such as improved training targets, auxiliary training heads for territory and score estimation, and efficient Monte Carlo tree search, enabling it to reach professional-strength play from scratch in days on modest hardware or superhuman performance in months on a single high-end GPU. The engine supports a wide range of board sizes from 7x7 to 19x19, multiple rule sets including Chinese and Japanese rules, variable komi values, and handicap games, all handled by a single neural network without human-provided game knowledge. Its trained networks are publicly available through ongoing distributed training efforts, which as of 2025 have amassed over 4.3 billion training examples from more than 87 million games contributed by over 1,300 users worldwide. Since its inception, KataGo has been supported by financial backing from Jane Street, allowing for extended training runs that have positioned it as one of the most powerful open-source Go engines, surpassing earlier programs like Leela Zero in efficiency and strength while providing tools for game analysis, score prediction, and human-like play simulation. It excels in practical applications, such as integrating with online platforms for player review and study, and has demonstrated the ability to solve complex historical Go problems, such as those in the classic Japanese text Igo Hatsuyōron, by identifying optimal moves and outcomes unattainable by human analysis alone. By 2025, recent versions like v1.16 incorporate specialized models for mimicking human moves across ranks from 20 kyu to 9 dan, enhancing its utility for teaching and study in the Go community.

Overview

Development history

KataGo was initiated in 2018 by David J. Wu, a researcher specializing in game-playing AI, as an open-source project aimed at advancing machine-learning techniques for the game of Go. Wu, who previously developed the Arimaa-playing program bot_Sharp, sought to build upon AlphaZero-style self-play methods by introducing optimizations tailored to Go, enabling more efficient training without reliance on proprietary systems like AlphaGo. The core motivation was to democratize access to high-performance Go AI through faster learning algorithms that reduced computational demands while maintaining or exceeding superhuman play strength. A pivotal milestone came with the publication of the preprint paper "Accelerating Self-Play Learning in Go" on February 27, 2019, which detailed innovative techniques such as global pooling, auxiliary ownership and score targets, and playout cap randomization, achieving a roughly 50-fold reduction in computation compared to prior methods. Concurrently, Wu launched the project's GitHub repository under the username lightvector, making the GTP engine and self-play learning code publicly available. Released under a permissive open-source license, KataGo encouraged widespread adoption and modification, fostering community-driven enhancements to the engine, GUI implementations, and integrations with Go-playing platforms. Over time, KataGo evolved through iterative releases, incorporating feedback from users and researchers to refine its architecture and performance. A significant update occurred in version 1.12.0, released on January 8, 2023, which transitioned the training backend from TensorFlow to PyTorch for improved efficiency and compatibility in distributed training processes. This shift, alongside a new nested residual bottleneck block design, marked a key advancement in the project's training pipeline, allowing for more rapid experimentation and stronger models without altering the core search mechanisms.

Release and adoption

KataGo's initial release, version 1.0, occurred on February 27, 2019, establishing it as a GTP-compatible engine capable of playing and analyzing Go games through deep-learning enhancements. This version laid the foundation for its open-source availability on GitHub, where it quickly gained traction among developers and Go enthusiasts for its efficient architecture and search algorithms. Subsequent major updates expanded its capabilities and accessibility. Version 1.15.0, released on July 19, 2024, introduced human-like imitation features via a model trained to predict moves across player ranks from 20 kyu to 9 dan. The stable release of version 1.16.3 followed on June 28, 2025, incorporating performance optimizations such as improved Metal backend support on Apple hardware and enhanced training data handling for larger board sizes. Version 1.16.4, released in October 2025, added experimental eval-caching and further bug fixes in distributed training. Precompiled binaries for Windows, Linux, and macOS are distributed through GitHub releases, while network weights are freely hosted on katagotraining.org to support user experimentation and integration. KataGo's adoption has grown significantly within Go communities and digital platforms. It serves as the default engine for game analysis on Online-Go.com (OGS), enabling users to review matches with AI feedback directly in the interface. Mobile applications, such as AI KataGo Go, have extended its reach to casual players; the iOS version launched on October 24, 2021, followed by the Android release on November 24, 2023, allowing offline play against the AI on smartphones and tablets. The project's community impact is bolstered by the volunteer-driven distributed training initiative "kata1," which commenced on November 28, 2020, and leverages global contributors to generate high-quality self-play training data.
This effort has enabled free access to advanced models, including the b28c512 network—featuring 28 residual blocks and 512 channels—released in May 2024, which marked a substantial strength gain over prior iterations through extended training on distributed resources.

Technical architecture

Neural network design

KataGo's neural network is a convolutional residual network (ResNet) utilizing pre-activation blocks, similar to those in AlphaGo Zero and AlphaZero but with tailored enhancements for Go. The base architecture employs a trunk of residual blocks, with model sizes varying in depth and width; for instance, smaller models like b6c128 feature 6 blocks with 128 channels, while larger ones such as b20c256 use 20 blocks with 256 channels, and recent variants like b18c384nbt incorporate 18 nested bottleneck blocks for efficiency. The network processes inputs tailored to Go's strategic depth, using 18 binary spatial feature planes for a standard 19x19 board. These planes encode current stone positions for both players (2 planes), stones with 1, 2, or 3 liberties (3 planes), moves illegal due to ko or superko (1 plane), locations of the last 5 moves (5 planes), stones ladderable 0, 1, or 2 turns ago (3 planes), moves that catch the opponent in a ladder (1 plane), and pass-alive areas for self and opponent (2 planes), along with a plane marking which locations are on the board. Additional global features include real-valued scalars for game length, ruleset parameters like ko rules and komi, and positional summaries such as the number of captured stones. This design provides the network with rich, game-specific context beyond raw board states, including indicators of self-atari risks to avoid suicidal moves. Outputs are generated via dedicated heads attached to the shared trunk. The policy head produces a 19x19 spatial map of move logits plus a separate logit for passing, yielding probabilities over all possible actions after softmax application. The value head outputs scalars estimating the win probability for the current player and the expected score difference. Complementing these is an ownership head, delivering a 19x19 heatmap where each value represents the predicted probability that the intersection will belong to the current player at the game's end, aiding in positional evaluation.
To enhance global awareness, the architecture integrates global pooling layers after convolutional stages, computing channel-wise statistics like means and maxima across the entire board for incorporation into subsequent layers. Domain-specific features, such as liberty counts and ladder outcomes, further distinguish KataGo from generic AlphaZero-style setups by accelerating convergence on Go's tactical nuances. Since version 1.12, the training framework has utilized PyTorch, facilitating flexible implementations that support variable board sizes from 7x7 to 19x19 without architectural changes. These policy and value outputs integrate with Monte Carlo tree search during gameplay.
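The global pooling structure described above can be illustrated with a minimal NumPy sketch. This is not KataGo's actual implementation—the function name, channel counts, and projection matrix are hypothetical—it only shows the core idea: reduce one branch of channels to board-wide statistics and feed them back as per-channel biases.

```python
import numpy as np

def global_pooling_bias(spatial, pooled, proj_weights):
    """Sketch of a global-pooling bias layer (illustrative only).

    spatial:      (Cs, H, W) feature maps to be biased.
    pooled:       (Cp, H, W) feature maps reduced to board-wide statistics.
    proj_weights: (Cs, 2*Cp) hypothetical learned projection matrix.
    """
    mean = pooled.mean(axis=(1, 2))        # (Cp,) channel-wise mean over the board
    mx = pooled.max(axis=(1, 2))           # (Cp,) channel-wise max over the board
    stats = np.concatenate([mean, mx])     # (2*Cp,) global summary vector
    bias = proj_weights @ stats            # (Cs,) one bias per spatial channel
    # broadcast the same global bias over every intersection
    return spatial + bias[:, None, None]

# toy example on a 19x19 board
rng = np.random.default_rng(0)
spatial = rng.normal(size=(4, 19, 19))
pooled = rng.normal(size=(2, 19, 19))
w = rng.normal(size=(4, 4))
out = global_pooling_bias(spatial, pooled, w)
```

Because the bias is constant across the board, every intersection in a channel receives the same board-wide signal (for example, a komi- or capture-dependent shift), which is how pooling injects global context into otherwise local convolutions.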

Search mechanism

KataGo employs an AlphaZero-style Monte Carlo tree search (MCTS) algorithm, where simulations are guided by a neural network's policy and value outputs to explore the game tree efficiently. The network provides prior probabilities for move selection, incorporated into the PUCT formula for balancing exploration and exploitation: \text{PUCT}(c) = V(c) + c_{\text{PUCT}} \cdot P(c) \cdot \frac{\sqrt{\sum_{c'} N(c')}}{1 + N(c)}, with c_{\text{PUCT}} = 1.1, where V(c) is the average value estimate of child c, P(c) its prior probability, and N(c) its visit count. Value estimates from the network back up through the tree to evaluate node utilities, enabling rapid assessment of positions without full rollouts. To enhance exploration, KataGo adds Dirichlet noise to the root node's policy priors, mixing 75% of the raw policy with 25% noise sampled from a Dirichlet distribution parameterized by \alpha = 0.03 \times 19^2 / N, where N is the number of legal moves. Virtual loss is applied during parallel searches to temporarily penalize nodes being explored by multiple threads, preventing over-visitation and promoting balanced tree growth. These mechanisms ensure robust sampling in the high-branching-factor positions typical of Go. KataGo extends standard MCTS with Monte Carlo Graph Search (MCGS), representing the search as a directed acyclic graph to merge transpositions—identical board states reached via different move sequences—reducing redundant computation. This is particularly efficient for handling ko fights and repeated subpositions, where traditional tree structures would duplicate subtrees, leading to exponential growth in memory and time. Additionally, during search, KataGo identifies and fills in dead groups using Benson's algorithm for pass-aliveness, pruning irrelevant branches and lowering the effective branching factor by excluding moves in unconditionally dead regions.
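The selection rule above can be sketched in a few lines of Python. This is a simplified illustration, not KataGo's implementation: the helper names are hypothetical, and only the PUCT score and the root noise mix (with the c_PUCT and 75/25 values quoted in the text) are shown.

```python
import math
import numpy as np

C_PUCT = 1.1  # exploration constant quoted above

def puct_score(value_avg, prior, child_visits, parent_visits):
    """PUCT(c) = V(c) + c_puct * P(c) * sqrt(sum_c' N(c')) / (1 + N(c))."""
    return value_avg + C_PUCT * prior * math.sqrt(parent_visits) / (1 + child_visits)

def select_child(values, priors, visits):
    """Pick the index of the child maximizing PUCT (arrays of equal length)."""
    parent_visits = sum(visits)
    scores = [puct_score(v, p, n, parent_visits)
              for v, p, n in zip(values, priors, visits)]
    return max(range(len(scores)), key=scores.__getitem__)

def root_priors_with_noise(priors, rng, alpha, mix=0.25):
    """Blend 75% raw policy with 25% Dirichlet noise at the root."""
    noise = rng.dirichlet([alpha] * len(priors))
    return (1 - mix) * np.asarray(priors) + mix * noise

# an unvisited child with a strong prior outranks a visited one with decent value
idx = select_child(values=[0.0, 0.5], priors=[0.9, 0.1], visits=[0, 10])  # -> 0
```

Note how the formula trades off the two terms: the visit count in the denominator shrinks the exploration bonus for already-visited children, so the search gradually shifts from prior-driven exploration to value-driven exploitation.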
The neural network's policy priors guide initial move proposals, while value estimates inform backups after each simulation; temperature scaling via softmax (e.g., a root temperature of 1.03) introduces controlled variability in early-game playouts, aiding diverse game generation and simulating human-like decision-making when configured. For move selection in gameplay, KataGo allocates a fixed number of visits per move, choosing the action with the highest visit count to balance accuracy and computational constraints. It also supports pondering, continuing background searches on the current position during the opponent's turn to anticipate responses and accelerate subsequent decisions. KataGo accommodates multiple rule sets, including Chinese, Japanese, and Tromp-Taylor, with support for variable komi values and board sizes from 7x7 to 19x19, ensuring compatibility across diverse competitive formats without altering the core search logic.

Training process

Initial self-play training

KataGo's initial training employs a pure reinforcement-learning approach, bootstrapping its neural networks from scratch without any prior human game data or supervision. The process begins with random play, where the network guides Monte Carlo tree search (MCTS) to simulate games on the 19×19 Go board. These self-play games generate training data consisting of board positions paired with move probabilities derived from MCTS-improved policies and game outcomes as value labels. Iteratively, a new network is trained on this data to predict better policies and values, which then informs subsequent self-play games, enabling progressive improvement toward superhuman strength. The key phases of this bootstrap training involve alternating between data generation and model updates. First, the current network, augmented with MCTS, plays thousands of games against itself, varying the number of simulations per move to focus computation on challenging positions and reduce variance in training targets. Positions from these recent games—typically the most recent 500,000—are then used to train a successor network, emphasizing samples with high policy divergence from the prior model to accelerate learning. This cycle repeats, scaling the architecture progressively (e.g., from 6 blocks and 96 channels to 20 blocks and 256 channels) to handle increasing complexity, culminating in a model capable of matching or exceeding prior state-of-the-art systems like ELF OpenGo. The initial full-strength run utilized 28 NVIDIA V100 GPUs over 19 days, generating 4.2 million self-play games and 241 million position samples to produce the first strong model (b20c256). This hardware setup equated to approximately 1.4 GPU-years of computation, a fraction of the resources required by earlier systems. The training incorporates domain-specific features, such as Go's ladder and liberty representations, which encode board state more efficiently than raw images, significantly reducing sample complexity compared to general-purpose architectures.
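The alternation between data generation and model updates described above can be sketched as a control-flow skeleton. The callables and toy stand-ins below are hypothetical placeholders for KataGo's actual self-play and training components; only the generate/train cycle and the bounded replay window are illustrated.

```python
from collections import deque

def self_play_training_loop(net, play_one_game, train_on,
                            n_iters, games_per_iter, window=500_000):
    """Sketch of the alternating generate/train cycle (illustrative only).

    play_one_game(net) -> (list of (position, mcts_policy), game_outcome)
    train_on(net, data) -> successor network trained on recent positions
    """
    replay = deque(maxlen=window)   # keep only the most recent positions
    for _ in range(n_iters):
        # 1) data generation: MCTS-guided self-play with the current net
        for _ in range(games_per_iter):
            positions, outcome = play_one_game(net)
            for pos, mcts_policy in positions:
                replay.append((pos, mcts_policy, outcome))
        # 2) model update: fit a successor net on the recent window
        net = train_on(net, list(replay))
    return net

# toy stand-ins, just to exercise the control flow
def toy_game(net):
    return [("pos", {"D4": 1.0})] * 10, +1    # 10 positions, Black wins

def toy_train(net, data):
    return net + 1                             # pretend each cycle yields a new model

final = self_play_training_loop(0, toy_game, toy_train,
                                n_iters=3, games_per_iter=2, window=50)  # -> 3
```

The bounded `deque` mirrors the sliding training window: as fresh, stronger self-play positions arrive, stale positions generated by weaker past networks fall out of the dataset.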
Central to the training are specialized loss functions that target the neural network's prediction heads. The primary policy loss uses cross-entropy to align the network's move probabilities with those improved by MCTS during self-play. The value loss employs mean squared error (MSE) between predicted win rates and actual game outcomes, scaled by a coefficient of 1.5 for emphasis. An auxiliary ownership loss, weighted at 1.5 divided by the board size squared, predicts territorial control per intersection to enhance evaluation, while additional auxiliary heads for score beliefs and opponent policy further refine predictions without dominating the main objectives. A small L2 regularization term (coefficient 3×10^{-5}) prevents overfitting. These components, combined with techniques like dynamic komi adjustment and playout cap randomization, enable rapid convergence. Efficiency gains from these innovations allow KataGo to reach high levels of play in just days on modest hardware, contrasting with AlphaZero's multi-week timelines on vastly more resources for similar milestones. Overall, the approach achieves a roughly 50-fold reduction in computational cost to surpass ELF OpenGo's performance, demonstrating the impact of tailored enhancements in self-play learning for Go. The neural network's value and policy heads, briefly, output scalar win probabilities and move distributions, respectively, directly supporting the MCTS integration during training.
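A minimal sketch of the combined objective follows, using the coefficients quoted above. This is an illustration, not KataGo's actual loss code: the auxiliary score and opponent-policy terms are omitted, and squared error stands in for the ownership term for simplicity.

```python
import numpy as np

def kata_loss(policy_pred, policy_target, value_pred, value_target,
              own_pred, own_target, weights, board_size=19):
    """Illustrative combined loss with the coefficients quoted in the text."""
    eps = 1e-9
    # policy: cross-entropy against the MCTS-improved move distribution
    policy_loss = -np.sum(policy_target * np.log(policy_pred + eps))
    # value: squared error against the game outcome, weighted 1.5
    value_loss = 1.5 * (value_pred - value_target) ** 2
    # ownership: per-intersection error, weighted 1.5 / board_size^2 so its
    # total contribution stays comparable across board sizes
    own_loss = (1.5 / board_size**2) * np.sum((own_pred - own_target) ** 2)
    # small L2 penalty on the network weights
    reg = 3e-5 * np.sum(weights ** 2)
    return policy_loss + value_loss + own_loss + reg
```

The 1.5 / b² scaling is the interesting design choice: it keeps the 361-term ownership head from swamping the single-scalar value target, so the auxiliary task regularizes the trunk without dominating the main objectives.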

Distributed and ongoing training

Following its initial training phase, KataGo's development has been sustained through the kata1 project, a community-driven distributed training initiative hosted at katagotraining.org. Launched on November 28, 2020, this effort leverages volunteer-contributed GPUs worldwide to generate self-play games and update neural networks, resuming from the g170 checkpoint achieved in June 2020. Participants run client software—such as the contribution mode integrated into KataGo version 1.16.2 (released June 4, 2025)—on their idle GPUs after creating an account on the platform. The process involves volunteers producing vast quantities of training data through distributed self-play, which is then uploaded and aggregated at a central server for processing. As of November 2025, the project has accumulated over 4.3 billion rows of training data from more than 87 million games contributed by 1,321 unique users, with recent activity including 2.1 million rows and 45,000 games uploaded in the prior 24 hours alone. This data fuels periodic releases of updated neural networks—totaling 908 models to date—with new versions generated frequently based on aggregated contributions, often resulting in stronger iterations without reliance on proprietary computing resources. Among recent advancements, the b28c512 architecture (28 residual blocks and 512 channels) represents a pinnacle of open-source models, with variants like kata1-b28c512nbt-s11803203328-d5553431682 released as recently as November 13, 2025, and the confidently rated strongest net kata1-b28c512nbt-adam-s11165M-d5387M from October 2025. Experimental runs have incorporated limited supervised learning from human games to predict moves across various player ranks and historical time periods, enhancing features like human-like play analysis, as seen in dedicated "extra networks" trained on large human datasets.
The infrastructure's scalability is evident in its ability to harness contributions from thousands of volunteer GPUs globally, enabling the production of networks that surpass earlier versions in strength through collective, decentralized compute power. This model supports ongoing enhancements, such as the April 2025 introduction of experimental action-value heads in version 1.16.0, which aim to improve search efficiency and adaptability for deployment on resource-constrained devices.

Features and applications

Analysis and visualization tools

KataGo provides robust interfaces for analysis and visualization, enabling integration with various graphical user interfaces (GUIs) to facilitate game review and strategic insight. Its core tools include a standard Go Text Protocol (GTP) implementation with extensions such as the kata-analyze command, which outputs winrate estimates, expected scores, ownership distributions, and policy probabilities for board positions. Additionally, a JSON-based analysis engine supports efficient batch analysis of multiple games or positions, making it suitable for backend services and automated evaluations. These interfaces allow seamless integration with popular GUIs like Sabaki, Lizzie, KaTrain, and others, where KataGo serves as the backend engine for generating winrate graphs, branch exploration (examining alternative move sequences), and suggestion modes that highlight recommended plays. Key visualizations derived from KataGo's outputs include ownership heatmaps, which estimate territorial control by displaying the probability that each intersection will belong to a specific player at the game's end, aiding in the assessment of positional advantages. Policy maps visualize the neural network's move probabilities across the board, helping users understand likely strategic focuses, while score leader graphs track the evolving lead in estimated final scores throughout a game, providing a dynamic view of momentum shifts. These outputs are configurable via settings files, such as gtp_example.cfg, allowing users to adjust parameters like search depth for balancing speed and thoroughness in analysis. In usage modes, KataGo supports post-game analysis of human or AI-played games by loading Smart Game Format (SGF) files and annotating them with winrate variations, ownership estimates, and best-move suggestions. Real-time hints can be enabled during live play, offering immediate feedback on move quality without disrupting the flow.
It also accommodates opening books—precomputed opening sequences for rapid evaluation—and puzzle modes for tactical exercises, where users can explore specific scenarios. A unique feature is multi-principal-variation (multi-PV) output, which generates and displays several alternative lines of play with their respective winrates and scores, enabling deeper exploration of branching possibilities. For broader accessibility, KataGo integrates with platforms like the Online Go Server (OGS), powering server-side analysis directly within web-based interfaces. Outputs can be exported to annotated SGF files, preserving visualizations and comments for archiving, sharing, or further study in compatible software. These tools collectively emphasize practical utility for players and analysts, leveraging KataGo's search computations to deliver interpretable insights without requiring advanced technical expertise.
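A batch query to the JSON analysis engine can be sketched as follows. The field names follow the engine's documented query format as I understand it, but versions differ, so treat this as a sketch and confirm the exact fields against the Analysis_Engine.md shipped with your KataGo release; the query id and move list are made-up examples.

```python
import json

def make_analysis_query(moves, turns, visits=100):
    """Build one JSON-lines query for KataGo's analysis engine
    (started via `katago analysis -config ... -model ...`)."""
    return json.dumps({
        "id": "review1",                 # arbitrary caller-chosen identifier
        "moves": moves,                  # e.g. [["B", "D4"], ["W", "Q16"]]
        "rules": "tromp-taylor",
        "komi": 7.5,
        "boardXSize": 19,
        "boardYSize": 19,
        "analyzeTurns": turns,           # which positions in the game to analyze
        "maxVisits": visits,             # search effort per position
        "includeOwnership": True,        # request the ownership heatmap
        "includePolicy": True,           # request raw policy probabilities
    })

query = make_analysis_query([["B", "D4"], ["W", "Q16"]], turns=[2])
```

Each such line written to the engine's stdin yields one JSON response per requested turn on stdout, which is what makes the interface convenient for backend services analyzing many games concurrently.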

Human-like play modes

In version 1.15.x of KataGo, released in July 2024, a new experimental feature was introduced to enable more human-like gameplay through the Human Supervised Learning (SL) Network, embodied in the model file b18c384nbt-humanv0.bin. This model is trained via supervised learning on a large dataset of human Go games dating back to the 1800s, covering players across a wide range of skill levels, allowing it to predict and imitate human moves across ranks and historical periods. The training focuses on capturing patterns from human play records, enabling configurable imitation of specific ranks ranging from 20 kyu to 9 dan or of eras such as 20th-century styles. Subsequent versions like v1.16 (April 2025) further refined these capabilities. The mechanism integrates the Human SL Network's policy predictions with KataGo's standard Monte Carlo tree search (MCTS), blending the human-derived policy head into move selection to reduce characteristic "AI alienness," such as overly optimistic plays that humans typically avoid. This blending occurs by weighting the human model's policy alongside the default policy, adjustable via configuration files like gtp_human5k_example.cfg, which simulates lower-rank human play without fully sacrificing the engine's underlying strength. As a result, the engine generates moves that align more closely with human intuition while maintaining strategic depth. Applications of this feature include creating realistic training opponents in Go applications and games; for instance, a July 2025 application update incorporated a human-trained opponent to provide more natural playing experiences. It also supports analysis of style deviations in game reviews, highlighting where a player's moves diverge from expected human patterns at a given rank or era. Despite these capabilities, the mode remains experimental and is not enabled by default, as the engine retains superhuman elements in evaluation and search that can only be partially adjusted for realism.
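The basic weighted-mixture idea behind this blending can be sketched in a few lines. The actual mechanism in KataGo is controlled by configuration parameters and is more involved; the function and weight below are hypothetical illustrations only.

```python
import numpy as np

def blend_policies(engine_policy, human_policy, human_weight=0.8):
    """Sketch of mixing a human-SL policy into the engine's move priors.

    engine_policy, human_policy: move-probability arrays of equal length.
    human_weight: fraction of the mix taken from the human model.
    """
    p = (1 - human_weight) * np.asarray(engine_policy) \
        + human_weight * np.asarray(human_policy)
    return p / p.sum()   # renormalize to a probability distribution

# an engine-favored move is demoted when the human model dislikes it:
# 0.2*[0.7, 0.2, 0.1] + 0.8*[0.1, 0.6, 0.3] = [0.22, 0.52, 0.26]
mixed = blend_policies([0.7, 0.2, 0.1], [0.1, 0.6, 0.3], human_weight=0.8)
```

Raising the human weight pulls move selection toward what a player of the configured rank would choose, while the engine's value estimates during search still veto outright losing moves—which is why the mode weakens play stylistically without collapsing entirely to the imitated rank.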

Performance and limitations

Competitive strength

KataGo has established itself as one of the strongest open-source Go engines. On internal evaluations, its top networks reach Elo ratings exceeding 14,000 on the project's self-play scale, with recent models like the b28c512nbt series demonstrating consistent gains of 200-300 points over prior iterations during distributed training. This positions KataGo at the top of open-source engines, surpassing competitors in strength per computational resource. In comparisons, KataGo outperforms Leela Zero in training efficiency, reaching superhuman levels with approximately 10 times fewer GPU-years than Leela Zero's multi-year effort, while matching or exceeding ELF OpenGo's performance after just 1.4 GPU-years on 27 V100 GPUs. It approaches the capabilities of AlphaZero using far fewer resources—about 50 times less computational cost overall—thanks to optimizations in training and search. A 2024 analysis against perfect-play solvers showed KataGo achieving near-optimal decisions in midgame positions, with success rates over 90% in scenarios using moderate search depths of 500 visits. KataGo's achievements include widespread adoption for player training, where it aids in game review and strategy development. It has dominated online tournaments on platforms like Online-Go.com, maintaining top rankings among AI participants in 2025 events, and demonstrated superiority over 9-dan professionals even under handicaps, such as in a 2025 match against Naoyuki Nakane where KataGo gave six stones and secured victory. Elo progression tracks reveal steady improvement, with networks gaining hundreds of Elo points per billion training samples, while human-AI matches confirm its edge, often winning more than 90% of games against top professionals without concessions. Hardware-scaled tests indicate KataGo runs 5-10 times faster than comparable engines on similar setups, enabling broader accessibility.
KataGo excels on the standard 19x19 board but demonstrates versatility across sizes, scaling effectively to smaller boards like 9x9 for variant play and to larger ones up to 50x50 through specialized training runs that preserve high performance. This adaptability stems from its board-size-agnostic convolutional architecture, allowing robust play without rule changes, as evidenced by dedicated networks trained on rectangular and oversized boards in 2025.

Adversarial vulnerabilities

KataGo, despite its superhuman performance in Go, exhibits significant vulnerabilities to adversarial inputs that can induce suboptimal decisions, such as early passes in otherwise winnable positions. A seminal 2022 study demonstrated that adversarial policies, trained specifically to exploit KataGo, achieve over 97% win rates against it even at superhuman search settings, by employing subtle board perturbations like stones scattered in empty areas or cyclic patterns that create illusory threats. These perturbations mislead KataGo's policy and value heads, causing the network to misestimate move quality and game outcomes, which in turn propagates errors through its Monte Carlo tree search (MCTS) by inflating false threats and directing exploration toward unpromising branches. A July 2024 news article in Nature highlighted how such patterns trick KataGo into prematurely passing, forfeiting games it would otherwise win convincingly. To address these flaws, KataGo's developers incorporated adversarial training starting in December 2022, adding hand-crafted positions from known cyclic-style attacks to the training dataset, which improved robustness in subsequent models like the December 2023 and May 2024 versions. However, this mitigation does not confer full immunity; for instance, a refined attack still defeats the December 2023 model with a 91% win rate at moderate search depths, while sophisticated cyclic attacks succeed 56% of the time against the May 2024 model even with extensive computation. Activating human-like play modes in KataGo may partially reduce vulnerability by constraining its policy to more intuitive moves, though this has not been rigorously quantified against adversarial policies. These vulnerabilities underscore the brittleness of AI systems in strategic games like Go, where even advanced architectures fail against targeted exploits that humans can often intuitively avoid or replicate.
The 2022 adversarial policies transfer zero-shot to other superhuman Go AIs, including Leela Zero and ELF OpenGo, revealing shared failure modes across MCTS-based systems. Recent 2024-2025 evaluations, including those from FAR.AI and AAAI proceedings, confirm that while basic attacks are increasingly mitigated, novel sophisticated variants—such as larger cyclic groups—continue to bypass defenses as of November 2025, emphasizing the ongoing challenge of achieving comprehensive robustness.

  31. [31]