
Computer Othello

Computer Othello is the branch of artificial intelligence and game programming focused on developing computer programs that play Othello, also known as Reversi, a deterministic, perfect-information strategy game for two players on an 8×8 grid. Players alternate turns placing discs of their color on empty squares, capturing opponent's discs by flanking them in straight lines (horizontally, vertically, or diagonally) between their new disc and an existing one of the same color; a move is legal only if it flips at least one opponent disc, and the game ends when the board is full or neither player can move, with victory going to the player controlling more discs. The game's combinatorial complexity, with roughly 10^28 possible board positions and 10^58 game sequences, has made it a testbed for research in search algorithms, evaluation functions, and machine learning techniques.

The history of computer Othello dates back to the late 1970s, with early programs emerging alongside the game's popularization after its modernization in 1971 by Goro Hasegawa. By 1980, programs like The Moor, developed by Mike Reeve and David Levy, achieved notable success by winning one game in a six-game match against human world champion Hiroshi Inoue, marking an initial milestone in human-computer competition. In the early 1980s, IAGO by Paul Rosenbloom advanced the field through superior evaluation of piece capture and board mobility, outperforming predecessors and dominating early computer tournaments. Further progress in the late 1980s came with BILL, created by Kai-Fu Lee and Sanjoy Mahajan at Carnegie Mellon University, which incorporated Bayesian learning to adapt evaluation functions dynamically and reliably defeated IAGO. The 1990s saw a leap with Michael Buro's Logistello, which refined alpha-beta search, transposition tables, and pattern-based evaluations through extensive self-play of over 100,000 games, leading to its decisive 6-0 victory over human world champion Takeshi Murakami in 1997 and establishing computer superiority in Othello. Subsequent programs like Hannibal and Zebra continued refinements, but research intensity declined after Logistello's retirement in 1998.

A major breakthrough occurred in 2023 when Japanese bioinformatician Hiroki Takizawa, using a modified version of the Edax program on Japan's MN-J cluster, weakly solved Othello by exhaustively searching on the order of 10^18 positions and proving that perfect play from the initial position results in a draw. This computation, leveraging alpha-beta pruning and endgame databases, confirmed the game's game-theoretic value and highlighted ongoing advances in computational power for solving complex games. Today, computer Othello influences broader domains, including machine learning applications like Othello-GPT models that internalize game mechanics.

Overview

Definition and Scope

Computer Othello refers to the development of hardware and software systems designed to play the game Othello, also known as Reversi, a two-player strategy game contested on an 8x8 grid using 64 reversible discs that are black on one side and white on the other. In the game, players alternate turns, with Black starting first; on each turn, a player places a disc of their color so that it brackets a straight line of the opponent's discs between the new disc and another of the player's own discs, thereby flipping all discs in that line to the player's color. The game concludes when the board is full or neither player can make a legal move, with the winner determined by the player having the majority of their colored discs facing up.

The field of computer Othello encompasses techniques for selecting strong moves, ranging from exhaustive search methods to heuristic evaluation functions that approximate the value of board positions without exploring every possibility. Unlike human play, which is limited by cognitive constraints and time, computer programs achieve superior performance through vast computational depth, enabling them to evaluate millions of potential game states per second and simulate outcomes far beyond human foresight. Othello's computational challenges arise from its immense state space, with an estimated 10^{28} possible legal board positions, rendering full enumeration infeasible on current hardware without algorithmic optimizations. This complexity, coupled with a game-tree size of approximately 10^{58}, positions computer Othello as a benchmark for research in perfect-information games, highlighting the need for efficient search and evaluation strategies to approach optimal play.
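The speeds mentioned above rest on compact board representations. The following is a minimal sketch, assuming a layout in which bit 0 is square a1 and bit 63 is h8, of how a position can be stored as two 64-bit integers (one per color) and how legal moves can be generated with shift-and-mask operations; the constants and helper names are illustrative rather than taken from any particular engine.

MASK = 0xFFFFFFFFFFFFFFFF          # keep results within 64 bits
NOT_A_FILE = 0xFEFEFEFEFEFEFEFE    # clears bits that wrapped onto the a-file
NOT_H_FILE = 0x7F7F7F7F7F7F7F7F    # clears bits that wrapped onto the h-file

# (shift amount, wrap mask) for the 8 directions; positive = shift toward h8
DIRECTIONS = [
    (8, MASK), (-8, MASK),              # north, south
    (1, NOT_A_FILE), (-1, NOT_H_FILE),  # east, west
    (9, NOT_A_FILE), (7, NOT_H_FILE),   # north-east, north-west
    (-7, NOT_A_FILE), (-9, NOT_H_FILE), # south-east, south-west
]

def shift(bb, n, mask):
    bb = (bb << n) & MASK if n > 0 else bb >> -n
    return bb & mask

def legal_moves(own, opp):
    """Bitboard of all empty squares where the side to move can legally play."""
    empty = ~(own | opp) & MASK
    moves = 0
    for n, mask in DIRECTIONS:
        # grow a run of opponent discs adjacent to an own disc along this direction
        run = shift(own, n, mask) & opp
        for _ in range(5):                      # at most six discs can be bracketed
            run |= shift(run, n, mask) & opp
        moves |= shift(run, n, mask) & empty    # the empty square just past the run
    return moves

# Standard initial position: White on d4 and e5, Black on e4 and d5.
black = (1 << 28) | (1 << 35)
white = (1 << 27) | (1 << 36)
print(bin(legal_moves(black, white)).count("1"))   # Black has 4 legal opening moves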

Historical Context

The development of computer Othello began in the late 1970s, paralleling the rise of programs for other board games like chess, which saw early successes such as Mac Hack VI in 1967. Othello itself was patented in 1971 by Japanese designer Goro Hasegawa as a modern variant of the 19th-century game Reversi. The first computerized version appeared as the arcade game Computer Othello, released by Nintendo in 1978, marking the initial foray into digital play. By 1979, the inaugural computer Othello tournaments were held, featuring rudimentary programs that relied on basic search techniques and were generally outperformed by skilled human players. A notable milestone in mainstream adoption came in 1985 with the inclusion of Reversi as a built-in game in Windows 1.0, introducing the game to a broad audience of users. Early software implementations from this era, such as those on systems like the Compucolor II in 1978, used simple heuristics and were limited by hardware constraints, often achieving only novice-level performance.

The 1990s marked a transition to strong play, driven by refinements in alpha-beta pruning and more sophisticated evaluation functions, elevating programs beyond human capabilities. This culminated in 1997 when Logistello, developed by Michael Buro at the NEC Research Institute, decisively defeated reigning human world champion Takeshi Murakami 6-0 in a formal match, demonstrating superhuman strength through deep, efficient search. An earlier milestone occurred in 1980, when the program The Moor won one game in a six-game match against human world champion Hiroshi Inoue. Recent advances have incorporated neural networks and reinforcement learning for enhanced strategy evaluation, building on evolutionary approaches from the 1990s to achieve even greater strength and efficiency. In 2023, Japanese researcher Hiroki Takizawa weakly solved the initial position of 8x8 Othello using a modified version of the Edax program on Japan's MN-J cluster, exhaustively searching approximately 1.5 × 10^{18} positions with alpha-beta pruning and proving that perfect play by both players results in a draw.

Program Availability

Downloadable Software

Several freely available Othello programs have been developed over the years, providing users and developers with robust tools for playing, analyzing, and experimenting with the game offline. Among the key options is NTest, created by Chris Welty, which emphasizes a high-quality evaluation function for accurate position assessment. NTest supports command-line operation or integration with the graphical interface NBoard, offering compatibility with 64-bit Windows, macOS, and Linux systems; a 32-bit version is also available for older hardware lacking popcnt instructions. Its source code can be downloaded from the official repository, enabling customization and study of its algorithms.

Another prominent program is Edax, developed by Richard Delorme, renowned for an efficient bitboard-based search that enables fast computation even on standard hardware. Edax features an accurate midgame evaluation function, opening book learning, and a text-based interface supporting multiple protocols for graphical frontends or online play integration, with multithreading for enhanced performance. It is compatible with 64-bit versions of Windows, Linux, and macOS, and binaries along with source code are freely downloadable from its releases page. This makes Edax suitable for users seeking high-speed analysis across various skill levels.

Logistello, authored by Michael Buro, serves as a classic benchmark in Othello programming history, having been one of the strongest programs of the 1990s thanks to innovative techniques like Multi-ProbCut search. Its source code, including an X-window GUI, was released publicly in 2002 and remains available for download from the developer's archived homepage, allowing historical study and adaptation. While dated, it supports basic play and evaluation on legacy systems, providing a foundational reference for developers.

These programs often include adjustable difficulty settings to accommodate beginners and experts, as well as built-in analysis tools for reviewing moves and positions. Community efforts further expand options through open-source repositories on GitHub, where developers contribute custom AIs implementing modern techniques like AlphaZero-style learning for Othello. Examples include implementations focused on reinforcement learning and minimax enhancements, fostering ongoing innovation in downloadable Othello software.

Online and Commercial Versions

Online platforms for computer Othello provide accessible ways for players to engage in matches against AI or human opponents without requiring software installation. eOthello, an online platform, allows users to play the game in real time against players worldwide, supporting both casual and competitive multiplayer modes. Similarly, Othello Quest operates as a free online server and mobile app, hosting one of the largest Othello communities with features for quick games and matchmaking against opponents ranging from beginners to experts.

Commercial versions of Othello software have appeared across various platforms, often under the name Reversi, incorporating AI opponents for single-player practice. Mobile apps such as Reversi by Livio on Google Play offer AI-driven gameplay with adjustable difficulty levels, enabling users to challenge computer opponents on an 8x8 board. On iOS, apps like Othello - The Official Game provide both AI battles and live multiplayer, optimized for touch interfaces with strategic depth. Historically, one of the earliest commercial releases was Atari's Othello cartridge for the Atari 2600 console in 1981, which pitted players against a basic AI within the system's limited hardware.

These platforms and apps commonly include multiplayer modes for human-versus-human play, AI difficulty scaling to suit novice to advanced users, and mobile optimizations such as portrait/landscape support and offline AI modes. For instance, Egaroucid integrates strong AI capabilities in its app version, allowing high-level practice against near-optimal opponents. Accessibility varies between browser-based options like eOthello, which require no downloads and run directly in web browsers for instant play, and app-based versions like Othello Quest, which involve store downloads but offer enhanced performance on mobile devices. This distinction enables broader reach, with web platforms favoring quick sessions and apps supporting persistent features like leaderboards and saved progress.

Core Playing Techniques

Search Algorithms

Search algorithms in computer Othello primarily revolve around exploring the game tree to select moves that maximize the player's advantage while anticipating the opponent's responses. The foundational approach is the minimax algorithm, which performs a recursive, depth-limited search through the game tree. At each maximizer node (the player's turn), it selects the move leading to the highest possible score returned by an evaluation function at leaf nodes; conversely, at minimizer nodes (the opponent's turn), it chooses the move yielding the lowest score for the player. This exhaustive exploration yields play that is optimal with respect to the evaluation function, but its computational demands limit depth in practice, given branching factors of up to roughly 33 legal moves in midgame Othello positions.

To mitigate the exponential growth of the search tree, alpha-beta pruning serves as a critical optimization to minimax, dramatically reducing the number of nodes evaluated without altering the result. It maintains two values during traversal: alpha (the best already explored option for the maximizer) and beta (the best for the minimizer). Branches are pruned when the current best for the maximizer is at least as good as the minimizer's best (i.e., when \alpha \geq \beta), eliminating subtrees that cannot influence the root decision. In programs like Logistello, alpha-beta pruning enables searches to depths of 10-14 plies, far beyond what unpruned minimax can reach, by exploiting careful move ordering.

For better time management in fixed-time scenarios, such as tournament play, iterative deepening combines repeated alpha-beta searches with progressively increasing depth limits. It begins with a shallow search (e.g., depth 4) to identify promising moves, then re-searches deeper (e.g., depth 6) using those moves for ordering, repeating until time expires. This yields a best move at the deepest achievable depth while providing move-ordering hints from prior iterations, enhancing subsequent pruning efficiency. Modern engines such as Edax employ iterative deepening alpha-beta to achieve consistent performance across varying hardware, often reaching effective depths of 12 plies in under a second.

Building on alpha-beta, Multi-ProbCut introduces probabilistic pruning to further accelerate selective searches by estimating cutoff probabilities via shallow sub-searches. For a deep search at depth d, it performs shallow searches at depths s_1, s_2, \dots, s_n (e.g., s = 2 for d = 8), using linear regression from historical data to map shallow values V_s to predicted deep values V_d \approx a \cdot V_s + b. Pruning occurs if the predicted value lies outside the current [\alpha, \beta] window with high probability, judged against a cut threshold k and the standard deviation \sigma of the prediction errors (for example, a fail-high cut is made when a \cdot V_s + b \geq \beta + k \cdot \sigma). Developed for Othello by Michael Buro, Multi-ProbCut reduced the number of nodes searched by up to 50% in Logistello while preserving playing strength, allowing deeper searches in complex midgame positions.

Parallel search algorithms distribute game-tree exploration across multiple processors to scale with hardware, addressing the limits of single-threaded search. Techniques like the Asynchronous Parallel Hierarchical Iterative Deepening (APHID) algorithm assign independent subtrees to worker processes, using a shared root search to coordinate work and a principal variation to guide move ordering. In experiments on networks of workstations, APHID achieved substantial parallel speedups for an Othello program, outperforming sequential alpha-beta in node throughput by roughly an order of magnitude. Similarly, the ABDADA distributed minimax algorithm coordinates the parallel exploration of sibling moves ("young brothers") through a shared transposition table across distributed systems, and boosted Othello program performance in 1990s computer tournaments such as the Waterloo and Paderborn events.
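As a concrete illustration of the sequential ideas above, the sketch below combines depth-limited negamax (an equivalent formulation of minimax), alpha-beta cutoffs, and iterative deepening. It is written against generic callbacks so that any move generator and evaluation function can drive it; the tiny explicit game tree in the demo is a hypothetical stand-in, not an Othello position, and a real Othello implementation would also need pass handling and move ordering between iterations.

import math
import time

def alphabeta(state, depth, alpha, beta, legal_moves, apply_move, evaluate):
    """Fail-soft negamax with alpha-beta cutoffs; returns (score, best_move)."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_score, best_move = -math.inf, None
    for move in moves:
        child = apply_move(state, move)
        score, _ = alphabeta(child, depth - 1, -beta, -alpha,
                             legal_moves, apply_move, evaluate)
        score = -score                      # negamax: flip sign for the opponent
        if score > best_score:
            best_score, best_move = score, move
        alpha = max(alpha, score)
        if alpha >= beta:                   # beta cutoff: the opponent avoids this line
            break
    return best_score, best_move

def iterative_deepening(state, legal_moves, apply_move, evaluate,
                        time_limit=1.0, max_depth=12):
    """Search depths 1, 2, 3, ... until the time budget runs out, keeping the
    result of the deepest completed iteration."""
    deadline = time.monotonic() + time_limit
    best = None
    for depth in range(1, max_depth + 1):
        best = alphabeta(state, depth, -math.inf, math.inf,
                         legal_moves, apply_move, evaluate)
        if time.monotonic() >= deadline:
            break
    return best

# Demo on a toy two-ply game tree; scores are from the mover's point of view.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
value = {"root": 0, "a": 1, "b": -1, "a1": -3, "a2": 2, "b1": 4, "b2": -5}
print(iterative_deepening("root",
                          legal_moves=lambda s: tree.get(s, []),
                          apply_move=lambda s, m: m,
                          evaluate=lambda s: value[s]))   # (-3, 'a')

Production engines extend this skeleton with the transposition tables, killer moves, and probabilistic cuts discussed elsewhere in this article, and reuse the best move of each completed iteration to order the next, deeper search.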

Evaluation Functions

Evaluation functions in computer Othello provide a heuristic estimate of a board position's value for a player, serving as a static assessment applied at leaf nodes of the search tree where full simulation is infeasible. These functions assign a numerical score, typically positive for advantageous positions and negative for disadvantageous ones, guiding the minimax or alpha-beta search toward promising moves. Early implementations relied on simple heuristics, evolving into more sophisticated combinations that capture strategic nuances like stability and mobility. Seminal work on BILL, a world-class program described in 1990, demonstrated how a refined evaluation function could rival human champions by integrating multiple features tuned via learning techniques.

Disk-square tables represent one of the foundational methods, assigning precomputed weights to each of the 64 squares based on their inherent strategic value. Corners receive the highest positive scores, often +100, as they can never be flipped and anchor stable edges; the diagonally adjacent "X-squares" are penalized heavily, such as -20 or -50, due to their vulnerability to opponent captures leading to corner losses. Edge squares might be valued at +20 for their relative stability, while central squares have lower or neutral weights reflecting higher risk. A typical full disk-square table used in early programs, symmetric across both axes, appears as follows (the weights apply to the player's own discs; the opponent's discs count negatively):
       a    b    c    d    e    f    g    h
  8  100  -20   10    8    8   10  -20  100
  7  -20  -50   -2   -2   -2   -2  -50  -20
  6   10   -2   -1   -1   -1   -1   -2   10
  5    8   -2   -1    0    0   -1   -2    8
  4    8   -2   -1    0    0   -1   -2    8
  3   10   -2   -1   -1   -1   -1   -2   10
  2  -20  -50   -2   -2   -2   -2  -50  -20
  1  100  -20   10    8    8   10  -20  100
The score is computed by summing the weights of squares occupied by the player's discs minus those of the opponent. This approach, while simple, captures positional priorities but overlooks dynamic factors like move sequences.

Mobility-based evaluation quantifies a player's freedom to act by counting the legal moves available to each side. The basic formula is \text{Score} = (M_{\text{own}} - M_{\text{opp}}) \times w, where M_{\text{own}} and M_{\text{opp}} are the numbers of legal moves and w is a weighting factor (often 10-20, to balance against other terms). Advanced variants, as in the BILL program, refine this by distinguishing current mobility (immediate legal moves, penalized for frontier flips via 38 precomputed line tables) and potential mobility (bonuses for empty squares adjacent to the opponent's internal discs, scaled by desirability, with large bonuses near corners). These adjustments prevent overvaluing risky expansions, improving accuracy in midgame positions. Sequence penalties further deduct points for long disc lines vulnerable to cuts, computed via lookup tables for efficiency.

Pattern-based evaluation identifies specific board motifs associated with advantage, such as unbroken edges or X-square threats. Coefficients are assigned to recognized patterns; for instance, an unbroken edge might yield +50 for stability, while an opponent's X-square occupation could subtract 30-60 points due to corner risk. In BILL, edge stability is evaluated using a 3^10-entry table covering all edge and X-square configurations, generated through probabilistic analysis and refined with move legality checks. This allows instant scoring of complex edge configurations (e.g., a configuration valued at 430.30 based on move probabilities like 92% for a 450-score outcome), emphasizing motifs like bonuses for internal discs adjacent to empty squares. Such patterns extend beyond individual squares to line-based heuristics, capturing tactical motifs efficiently.

Hybrid approaches integrate disk-square, mobility, and pattern features into a unified score, often using statistical methods for weighting. BILL employed Bayesian learning on 40 features (e.g., edge and mobility subcomponents) to derive a discriminant function:
g(\mathbf{x}) = \frac{1}{2}\left[(\mathbf{x} - \boldsymbol{\mu}_{\text{Loss}})^T \Sigma_{\text{Loss}}^{-1} (\mathbf{x} - \boldsymbol{\mu}_{\text{Loss}}) - (\mathbf{x} - \boldsymbol{\mu}_{\text{Win}})^T \Sigma_{\text{Win}}^{-1} (\mathbf{x} - \boldsymbol{\mu}_{\text{Win}})\right] + \frac{1}{2}\left[\log|\Sigma_{\text{Loss}}| - \log|\Sigma_{\text{Win}}|\right]
where \mathbf{x} is the feature vector and \boldsymbol{\mu} and \Sigma are means and covariances estimated from databases of won and lost positions, so that larger g favors a win. The win probability is then P(\text{Win}|\mathbf{x}) = \frac{e^{g(\mathbf{x})}}{e^{g(\mathbf{x})} + 1}, which is scaled to evaluation scores. Stability metrics (counts of unflippable discs) are incorporated for endgame tuning, with correlations such as 0.73 between potential and current mobility guiding feature selection. This combination enabled BILL to search 8.5 plies on average and to outperform prior programs by estimating outcomes more precisely than isolated heuristics.
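A minimal sketch of how the disk-square and mobility terms above can be combined into a single static score; the weights reuse the table shown earlier, while the mobility factor w, the example board, and the mobility counts are illustrative assumptions rather than values from any specific program.

WEIGHTS = [
    [100, -20, 10,  8,  8, 10, -20, 100],
    [-20, -50, -2, -2, -2, -2, -50, -20],
    [ 10,  -2, -1, -1, -1, -1,  -2,  10],
    [  8,  -2, -1,  0,  0, -1,  -2,   8],
    [  8,  -2, -1,  0,  0, -1,  -2,   8],
    [ 10,  -2, -1, -1, -1, -1,  -2,  10],
    [-20, -50, -2, -2, -2, -2, -50, -20],
    [100, -20, 10,  8,  8, 10, -20, 100],
]

def evaluate(board, own_mobility, opp_mobility, w=12):
    """board: 8x8 nested lists with +1 for the player's discs, -1 for the
    opponent's discs, and 0 for empty squares; the mobility counts are assumed
    to come from a move generator such as the one sketched in the Overview."""
    positional = sum(WEIGHTS[r][c] * board[r][c] for r in range(8) for c in range(8))
    return positional + (own_mobility - opp_mobility) * w

# Example: the player owns corner a1, the opponent sits on the adjacent X-square.
board = [[0] * 8 for _ in range(8)]
board[0][0], board[1][1] = 1, -1
print(evaluate(board, own_mobility=7, opp_mobility=5))   # 100 + 50 + 2*12 = 174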

Specialized Components

Opening Books

Opening books in computer Othello programs consist of precomputed databases containing sequences of moves for the early phase of the game, typically covering the first 10 to 20 moves, to provide optimal or near-optimal responses without requiring real-time computation. These books are constructed by analyzing large collections of high-level games, such as those from the International Othello Federation (IOF) or the Generic Game Server (GGS), and evaluating positions with deep searches to determine the best moves. Automatic construction methods, including best-first expansion strategies that prioritize promising positions and drop-out expansion to minimize opponent deviations from the book, have been applied to Othello to build these databases efficiently. The resulting books can contain hundreds of thousands to millions of entries, reflecting the breadth of early-game positions while focusing on common lines to manage storage.

During gameplay, opening books are consulted at the start of the game by hashing the current board position to check for matches; if a match is found, the program follows the precomputed move or sequence, often integrating with transposition tables to detect repeated positions across variations. This lookup occurs before initiating a full search, allowing a seamless transition to the program's search engine once the book is exhausted, as shown in the sketch below. Benefits include significantly reduced computational overhead in the opening, where search trees are broad and shallow, enabling faster and more consistently strong starts against both human and computer opponents; for instance, books ensure adherence to proven lines such as the perpendicular opening (e.g., f5-d6-c4), which contests the central squares early. Maintenance of opening books involves periodic updates by re-evaluating positions from new game databases or incorporating advances in game solving; these updates are often automated post-game, where deviations from existing lines are deeply searched and added to refine the database against evolving opponent strategies.
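A minimal sketch of the lookup side of such a book. Othello's initial position is preserved by exactly four board symmetries (identity, 180-degree rotation, and the two diagonal reflections), so a book can key its entries on a canonical form of the position and automatically cover mirrored variations; the empty book dictionary here is a placeholder, not a real database.

def rot180(board):                 # board: tuple of 8 row-tuples of ".", "B", "W"
    return tuple(row[::-1] for row in board[::-1])

def transpose(board):              # reflection over the a1-h8 diagonal
    return tuple(zip(*board))

def anti_transpose(board):         # reflection over the a8-h1 diagonal
    return transpose(rot180(board))

def canonical(board):
    """Lexicographically smallest of the four symmetric variants of a position."""
    return min(board, rot180(board), transpose(board), anti_transpose(board))

opening_book = {}                  # canonical position -> recommended move, e.g. "d6"

def book_move(board):
    return opening_book.get(canonical(board))

# The starting position and its diagonal mirror share a single book key.
rows = [["."] * 8 for _ in range(8)]
rows[3][3], rows[4][4] = "W", "W"
rows[3][4], rows[4][3] = "B", "B"
start = tuple(map(tuple, rows))
assert canonical(start) == canonical(transpose(start))
print(book_move(start))            # None: the placeholder book has no entries yet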

Endgame Databases

Endgame databases in computer Othello enable perfect play in late-game positions by precomputing exact outcomes through retrograde analysis, a technique that starts from terminal board states, where the winner is determined by the disc count, and propagates results backward to preceding positions. This method systematically classifies each reachable configuration according to the optimal outcome under perfect play from both sides, focusing on boards with a limited number of empty squares to constrain the state-space explosion. Seminal work on retrograde analysis for endgame computation was carried out by Ken Thompson, who demonstrated the approach on chess endgames, games with fixed termination rules comparable to Othello's.

Central to these databases are WLD (win/loss/draw) tables, which encode the game-theoretic value for every covered position and player to move, indicating whether the position is a forced win, loss, or draw assuming optimal responses. These tables are constructed layer by layer, with each iteration updating outcomes for positions one ply earlier by examining all legal successors and assigning the best achievable result (e.g., a position is a win if at least one move leads to a loss for the opponent); a toy version of this propagation is sketched below. For Othello, such tables are feasible only for positions with a modest number of empty squares, encompassing millions to billions of states depending on the exact cutoff, as the decreasing mobility in endgames reduces branching factors.

In practical usage within programs, endgame databases are probed during alpha-beta search when the number of empty squares falls below a predefined limit, instantly resolving the subtree and avoiding exhaustive exploration. This integration accelerates decision-making in the final 20-30 moves, where the search would otherwise have to run to the end of the game but is truncated upon database hits. Compression techniques, including 5-bit encoding schemes that pack WLD outcomes and minimal distance-to-outcome data into compact bitstreams, mitigate storage demands; for instance, hardware-accelerated solvers achieve efficient querying for up to 20 empty squares using optimized representations on limited resources like FPGAs. Despite these advances, limitations arise from the exponential growth in position count, which reaches approximately 10^{18} states around 30 empty squares, rendering full 8x8 coverage computationally prohibitive with current technology. Partial databases thus serve primarily as evaluation aids, providing exact values for late positions while heuristic evaluation functions handle earlier phases; this approach has informed broader solving efforts, where backward-propagation principles combined with massive parallel search confirmed Othello as a draw under perfect play.
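The following is a toy sketch of the layer-by-layer WLD propagation described above, applied to an explicitly listed game graph rather than to real Othello positions; the storage format and the tiny example graph are assumptions for illustration only.

WIN, LOSS, DRAW = "W", "L", "D"

def retrograde_wld(successors, terminal_value):
    """successors: position -> list of successor positions (the side to move alternates).
    terminal_value: terminal position -> WIN/LOSS/DRAW for the player to move there.
    Returns WLD values for every position whose successors are fully covered."""
    value = dict(terminal_value)
    changed = True
    while changed:                              # sweep until no position gets reclassified
        changed = False
        for pos, succs in successors.items():
            if pos in value or not all(s in value for s in succs):
                continue
            child_values = [value[s] for s in succs]   # each is from the opponent's viewpoint
            if LOSS in child_values:
                value[pos] = WIN                # some move leaves the opponent lost
            elif all(v == WIN for v in child_values):
                value[pos] = LOSS               # every move hands the opponent a win
            else:
                value[pos] = DRAW
            changed = True
    return value

# Hypothetical endgame fragment: from position "a" the mover can reach "b" or "c".
print(retrograde_wld({"a": ["b", "c"]}, {"b": WIN, "c": LOSS}))   # "a" is a WIN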

Performance Optimizations

Hardware and Parallel Computing

Hardware acceleration and parallel computing have significantly enhanced the performance of Othello programs by enabling faster exploration of the game's vast search space. Early implementations in the 1970s and 1980s relied on single-processor systems, where programs achieved modest speeds of around 1,100 nodes per second on VAX hardware. By the late 1990s, advanced programs such as Logistello still ran on single processors, reaching search speeds of approximately 160,000 nodes per second while defeating world champions through optimized alpha-beta search. These limitations contrast with later developments, where multi-core CPUs, GPUs, and cloud-based supercomputing allowed for dramatic speedups, often reaching millions to billions of nodes per second in aggregate.

Multi-core CPUs have been pivotal for distributing alpha-beta search workloads in Othello programs, allowing threads to explore independent branches of the game tree while sharing global bounds to prune ineffective searches. A key challenge in this parallelization is synchronization: threads must communicate alpha and beta cutoff values to avoid redundant computation and maintain search efficiency, often using techniques like the Young Brothers Wait Concept (YBWC) to coordinate task allocation and bound updates. The open-source program Edax exemplifies this approach, employing YBWC for parallel alpha-beta search across multiple cores; on a quad-core system, it delivers a 3.8-fold speedup over single-core execution, achieving up to 82.9 million nodes per second in version 4.1.

Graphics processing units (GPUs) enable massive parallelism for evaluating many game lines simultaneously, particularly in Monte Carlo tree search (MCTS) variants applied to Othello. By exploiting thread-level parallelism in tree expansion and playout simulation, GPU implementations can process thousands of playouts concurrently, optimizing memory access and workload distribution across GPU cores. For instance, a 2011 implementation of parallel MCTS on GPUs for Othello achieved up to 10 million nodes per second, demonstrating substantial gains over CPU-only methods for midgame evaluation.

In contrast to 1990s single-processor systems, modern research harnesses distributed supercomputing clusters for exhaustive searches, as seen in the 2023 computational solving of Othello. Using the MN-J supercluster, comprising thousands of CPU cores from Xeon and EPYC processors, researchers evaluated approximately 1.5 × 10^9 positions with 36 empty squares, searching a total of about 1.5 × 10^18 nodes at an effective rate of roughly 1.2 × 10^7 positions per GHz per core per second. This scale, unattainable on dedicated 1990s hardware, underscores the shift to parallel cluster resources for high-impact computations.

Recent advancements as of 2025 include highly parallel hardware accelerators, such as the HIPAS design for the NTest engine, which leverages scalable FPGA implementations to accelerate midgame and endgame evaluations using minimax search with alpha-beta pruning. Additionally, multimodal language models trained on Othello game data have shown improved performance and robustness when visual board input is integrated.

Algorithmic Enhancements

Transposition tables represent a key optimization in computer Othello programs, employing hash functions to store and retrieve evaluations of board positions encountered during search. By caching scores, search depths, and best moves for these positions, programs like Logistello and Zebra avoid recomputing identical or transposed states, which occur frequently because different move orders can reach the same position. This technique dramatically reduces the effective search space; in midgame searches, transposition tables can cut computation time by orders of magnitude while preserving search accuracy.

Killer moves and history heuristics further enhance move ordering to maximize the effectiveness of alpha-beta pruning in Othello's minimax framework. Killer moves prioritize branches that previously caused beta cutoffs at the same ply, maintaining a small list (typically two per depth) of such "killer" responses to opponent moves, a scheme used in early programs and refined in Logistello. Complementing this, history heuristics assign scores to moves based on their success in causing cutoffs across the game, accumulating evidence of promising patterns without position-specific storage. Together, these methods improve cutoff rates by 20-30% in typical searches, allowing deeper exploration of critical lines without increasing the overall number of nodes evaluated.

Quiescence-style extensions address the horizon effect in Othello by extending the search beyond the nominal depth limit in positions involving unresolved tactical threats. In Othello, where flips can dramatically alter board control, this involves continuing the search selectively for "noisy" moves, such as those flipping many discs, until a stable, quiet position is reached, often limited to one or two extra plies. Programs like Zebra incorporated such extensions to refine leaf evaluations, reducing tactical errors and improving playing strength by ensuring evaluations reflect post-capture outcomes rather than intermediate instabilities. Empirical tests show these extensions add 10-20% to search time but boost win rates against tuned opponents by avoiding shortsighted assessments.

The ProbCut method provides selective pruning of unpromising lines in Othello, using shallow searches to predict deep results within probabilistic bounds. Developed by Michael Buro, it performs a quick shallow search (e.g., to depth 4) to estimate a move's deep value via linear regression, then applies a cut threshold derived from game-specific error statistics to skip deeper analysis when the predicted score falls outside the alpha-beta window, reserving resources for the remaining lines. Integrated into Logistello, ProbCut prunes 50-70% of nodes in deeper searches (e.g., depth 8 and beyond), with prediction errors below 5%, yielding a 10-20% increase in effective search depth and contributing to the program's tournament victories in the 1990s. This approach exemplifies selective search tailored to Othello's evaluation volatility, distinct from hardware accelerations.
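A minimal sketch of the transposition-table part of these enhancements, using Zobrist hashing to key entries; the entry layout, the bound flags, and the values stored in the demo are illustrative assumptions rather than the formats used by Logistello or Zebra.

import random
from dataclasses import dataclass

random.seed(2023)
# one random 64-bit key per (square, color); a position's hash is the XOR of the
# keys of its occupied squares, so a single flip costs only two XOR updates
ZOBRIST = [[random.getrandbits(64) for _ in range(2)] for _ in range(64)]

def zobrist_hash(board):
    """board: length-64 list holding "B", "W", or None per square."""
    h = 0
    for square, disc in enumerate(board):
        if disc is not None:
            h ^= ZOBRIST[square][0 if disc == "B" else 1]
    return h

@dataclass
class TTEntry:
    depth: int        # search depth for which the score is valid
    score: int
    flag: str         # "EXACT", "LOWER" (fail-high), or "UPPER" (fail-low)
    best_move: int    # square index of the best move found

table = {}

def tt_probe(h, depth, alpha, beta):
    """Return a cached score usable at this depth and window, or None."""
    entry = table.get(h)
    if entry is None or entry.depth < depth:
        return None
    if entry.flag == "EXACT":
        return entry.score
    if entry.flag == "LOWER" and entry.score >= beta:
        return entry.score
    if entry.flag == "UPPER" and entry.score <= alpha:
        return entry.score
    return None

def tt_store(h, depth, score, flag, best_move):
    table[h] = TTEntry(depth, score, flag, best_move)

# Demo with made-up values: store a depth-8 result, then reuse it in a depth-6 search.
board = [None] * 64
board[27], board[36] = "W", "W"    # d4, e5
board[28], board[35] = "B", "B"    # e4, d5
h = zobrist_hash(board)
tt_store(h, depth=8, score=2, flag="EXACT", best_move=19)
print(tt_probe(h, depth=6, alpha=-5, beta=5))   # 2: the deeper cached result is reused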

Solving Othello

Smaller Board Variants

Smaller board variants of Othello, such as 4x4 and 6x6, have been exactly solved through computational methods, serving as proofs of concept for techniques applicable to larger boards like the standard 8x8 version. These results demonstrate the feasibility of exhaustive analysis in games with reduced state spaces, highlighting the second player's advantage under perfect play on these boards and providing benchmarks for solver efficiency.

The 4x4 variant is trivially solved, requiring less than one second of computation on modern hardware, with White (the second player) winning by 8 discs under perfect play; a full solver fits in a few dozen lines, as sketched below. This outcome follows from the limited board size, where the total number of positions is small, fewer than 10^5 after accounting for symmetries and legal constraints, allowing full enumeration without significant resources. Exhaustive backward analysis, which builds a database of all terminal positions and propagates values back through the game tree, confirms the result by classifying every reachable position as a win, loss, or draw for the player to move. The 4x4 variant was solved in the early days of computer Othello research.

In contrast, the 6x6 variant presents greater challenges due to its larger state space, but it was solved in approximately two weeks of computation using exhaustive search with alpha-beta pruning on 1990s hardware such as SGI workstations. Perfect play leads to a White win by 4 discs, as verified through search that enumerates and evaluates all relevant positions, overcoming key difficulties in position generation and storage. The process involves constructing endgame databases iteratively, starting from terminal states and resolving dependencies, which reveals strategic patterns transferable to larger boards. These smaller results, with the 6x6 solution obtained around 1993, served as essential testing grounds for optimization techniques later applied to 8x8 Othello solving.
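The sketch below is an illustrative negamax solver for the 4x4 game, not the historical program: it returns the perfect-play disc differential for Black (the first player), so the +8 White win cited above corresponds to a value of -8 here, and the initial disc orientation is chosen arbitrarily among the symmetric options.

from functools import lru_cache

N = 4
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def initial_board():
    b = [[0] * N for _ in range(N)]
    b[1][1] = b[2][2] = -1          # White
    b[1][2] = b[2][1] = 1           # Black
    return tuple(tuple(row) for row in b)

def flips(board, r, c, player):
    """Opponent discs that playing at (r, c) would flip; empty list if illegal."""
    if board[r][c] != 0:
        return []
    captured = []
    for dr, dc in DIRS:
        line, rr, cc = [], r + dr, c + dc
        while 0 <= rr < N and 0 <= cc < N and board[rr][cc] == -player:
            line.append((rr, cc))
            rr, cc = rr + dr, cc + dc
        if line and 0 <= rr < N and 0 <= cc < N and board[rr][cc] == player:
            captured.extend(line)
    return captured

def play(board, r, c, player):
    b = [list(row) for row in board]
    b[r][c] = player
    for rr, cc in flips(board, r, c, player):
        b[rr][cc] = player
    return tuple(tuple(row) for row in b)

@lru_cache(maxsize=None)
def solve(board, player, previous_passed):
    """Perfect-play disc differential for `player` (negamax over the full tree)."""
    moves = [(r, c) for r in range(N) for c in range(N) if flips(board, r, c, player)]
    if not moves:
        if previous_passed:          # both sides stuck: game over, count discs
            return player * sum(cell for row in board for cell in row)
        return -solve(board, -player, True)
    return max(-solve(play(board, r, c, player), -player, False) for r, c in moves)

print(solve(initial_board(), 1, False))   # Black's differential under perfect play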

Standard 8x8 Board

In 2023, bioinformatician Hiroki Takizawa announced a computational proof that the standard 8x8 Othello game is weakly solved, establishing that perfect play by both players from the initial position results in a draw. This milestone was detailed in a preprint where Takizawa modified the open-source Edax program to exhaustively analyze critical positions, confirming the game-theoretic value of the initial board as 0, meaning neither player can force a win under optimal play. The approach relied on a partitioned search framework, dividing the game tree into subsets of positions that could be independently solved using alpha-beta pruning with narrow search windows (such as [-3, +3] or [-1, +1]) and transposition tables for efficiency. Precomputed databases, including the WTHOR archive of over 61,000 professional games from 2001 to 2020, guided the selection of representative midgame positions (e.g., those with 50 empty squares), while subsets with 36 empty squares were fully enumerated and solved. Results from these solved subsets were then propagated up to the root position, verifying the draw without exploring the entire game tree.

Computations were performed on the CPU-based MN-J cluster at Preferred Networks, Inc., leveraging multi-core processors such as Xeon and EPYC chips for parallel evaluation. A primary challenge was the vast state space of approximately 10^{28} possible board positions, far exceeding that of solved games like checkers (about 10^{20}). Takizawa addressed this through symmetry reductions, treating mirrored or rotated positions as equivalent, and decomposition algorithms that isolated solvable clusters, such as enumerating 2,958,551 positions with 50 empty squares and solving 1,505,367,525 positions with 36 empty squares. Move ordering based on historical game data further pruned irrelevant branches, limiting the total search to around 1.5 \times 10^{18} nodes.

The outcome establishes that the first player cannot force a win against perfect opposition, resolving long-standing questions about Othello's balance and demonstrating the reach of current computational methods in weakly solving complex board games. The proof provides a foundation for constructing flawless Othello engines, as the solved subsets can be integrated into databases for real-time perfect play, though strongly solving the game (determining the optimal outcome from every legal position) remains computationally prohibitive.
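The role of the narrow windows can be illustrated with a short sketch: with a fail-soft alpha-beta solver, a single search over the window (-1, +1) either returns the exact value 0 or proves a one-sided bound. The alphabeta callable here is a stand-in for a full solver such as the modified Edax used in the proof; the dummy lambda in the demo exists only to make the snippet executable.

def certify(position, alphabeta, depth):
    """Classify a position using one narrow-window, fail-soft alpha-beta call."""
    value = alphabeta(position, depth, -1, +1)
    if value <= -1:
        return "perfect play scores at most -1 (a loss for the side to move)"
    if value >= +1:
        return "perfect play scores at least +1 (a win for the side to move)"
    return "exact value 0: the position is a draw"

print(certify("initial position", lambda pos, depth, alpha, beta: 0, depth=60))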

Historical Milestones

Early Developments

The earliest documented computer implementations of Othello emerged in 1977, marking the beginning of computational interest in the game. In April of that year, Martin Gardner's "Mathematical Games" column in Scientific American referenced the first known program, a Reversi variant written by N.J.D. Bailey in BCPL for the PDP-11 minicomputer. This implementation represented an initial foray into programming the game's flipping mechanics on available hardware, though details of its strategy remain limited. That October, BYTE magazine published "Othello, A New Ancient Game" by Richard O. Duda, featuring a BASIC type-in program for microcomputers that simulated the 8x8 board, piece placement, and flanking rules using simple heuristics such as maximizing captured pieces or favoring edge positions. These early efforts highlighted the game's suitability for personal computing amid the rising popularity of hobbyist systems like the Apple II and TRS-80.

By the 1980s, Othello programs advanced toward competitive viability, incorporating more sophisticated search and evaluation. A notable milestone occurred in 1980 when The Moor, written by Mike Reeve, David Levy, and Michael Stean, secured one victory in a six-game exhibition match against world champion Hiroshi Inoue, demonstrating that computers could occasionally outplay elite human players. This program employed enhanced evaluation functions to assess board positions, building on prior heuristics. Around the same time, Paul Rosenbloom's IAGO (1982) achieved consistent world-championship-level performance by integrating task-specific analysis with state-of-the-art search techniques, routinely defeating strong human opponents. IAGO evaluated positions using a function that weighted factors like mobility and stability, enabling it to compete effectively in tournaments.

Early algorithmic foundations relied on minimax search to explore move sequences and select plays, constrained by the computational limits of 1970s and early 1980s hardware. These implementations recursively evaluated game trees to a fixed depth, prioritizing moves that maximized the player's pieces while minimizing the opponent's, often supplemented by basic positional bonuses for corners and edges. Alpha-beta pruning, which eliminates irrelevant branches to accelerate search, was introduced in subsequent refinements but absent from the inaugural programs, limiting depth to around 4-6 plies on typical minicomputers. This approach nonetheless provided a foundation for adversarial planning in zero-sum games like Othello.

The decade also saw the organization of dedicated competitions to benchmark program strength. The inaugural computer Othello tournaments occurred in 1979, fostering rapid innovation among developers. A prominent event was the 1981 Santa Cruz Open Machine Othello Tournament, where some 20 programs competed; IAGO, running on a DEC KA-10, placed first, outperforming entrants on various platforms and underscoring the growing prowess of specialized game-playing programs. These events, often hosted alongside human championships, encouraged refinements in search efficiency and evaluation accuracy.

Mid-to-Late 1980s and 1990s Developments

Progress continued through the mid-1980s with programs incorporating advanced learning techniques. In the late 1980s, BILL, developed by Kai-Fu Lee and Sanjoy Mahajan at Carnegie Mellon University, introduced Bayesian learning to dynamically adapt evaluation functions based on databases of won and lost positions, reliably defeating IAGO and other contemporaries. This marked an early use of machine learning in Othello AI. Throughout the 1990s, programs such as Hannibal refined search algorithms and evaluation, but the field saw its major leap with Michael Buro's Logistello, which used extensive self-play (over 100,000 games) to tune pattern-based evaluations.

Modern Achievements

In 1997, the Othello program Logistello, developed by Michael Buro at the NEC Research Institute, achieved a landmark victory by defeating the human world champion Takeshi Murakami in a six-game match with a perfect score of 6-0. This event, held at the NEC Research Institute in Princeton, New Jersey, marked the first time a computer program decisively outperformed the reigning human champion, demonstrating the superiority of advanced alpha-beta search combined with Multi-ProbCut pruning techniques.

During the 2000s, programs like Zebra and its Windows variant WZebra served as key benchmarks for evaluating Othello AI strength, consistently placing in the top tiers of computer tournaments such as the GGS Open series, where Zebra secured multiple third-place finishes between 2002 and 2003. These programs utilized iterative deepening and transposition tables to achieve high search depths, influencing subsequent developments in evaluation functions. Concurrently, neural network experiments gained traction, with evolutionary algorithms training networks to discover strategies like mobility-based play; for instance, a 2003 study evolved multi-layer perceptrons that outperformed traditional heuristics in midgame scenarios.

In the 2010s, Edax and NTest exemplified state-of-the-art performance, with Edax's version 4.0 release in 2010 enabling it to evaluate positions at speeds exceeding 75 million nodes per second on quad-core processors, far surpassing human capabilities through pattern-based evaluations and parallel search. NTest, originally dominant by 2004, continued as an open-source benchmark into the decade, incorporating n-tuple-style pattern evaluations that allowed consistent wins against prior champions like Logistello. Parallel computing significantly boosted these programs, as seen in Edax's adoption of the Young Brothers Wait Concept for multi-threaded search, yielding up to 3.8-fold speedups on multi-core systems and enabling deeper exploration of complex midgames.

A pivotal 2023 achievement came when Hiroki Takizawa computationally proved that the standard 8x8 game results in a draw under perfect play, using modified alpha-beta search on a cluster of high-performance computers to verify the result from the initial position. Complementing this, Egaroucid emerged as one of the strongest neural network-based programs, employing neural networks for evaluation to achieve winning rates over 59% against Edax at high search levels and highlighting the efficacy of learned evaluations in approximating optimal strategies. As of 2025, research continues with hardware accelerators for engines like NTest, enabling scalable parallel evaluation of midgame and endgame positions, and with studies of language models like Othello-GPT for interpretability.

Notable Programs

Classic Programs

One of the pioneering programs in computer Othello was IAGO, developed by Paul S. Rosenbloom at Carnegie-Mellon University in the early 1980s. IAGO's design stemmed from a detailed analysis of the game, emphasizing key strategic elements such as piece stability, both on edges and internally, and mobility, which measures current and potential moves for both players. The evaluation function weighted these features with hand-crafted coefficients to approximate game outcomes, combined with standard search techniques like alpha-beta pruning, iterative deepening, and killer-move heuristics for efficient move ordering. This approach enabled IAGO to achieve world-championship-level performance in its era; it won the Santa Cruz Open Machine Othello Tournament in January 1981 with a perfect 8-0 record and placed fifth at the Northwestern Man-Machine Othello Tournament in June 1980, defeating several competing programs. IAGO's innovations in systematic task decomposition and feature-based evaluation laid foundational techniques for subsequent Othello AIs, demonstrating how targeted analysis could elevate program strength without exhaustive computation.

In the mid-1990s, Logistello emerged as a landmark achievement, authored by Michael Buro during his doctoral and subsequent research at Paderborn University and the NEC Research Institute, with development spanning from 1993 to 1997. The program's evaluation function innovatively combined mobility, assessing the number of legal moves available to each side, and pattern recognition, using logistic regression models trained on thousands of game records to predict win probabilities from board positions. This data-driven approach, augmented by selective ProbCut pruning and transposition tables in its alpha-beta search, allowed Logistello to explore deeper game trees than its predecessors. Its crowning accomplishment came in 1997, when it defeated the human world champion Takeshi Murakami 6-0 in a formal match, marking the first time a computer program conclusively bested a top human Othello player in an official setting. Logistello also dominated computer tournaments throughout the 1990s, with repeated first-place finishes in computer Othello events, and its techniques, such as automated opening book refinement through self-play, influenced evaluation methods in later programs.

Zebra, developed by Gunnar Andersson starting in June 1997, represented a significant evolution in heuristic-based engines, with WZebra as its later Windows graphical interface. The program's strength derived from an extensive opening book comprising over 500,000 positions drawn from games between top human players and from extensive simulations, enabling precise early-game decisions and reducing search overhead in critical phases. Zebra incorporated learning mechanisms to update its book and parameters after each game, using features like piece counts and mobility extensions along with endgame solvers optimized with bitboard representations for rapid computation. It achieved notable success in computer competitions, securing third place in the GGS Open tournament in April 2003 among 12 entrants and consistent top rankings in online leagues during the late 1990s and early 2000s. Zebra's emphasis on robust opening preparation and fast endgame solving inspired numerous derivative programs, and it remains a benchmark for traditional, non-neural engines.

Contemporary AI Systems

Edax, developed by Richard Delorme, remains one of the leading Othello programs in the 2020s, renowned for its exceptional speed and efficient parallel search capabilities. Implemented in C with bitboard representations and multi-threaded processing using the Young Brothers Wait Concept, Edax achieves high nodes-per-second rates on multi-core processors. Its pattern-based evaluation and selective search techniques, including Multi-ProbCut variants, enable deep midgame analysis, making it a standard benchmark for performance. A modified version of Edax was instrumental in the 2023 computational proof that perfect play on the standard 8x8 board results in a draw, demonstrating its prowess in exhaustive solving.

NTest, created by Chris Welty, stands out for its precise evaluation and rapid search, positioning it as a key tool for Othello analysis rather than competitive play. The program's midgame evaluator emphasizes strategic accuracy, outperforming many contemporaries in position assessment, and it supports both command-line operation and integration with graphical interfaces like NBoard for detailed game dissection. Widely used in analysis and research, NTest's open-source nature allows for hardware accelerations, such as parallel FPGA implementations that enhance its utility in exploring game trees.

Egaroucid, emerging in the early 2020s and developed by Takuto Yamana, represents a shift toward neural network-assisted evaluation, achieving world-leading strength through machine learning integration. Its evaluation function employs small-scale deep neural networks to compress and refine positional assessments, combining learned board-feature extraction with hybrid search methods. Optimized for multi-threading, Egaroucid supports mobile deployment while maintaining high performance, with benchmarks showing it outperforming Edax 4.6 in win rates (e.g., 59.6% at high search levels) and disc differentials across thousands of games from varied starting positions. The program's light version took first place in an international Othello AI contest in 2021, 2023, and 2025, underscoring its competitive edge.

Contemporary Othello AI trends emphasize hybrid architectures blending traditional search with learned evaluations, often incorporating data from solved positions to refine endgame precision. Open-source repositories on GitHub, such as those extending the Edax or Egaroucid frameworks, facilitate community-driven advances in these minimax plus neural network hybrids, enabling faster training on self-play datasets and broader accessibility for researchers. This integration has elevated program strength, with modern systems routinely achieving near-perfect play in benchmarks against human experts and legacy engines.

    Othello AI Egaroucid Free software with one of the strongest Othello AI in the world. The light version got the first place in the world.Missing: neural network