Horizon effect

The horizon effect is a well-known limitation of search algorithms, particularly those used for adversarial games like chess, in which a fixed-depth search horizon causes the system to overlook critical future events, such as threats or opportunities, that lie just beyond the evaluated depth, resulting in inaccurate state evaluations and suboptimal move selections. The term was coined by chess programmer Hans Berliner in 1973. The phenomenon arises primarily in minimax-based tree searches, where computational constraints limit the depth of exploration, creating an artificial "horizon" that masks drastic changes in game value occurring shortly after the cutoff.

In practice, the horizon effect manifests when an apparently promising position at the search boundary leads to disaster on the opponent's subsequent turn, because the algorithm fails to anticipate moves that delay but do not avert negative outcomes. For instance, a program might select a move that captures a piece, yielding a high immediate score, while ignoring an impending counterattack that becomes evident only one or two plies deeper. The issue is exacerbated in complex games with high branching factors, where exhaustive deeper searches are infeasible within practical time limits.

To address the horizon effect, AI systems employ mitigation strategies such as quiescence search, which selectively extends the search beyond the primary depth in unstable or "noisy" positions, such as those involving captures or checks, to resolve volatile evaluations more accurately. Other techniques include singular extensions, which probe deeper along particularly strong move lines to uncover hidden consequences, and move-ordering heuristics such as the killer move approach, which tries moves that recently forced cutoffs first so that refutations are found early. While these methods reduce the effect's impact, it remains a fundamental challenge in balancing search depth against computational cost, and it continues to influence the design of modern game-playing AIs.

Background and Definition

Core Concept

The horizon effect refers to a limitation of depth-limited search algorithms, such as depth-limited minimax, in which potential threats or opportunities that lie beyond the fixed search depth are overlooked, resulting in suboptimal position evaluations. The phenomenon arises in adversarial settings, particularly two-player zero-sum games like chess, where the algorithm explores the game tree only to a predetermined depth and terminates prematurely because of computational constraints, leading to inaccurate assessments of long-term consequences. The search horizon is this fixed depth limit, beyond which further branches are not explored, effectively creating a "blind spot" for events that unfold over additional moves. In practice, the horizon is imposed by time or resource budgets, forcing the algorithm to rely on a static evaluation function at leaf nodes to estimate the desirability of positions. For instance, in minimax with alpha-beta pruning, the search alternates between maximizing and minimizing layers to approximate optimal play, but the abrupt cutoff at the horizon can mask delayed developments, such as an opponent's inevitable capture or an opportunity pushed just out of view.

Key characteristics of the horizon effect include the masking of long-term consequences by static evaluations at non-terminal leaves, which often fail to account for unresolved tactical sequences. These evaluations typically score positions on heuristic features such as material balance or piece mobility, but in volatile scenarios, such as ongoing checks or captures, they may overestimate or underestimate risks if the critical resolution occurs beyond the horizon. This leads to decisions that appear rational within the limited view but prove erroneous on deeper analysis, with the algorithm favoring moves that temporarily delay harm without addressing its root cause.
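The following is a minimal Python sketch of this idea. The helper names `legal_moves(state)`, `apply(state, move)`, `is_terminal(state)`, and `evaluate(state)` are assumptions for illustration and do not come from any particular engine; everything past `depth == 0` is invisible to the search, which is exactly where the horizon effect originates.

```python
# Depth-limited minimax sketch. `legal_moves`, `apply`, `is_terminal`, and
# `evaluate` are hypothetical helpers, not part of any specific engine's API.

def minimax(state, depth, maximizing):
    # At the horizon (depth == 0) the position is scored statically, so any
    # capture, check, or threat that resolves one ply later is never seen.
    if depth == 0 or is_terminal(state):
        return evaluate(state)
    if maximizing:
        return max(minimax(apply(state, m), depth - 1, False)
                   for m in legal_moves(state))
    else:
        return min(minimax(apply(state, m), depth - 1, True)
                   for m in legal_moves(state))
```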

Historical Development

The concept of the horizon effect traces its roots to early explorations in computer chess programming during the mid-20th century. In his seminal 1950 paper, Claude Shannon outlined the challenges of implementing chess-playing algorithms on computing machines, emphasizing the limits on search depth imposed by the exponential branching of game trees. Such limits could leave positions incompletely evaluated when threats or opportunities lurked just beyond the reachable ply limit, observations that implicitly anticipated the horizon effect even though the term had not yet been coined.

The horizon effect was formally identified and analyzed in the 1970s amid the development of more sophisticated chess programs. Hans Berliner, a pioneer in AI game playing, provided the first detailed examination of the phenomenon in his 1974 technical report on the CAPS (Chess as Problem Solving) system, describing it as a critical flaw arising from fixed-depth searches combined with quiescence procedures, which caused programs to make suboptimal moves by delaying or prematurely assessing inevitable outcomes. Berliner illustrated both negative and positive variants of the effect with example chess positions, underscoring its impact on evaluation accuracy and program performance.

This recognition influenced the evolution of AI search strategies throughout the late 20th century. In the 1980s, researchers advanced selective search methods, such as null-move pruning and pattern-based extensions, to probe deeper into promising branches and alleviate horizon-induced errors in resource-constrained environments. By the 1990s, the issue persisted in high-profile systems such as IBM's Deep Blue, which defeated Garry Kasparov in 1997 despite its massive computational power and its combination of brute-force and selective search architectures. The horizon effect remains relevant in contemporary AI, particularly in hybrid systems that integrate neural networks with traditional search. Modern engines inspired by AlphaZero employ neural value networks to approximate evaluations beyond explicit search horizons, yet their search components can still exhibit horizon limitations in complex, long-term planning scenarios, as explored in recent comparisons of traditional and neural paradigms.

Causes and Characteristics

Search Depth Limitations

The horizon effect arises primarily from the computational constraints imposed by the exponential growth of search trees in games with high branching factors. In chess, the average branching factor is approximately 35 legal moves per position, leading to an explosive increase in the number of nodes to evaluate as depth increases. For instance, searching to a depth of 6 plies requires evaluating roughly $35^6 \approx 1.8 \times 10^9$ nodes in a full minimax tree, which exceeds practical limits on most hardware without optimizations. This exponential scaling, with the number of nodes explored being $b^d$ for branching factor $b$ and depth $d$, fundamentally restricts the search horizon to shallow depths, typically 6 to 10 plies in early chess programs.

Time and hardware constraints further exacerbate these limitations, since search budgets are finite. In tournament settings, players have roughly two to three minutes per move under classical time controls (for example, 90 minutes for the first 40 moves), translating to comparable computational budgets for systems aiming to match human decision times. Even with modern processors capable of evaluating millions of positions per second, memory and processing speed cap the effective depth; IBM's Deep Blue, for example, typically searched to a full-width depth of about 12 plies in a three-minute window, reached average depths of 17 to 18 plies with selective deepening, and extended up to 40 plies in critical lines, yet remained bounded by the $b^d$ node explosion. These limits prevent exhaustive exploration, forcing reliance on approximations that can mask threats or opportunities just beyond the horizon.

A key pitfall stems from the use of static evaluation functions at the search horizon, which approximate a position's value without further lookahead and often produce biased assessments. These functions, typically linear combinations of material and positional features (e.g., piece values and mobility), assume quiescence, that is, stable board states without ongoing captures or checks. When applied to volatile leaves, they ignore future developments such as transpositions (reaching the same position via different move sequences) or apparently quiet positions where threats materialize slowly. The result is over-optimism, because postponable losses appear less costly than they are, or over-pessimism in defensive scenarios, distorting the backed-up minimax value and promoting suboptimal moves that merely push inevitable penalties beyond the depth limit.

Algorithmic optimizations like alpha-beta pruning mitigate some of this growth but introduce trade-offs that still constrain depth, particularly in unbalanced trees. By maintaining alpha and beta bounds to prune irrelevant branches, alpha-beta reduces the effective branching factor from $b$ to approximately $\sqrt{b}$ under optimal move ordering, enabling deeper searches; for chess this drops from 35 to about 6, allowing 12 to 14 plies instead of 6 to 8 in the same time. However, in unbalanced game trees where branch widths vary (e.g., due to tactical explosions), pruning efficiency diminishes: poor move ordering yields fewer cutoffs and exacerbates the horizon effect by unevenly allocating computational resources across the tree.
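A quick back-of-envelope calculation makes the scaling concrete. The sketch below assumes the commonly quoted average branching factor of 35 and the idealized result that perfectly ordered alpha-beta visits roughly $b^{d/2}$ nodes; both numbers are approximations, not measurements of any specific engine.

```python
# Node-count estimates for fixed-depth search with branching factor b = 35.
# Plain minimax visits about b**d nodes; perfectly ordered alpha-beta visits
# roughly b**(d/2), i.e. an effective branching factor near sqrt(b) ~= 6.
import math

b = 35
for d in (4, 6, 8, 10, 12):
    minimax_nodes = b ** d
    alphabeta_nodes = math.sqrt(b) ** d
    print(f"depth {d:2d}: minimax ~{minimax_nodes:.1e}, "
          f"alpha-beta (best case) ~{alphabeta_nodes:.1e}")
```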

Manifestations in Game Trees

The horizon effect manifests in game trees as a distortion in which limited search depth causes the AI to misjudge threats or opportunities that lie just beyond the evaluation horizon, leading to suboptimal move selections in minimax-based searches. For instance, consider a scenario where the opponent can force a piece sacrifice that yields a significant material gain in d+2 moves; at a search depth of d, the tree appears to show the position as safe or even favorable, since the impending loss is not visible, prompting the AI to choose moves that delay rather than counter the threat. This distortion arises because nodes at the horizon are evaluated statically, ignoring continuations that extend further and alter the position's true value.

The horizon effect can lead to errors such as overlooking opportunities for gain, like failing to pursue a winning combination that materializes beyond the search depth, resulting in conservative play that misses strategic advantages. It can equally lead to ignoring dangers, where the AI selects moves that temporarily avoid or postpone threats, such as forks or pins deferred by forcing sequences, underestimating the opponent's ability to exploit them later and often conceding material unnecessarily. These errors stem from the uniform depth limitation of the tree, where branches representing delaying tactics mask the underlying positional weaknesses.

The severity of the horizon effect intensifies in non-quiescent positions, where ongoing tactical turbulence, such as unresolved captures or checks, continues beyond the horizon and renders static evaluations unreliable. Quiescent positions, by contrast, are relatively stable, with nothing en prise and no immediate threats, allowing accurate assessment without further search; in turbulent scenarios the effect worsens because the tree fails to resolve capture sequences, potentially overvaluing transient material gains that evaporate deeper in the tree. To address this, searches often require extensions focused on resolving captures until quiescence is achieved, preventing distorted evaluations in volatile subtrees.

Ultimately, these manifestations propagate errors through the backup process: incorrect leaf scores from horizon-limited nodes skew the min/max values up the tree, favoring flawed paths over superior ones. For example, a position evaluated as winning at depth d because of an unseen counterplay at d+1 will back up a misleadingly high score, causing the search to select moves that appear optimal but lose under full-depth analysis. This propagation undermines the reliability of the root node's choice, highlighting the horizon effect's role in compromising overall strategic accuracy in game-playing systems.
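The toy example below, with entirely invented scores and move names, shows the mechanism in miniature: a "spite check" pushes an unavoidable material loss from two plies to four plies deep, so a depth-2 search prefers the delaying move even though both lines lose the same material.

```python
# Toy game tree with invented scores (from the searching side's perspective).
# Each node carries a static "eval" and an optional dict of replies; minimax
# backs up leaf or horizon scores, so anything past the depth limit is unseen.

def search(node, depth, maximizing):
    children = node.get("children")
    if depth == 0 or not children:
        return node["eval"]                     # static score at the horizon
    values = [search(c, depth - 1, not maximizing) for c in children.values()]
    return max(values) if maximizing else min(values)

loss = {"eval": -3}                             # the unavoidable loss of a piece
root = {"eval": 0, "children": {
    # Facing the threat: the loss is resolved within two plies.
    "face_the_threat": {"eval": 0, "children": {"opponent_captures": loss}},
    # A spite check: king moves, we waste a move, and the same capture lands
    # on ply four, just past a depth-2 horizon.
    "spite_check": {"eval": 0, "children": {
        "king_moves": {"eval": 0, "children": {
            "pointless_move": {"eval": 0, "children": {
                "opponent_captures": loss}}}}}},
}}

for depth in (2, 4):
    scores = {move: search(child, depth - 1, False)
              for move, child in root["children"].items()}
    print(f"depth {depth}: {scores}")
# Depth 2 prefers "spite_check" (score 0); depth 4 reveals both lines lose -3.
```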

Examples and Illustrations

Chess Scenarios

One illustrative example of the horizon effect in chess programming involves positions where the opponent can make delaying moves that push critical events beyond the search horizon. As described in the standard literature, consider an endgame position in which a program's fixed-depth search misses a queening combination because the opponent stalls with checks, extending the sequence beyond the evaluation depth (e.g., 14 plies in one analyzed position), leading the program to misjudge the outcome as a draw or loss when it is actually a win. In tactical positions, the horizon effect can also cause a program to overlook sacrifices that lead to material gain just beyond the search depth. Conversely, a program might delay an inevitable loss by interposing delaying moves, throwing away additional material in an 8-ply sequence that hides the net loss and makes the position look safer than it really is.

Historical analyses of early chess programs highlight these issues. Mac Hack VI, developed in the late 1960s at MIT, used a shallow search depth of around four to six plies and exhibited horizon-effect errors, such as delaying inevitable piece losses with unnecessary checks, which weakened its overall play. To combat this, it employed extensions such as "crossovers" for attacked pieces. Similarly, early programs struggled in piece-versus-pawns endgames, where pawn advances could push the loss beyond the search horizon, leading to evaluations that favored merely prolonging the game.

Tree diagrams of such scenarios visually depict the horizon effect by showing truncated branches where tactical motifs, such as discovered attacks following a sacrifice, are hidden just past the fixed depth limit. In a typical diagram, the main line branches into delaying sequences (e.g., repetitive checks) that extend beyond the horizon, while the winning path, such as a pawn breakthrough, remains unexplored, illustrating how uniform depth truncation obscures critical structures.

Applications in Other Domains

The horizon effect manifests in robotics pathfinding, particularly in multi-agent scenarios, where limited search depth causes agents to overlook distant obstacles or better routes, resulting in congestion or suboptimal trajectories. In cooperative pathfinding algorithms, agents with a fixed planning horizon may maneuver to resolve immediate conflicts but inadvertently push blockages beyond their lookahead window, leading to deadlocks or inefficient paths in dynamic environments.

In economic modeling and market prediction, the horizon effect arises when AI-driven forecasts prioritize short-term signals from abundant alternative data while undervaluing long-term dependencies, such as cascading disruptions. Studies show that alternative data sources enhance prediction accuracy over near-term horizons (e.g., 1 to 3 months) but yield little improvement over longer periods, as models overlook delayed economic feedbacks such as shock propagation or sector interdependencies. This limitation can lead to misguided policy simulations, in which initial stability masks future volatility in simulated markets. An analogous effect appears in strategy games, where AI agents with constrained planning depths may ignore delays in resource production and unit construction, leading to imbalances.

In non-adversarial planning, the horizon effect contributes to shortages in multi-step supply chains by truncating forecasts at the planning boundary, a phenomenon often termed the "end-of-horizon effect." Rolling-horizon optimization in inventory routing problems demonstrates that short horizons (e.g., weekly cycles) deplete ending inventories to minimize immediate costs, amplifying shortages in subsequent periods and increasing overall expenses. The issue is prevalent in stochastic models, where unaccounted variability beyond the horizon disrupts supply chain coordination, as seen in blood supply chains and general inventory networks.
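A stylized sketch with invented numbers illustrates the end-of-horizon effect outside of games: a myopic planner with limited production capacity only covers the demand it can see, so a spike just past its window causes a shortage that a longer horizon would have prevented. The greedy policy below is a deliberate simplification, not the optimization models used in the cited studies.

```python
# Rolling-horizon planning toy model (all numbers invented). Capacity is 10
# units per period; a myopic planner builds no stock for demand it cannot see.
CAPACITY = 10
DEMAND = [5, 5, 5, 25, 5, 5]                    # spike in period 4

def total_shortage(horizon):
    inventory, shortage = 0, 0
    for t in range(len(DEMAND)):
        visible = DEMAND[t:t + horizon]         # demand inside the planning window
        produce = min(CAPACITY, max(sum(visible) - inventory, 0))
        inventory += produce - DEMAND[t]
        if inventory < 0:                       # unmet demand this period
            shortage -= inventory
            inventory = 0
    return shortage

for h in (1, 2, 6):
    print(f"planning horizon {h}: total shortage {total_shortage(h)}")
# Horizon 1 -> 15, horizon 2 -> 5, horizon 6 -> 0 in this toy instance.
```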

Mitigation Techniques

Depth Extensions

Depth extensions are selective search enhancements in game-tree algorithms such as alpha-beta pruning, designed to probe deeper into promising or critical branches beyond the standard search horizon. These methods address the horizon effect by increasing the depth in lines where threats or opportunities may lurk just out of reach, without uniformly expanding the entire tree, which would be computationally prohibitive. By focusing extensions on specific conditions, engines like those used in chess can resolve tactical sequences more accurately, though at the cost of additional computational effort.

Iterative deepening serves as a foundational companion strategy, incrementally increasing the search depth across successive iterations: the engine starts with a shallow search (e.g., one ply) and deepens progressively until the time budget is exhausted. This approach lets the engine refine move ordering from earlier iterations, reusing their results so that shallower depths need not be fully re-explored, and it mitigates interruptions by falling back on the best move from the last completed iteration if a deeper search is cut short. In practice it enables engines to reach greater effective depths in time-limited settings, because the bulk of the computation occurs in the final iteration.

Singular extensions target branches with low move diversity, such as forced sequences involving captures or threats, where a single move dominates all alternatives. Introduced for the Deep Thought program, this technique performs a reduced-depth scout search with a narrowed null window (beta lowered by a margin) to verify whether one move singularly outperforms the others; if the alternatives fail low, the dominant line is extended by one to two plies to ensure accurate evaluation near tight beta cutoffs. This dynamically identifies critical paths that uniform depth limits would overlook, directly countering horizon-induced errors in tactical positions.

Check and capture extensions automatically deepen the search in response to king threats or material exchanges, resolving volatile sequences that could otherwise propagate beyond the horizon. For checks, whether delivering or evading them, engines typically add one ply, reflecting the limited number of legal replies and preventing premature evaluation of unstable positions. Capture extensions similarly extend on exchanges so that recaptures and their material implications are fully assessed, keeping tactical noise from distorting the horizon. These are particularly effective in high-conflict scenarios where shallow searches might otherwise defer inevitable losses or gains.

Implementing depth extensions involves trade-offs, chiefly an increase in the number of nodes evaluated in the expanded branches, which slows the overall search but improves accuracy on tactical motifs. In the Stockfish chess engine, extensions on checks, captures, and singular lines contribute to deeper tactical insight, though they raise node counts significantly compared to non-extended searches; aggressive pruning helps offset this overhead while preserving accuracy in the critical lines. Empirical observations in engine development indicate that such extensions yield measurable strength gains in positions prone to horizon errors, justifying the computational cost.
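A minimal iterative-deepening driver might look like the sketch below, reusing the hypothetical `minimax`, `apply`, and `legal_moves` helpers assumed in the earlier sketch; real engines also carry move ordering and hash-table entries between iterations, which is omitted here.

```python
# Iterative deepening sketch: search to depth 1, 2, 3, ... and always keep the
# best move from the last fully completed iteration as a safe fallback.
import time

def iterative_deepening(state, time_budget_s, max_depth=64):
    deadline = time.monotonic() + time_budget_s
    best_move = None
    for depth in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break                               # return the last completed result
        scored = [(minimax(apply(state, m), depth - 1, False), m)
                  for m in legal_moves(state)]
        if scored:
            best_move = max(scored, key=lambda pair: pair[0])[1]
    return best_move
```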

Pruning and Heuristic Methods

Null-move pruning addresses the horizon effect by simulating a scenario in which the opponent passes their turn, allowing the search to probe for tactical weaknesses or threats that might otherwise be obscured beyond the search depth. The technique rests on the observation that passing is almost always inferior to the best legal move: the engine makes a null move, searches the resulting position at reduced depth (commonly with a reduction of R = 3 plies), and prunes the branch if the resulting score meets or exceeds the beta threshold of the alpha-beta search, indicating that the position is strong enough without further exploration. To mitigate risks such as zugzwang, where passing would actually be the best option, verification searches re-examine the position at full depth when initial conditions suggest a potential oversight. The method safely cuts irrelevant branches while preserving accuracy, as demonstrated in early implementations such as Kaissa, where it enhanced tactical detection without excessive computation.

Transposition tables mitigate the horizon effect by storing and reusing evaluations of identical positions reached through different move sequences, effectively extending the search horizon through hashed lookups rather than recomputation. These tables, which use Zobrist hashing for position keys, cache exact scores, bounds, and best moves, enabling immediate cutoffs or move suggestions in alpha-beta searches and improving overall tree efficiency. By avoiding redundant exploration of transpositions, the technique provides deeper insight into stable positions and reduces the likelihood of overlooking long-term consequences masked by shallow searches. First implemented in Mac Hack VI, transposition tables have become a cornerstone of game-tree search, with replacement schemes such as depth-preferred policies ensuring that high-value entries are retained.

History and killer heuristics enhance move ordering so that the search concentrates on promising lines first, prioritizing moves based on prior success in causing cutoffs and thereby focusing computational effort on paths likely to reveal critical threats or advantages. The killer heuristic maintains a short list of non-capturing moves that recently produced beta cutoffs at the same ply, trying them early, after the hash move and captures, to accelerate cutoffs. Complementing this, the history heuristic tracks cumulative cutoff successes across the search tree in a table indexed by move features (e.g., from and to squares), scaling bonuses by the square of the search depth so that deeper successes count more, which dynamically reorders quiet moves for better alpha-beta performance. Together, these methods can improve search efficiency by around 20% in practice, as shown in early experiments.

Evaluation tuning compensates for horizon limitations by incorporating features that capture long-term strategic elements, such as pawn structure or territorial control, into the leaf-node assessment, providing a more forward-looking estimate without extending the search depth. In chess, this involves weighting enduring advantages such as pawn islands or king safety more heavily in the evaluation function, penalizing positions that appear stable but invite delayed threats. In territory-oriented games, combining mobility with minimum-stone-ply (territory) metrics, tuned via regression or Bayesian methods, yields evaluations that are robust to horizon-induced optimism. Such tuning, which emphasizes horizon-aware components over immediate tactics, has been shown to improve decision quality in shallow searches by aligning static evaluations with deeper strategic realities.
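As a rough illustration of how null-move pruning slots into an alpha-beta search, the sketch below uses the negamax convention and assumes hypothetical `evaluate`, `legal_moves`, `apply`, `make_null_move`, and `in_check` helpers; it simply skips the null move when in check and forbids consecutive null moves, and it omits the verification re-search described above.

```python
# Alpha-beta (negamax) with a simple null-move pruning step, R = 3.
R = 3                                           # null-move depth reduction

def alphabeta(state, depth, alpha, beta, allow_null=True):
    if depth <= 0:
        return evaluate(state)
    # Null move: hand the turn to the opponent. If a reduced-depth search
    # still scores >= beta, the real position is almost certainly good enough
    # to cut off here, freeing depth for other branches.
    if allow_null and depth > R and not in_check(state):
        score = -alphabeta(make_null_move(state), depth - 1 - R,
                           -beta, -beta + 1, allow_null=False)
        if score >= beta:
            return beta
    for move in legal_moves(state):
        score = -alphabeta(apply(state, move), depth - 1, -beta, -alpha)
        if score >= beta:
            return beta                         # fail-hard beta cutoff
        alpha = max(alpha, score)
    return alpha
```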
Quiescence search is a specialized extension of game-tree search, commonly integrated into minimax or alpha-beta frameworks in chess engines, designed to address the horizon effect by selectively continuing the search in unstable positions until a stable, or "quiet," state is reached. The technique focuses on tactical moves such as captures, promotions, and checks, which can produce volatile evaluations if the search terminates prematurely at the main search depth. By ensuring that leaf-node evaluations occur only after these tactics are resolved, quiescence search prevents the misinterpretation of half-resolved sequences, such as a queen sacrifice that leads to material gain just beyond the horizon. The concept traces back to Claude Shannon's foundational 1950 paper on programming a computer to play chess, which advocated investigating variations until a quiescent position is reached in order to avoid superficial assessments.

In terms of implementation, quiescence search operates as a phase beyond the main search, either standalone or embedded within alpha-beta pruning. At the horizon of the primary search, the algorithm evaluates the current "stand-pat" position with the static material and positional evaluation, while recursively exploring only high-impact moves such as captures (ordered by most valuable victim / least valuable attacker, MVV-LVA) or checks, until no further such moves are legal or beneficial. Pruning mechanisms, including delta pruning (discarding lines where the potential gain does not exceed a margin) and static exchange evaluation (to assess capture sequences), limit the explosion in branching. If a position is already quiet, with no tactical options, the search terminates with the stand-pat score; otherwise it continues until quiescence or a depth limit is reached. This selective extension is detailed in Slate and Atkin's 1975 analysis of search modifications for chess, which emphasized its role in stabilizing evaluations amid tactical complexity.

The primary benefit of quiescence search is its ability to mitigate horizon-induced errors in critical scenarios, such as delayed checkmates or unbalanced exchanges, where a fixed-depth search might undervalue threats that the opponent can postpone. By resolving these tactical lines, it improves decision accuracy at relatively low computational cost, since effective move ordering confines the extra exploration to a modest subset of the game tree. Modern chess engines, including Komodo, incorporate refined quiescence searches to handle such volatility efficiently, contributing to stronger tactical play without disproportionate slowdowns.

Despite its effectiveness, quiescence search is limited in scope: it primarily targets tactical disruptions such as captures and checks, and it does not address non-tactical horizons such as long-term strategic maneuvers or positional shifts that evade capture-based extensions. In positions with extensive capture chains, poor move ordering can still inflate node counts, necessitating careful attention to depth limits and move ordering, as noted in Althöfer's 1991 examination of quiescence variants.
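A compact version of the idea, again in negamax form and assuming hypothetical `evaluate`, `capture_moves`, and `apply` helpers (with `capture_moves` returning captures and promotions, ideally MVV-LVA ordered), is sketched below; the main search would call this instead of `evaluate` when its depth reaches zero.

```python
# Quiescence search sketch: only tactical moves are explored past the horizon.
def quiescence(state, alpha, beta):
    stand_pat = evaluate(state)                 # score if no tactic is tried
    if stand_pat >= beta:
        return beta                             # already good enough: stand pat
    alpha = max(alpha, stand_pat)
    for move in capture_moves(state):           # captures/promotions only
        score = -quiescence(apply(state, move), -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```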

Broader AI Search Challenges

The horizon effect represents a specific manifestation of depth-limited search limitations in adversarial search, but it must be distinguished from other challenges such as plateauing in evaluation functions. Plateauing occurs when an evaluation function assigns nearly identical scores to a large number of positions, producing a flat evaluation landscape that hinders effective pruning by failing to establish tight bounds for branch elimination. This contrasts with the horizon effect, where the problem stems directly from the artificial cutoff in search depth, which allows postponable adverse events to evade detection regardless of how discriminating the evaluation is. In both cases search efficiency suffers, but plateauing calls for improved evaluation discriminability, while the horizon effect requires extensions beyond uniform depth limits.

The horizon effect is also a symptom of the broader combinatorial explosion in search, where the exponential growth of possible states (branching factor raised to the depth) renders exhaustive exploration infeasible. It is nevertheless distinct from state-space bloat in combinatorial puzzles such as the Rubik's Cube, where the sheer volume of configurations (approximately 4.3 × 10^19) overwhelms search without the adversarial postponement dynamic central to the horizon effect. This explosion necessitates selective search strategies, yet the horizon effect arises specifically when depth bounds interact with game dynamics to mask inevitable outcomes, amplifying errors in adversarial settings more than in neutral puzzle solving.

In modern hybrids, such as those employing Monte Carlo tree search (MCTS), the horizon effect persists despite advances in simulation-based exploration. For instance, MCTS simulations guided by policy and value networks can still overlook threats just beyond the effective search horizon, leading to over-reliance on the policy network's prior predictions rather than deeper tactical analysis. This shows how even sophisticated hybrids inherit classical search pathologies: finite computational budgets limit rollout depths, so the algorithm can favor network-guided moves that defer rather than avert losses.

Ongoing research addresses these intertwined challenges through new paradigms such as quantum search algorithms, which offer quadratic speedups for evaluating game trees by exploiting superposition to examine multiple paths simultaneously, and adaptive-horizon techniques, which adjust search depth dynamically based on state complexity, enabling targeted extensions in volatile positions without uniform overhead. These approaches aim to mitigate the horizon effect alongside combinatorial pressures, potentially improving the scalability of search in complex domains.

    Jun 1, 2022 · Taking advantage of this property, we propose Adaptive Subgoal Search (AdaSubS), a search method that adaptively adjusts the planning horizon.