
Quiescence search

Quiescence search is a selective search extension in computer game-playing algorithms, particularly for chess, that addresses the horizon effect by continuing the minimax or alpha-beta search at leaf nodes beyond the fixed search depth to resolve tactical threats, such as captures, checks, and promotions, until the position reaches a stable, "quiet" state where no significant changes are imminent, thereby improving the accuracy of static evaluations. Formalized in the mid-1970s by Larry R. Harris as part of early heuristic search applications to chess, quiescence search was designed to mitigate errors from evaluating volatile positions, using a threshold (δ) to prioritize and extend searches in tactically unstable nodes until threats are resolved. This approach dynamically assesses positional stability, expanding nodes with high potential for tactical shifts first, which helps avoid premature cutoffs in alpha-beta pruning and provides more reliable bounds on position values. In 1990, D. F. Beal generalized the concept using null moves (a hypothetical skip of a player's turn) to create a self-terminating quiescence search applicable to broader minimax problems, demonstrating significant efficiency gains in chess tactics by converging opposing bounds without full-width exploration. Modern implementations, such as those in engines like Stockfish, integrate quiescence search with neural network evaluations (e.g., NNUE) to handle deeper tactical sequences at leaf nodes, ensuring evaluations occur only on quiet positions and enhancing overall search precision in complex, capture-heavy scenarios. The technique remains a cornerstone of high-performance chess engines, complementing alpha-beta pruning by reducing errors from the horizon effect, where inevitable threats just beyond the search depth are overlooked, and enabling more stable decision-making under time constraints.

Background and Motivation

Definition and Purpose

Quiescence search is a selective extension to the alpha-beta pruning algorithm commonly used in computer game-tree searches, such as those for chess. It performs additional recursive searches at the leaf nodes of the primary search tree, specifically in non-quiet positions where recent moves like captures, promotions, or checks could drastically alter the static evaluation of the board. This technique focuses on resolving tactical instability by continuing the search only until the position reaches a stable, or "quiet," state devoid of immediate threats. The primary purpose of quiescence search is to reduce tactical oversights inherent in fixed-depth searches by incorporating relevant disruptive sequences beyond the main search horizon, thereby yielding more reliable position values and enhancing overall search accuracy. Without it, evaluations at the search frontier may be misleading due to unresolved captures or threats, leading to suboptimal move selections. By prioritizing these high-impact moves, quiescence search improves move ordering and supports better decisions in the alpha-beta process. In practice, quiescence search integrates with standard search algorithms by invoking a secondary, limited routine once the primary search reaches its nominal depth limit, where it generates and evaluates only a subset of moves, typically those involving material changes, while applying the same alpha-beta bounds to maintain efficiency. This extension does not expand the full game tree but stabilizes evaluations selectively, allowing programs to approximate deeper analysis in tactical scenarios without prohibitive computational cost. It primarily addresses the horizon effect, a limitation where tactics straddling the search depth go undetected. Concretely, quiescence search extends the evaluation phase at the leaves of the primary search tree, after the fixed depth limit is reached, to handle positions where immediate tactical opportunities, such as captures, could distort static evaluations. This ensures that assessments are made only after these volatile elements are resolved, thereby mitigating inaccuracies in the values propagated up the tree. When combined with alpha-beta pruning, quiescence search inherits and applies the same alpha and beta bounds from the main search, but restricts itself to a shallow, tactical exploration focused on non-quiet moves like captures and checks, avoiding the exponential growth of full-width branching. This limited scope allows alpha-beta cutoffs to operate effectively within quiescence lines, terminating branches early when the bounds converge, while excluding quiet moves to prevent unnecessary computation. The primary impact of this integration on search efficiency lies in its ability to localize the resolution of tactical motifs at the search horizon, obviating the need for proportionally deeper uniform searches across the entire tree and enabling more accurate decisions with controlled overhead. Experimental evaluations in chess tactics demonstrate substantial gains in cost-effectiveness compared to exhaustive full-width alternatives, as quiescence search achieves reliable position bounds through mechanisms such as null moves without extensive exploration.

The Horizon Effect

Description and Causes

The horizon effect is a fundamental limitation observed in fixed-depth search algorithms employed in artificial intelligence programs for two-player games, such as chess, where the algorithm terminates exploration at a predetermined depth and evaluates the resulting positions using a static evaluation function. This creates an artificial "horizon" beyond which potential threats, captures, or advantageous developments remain invisible, often leading to erroneous assessments that cause the program to select moves resulting in tactical oversights or blunders. The phenomenon was first systematically described by Hans Berliner, who identified it as a source of unpredictable evaluation errors arising from the interaction between depth-limited search and imperfect static evaluations. The primary cause of the horizon effect lies in the inherent constraints of fixed-depth minimax search, which assumes that positions at the maximum search depth are sufficiently stable for reliable evaluation, even when they are dynamically volatile. In practice, this leads to premature convergence on seemingly favorable evaluations, as the algorithm cannot discern sequences that unfold just beyond the horizon, such as delayed material losses or hidden counterplay. The issue is particularly pronounced in games with high branching factors and long tactical lines, where adversaries can exploit the depth limit by interposing moves that postpone negative outcomes, thereby masking them from the search. Alpha-beta pruning, while efficient, can further compound the problem by selectively eliminating branches that might reveal horizon-crossing developments. Theoretically, the horizon effect stems from the mismatch between the search's uniform depth limit and the varying quiescence horizons of different positions, where "quiet" moves (those not involving captures, checks, or other disruptions) allow positions to appear deceptively safe at the cutoff. In contrast, "noisy" moves generate immediate changes that demand deeper analysis, but fixed-depth searches treat all nodes equally, failing to extend scrutiny where needed and thus permitting quiet sequences to conceal tactical threats. Berliner distinguished negative instances, where dangers are missed, from positive ones, where illusory opportunities are pursued, both rooted in this evaluation shortfall. To address this limitation, quiescence search extends analysis selectively in non-quiescent positions to ensure more accurate assessments.

Illustrative Examples

One illustrative example of the horizon effect occurs in positions where an inevitable material loss is delayed just beyond the fixed search depth, causing a chess program to misjudge the position as safer than it is. A classic scenario involves a white pawn on the seventh rank poised for promotion to a queen, opposed by a black rook delivering a series of checks to the white king. If the search depth is limited to 7 plies, black's checks can push the promotion 14 plies into the future, well beyond the horizon, leading the program to evaluate the position as favorable for black, despite the promotion being unavoidable and resulting in a decisive material advantage for white. In this scenario, the principal variation might show black's rook checking repeatedly (e.g., Ra8+, Kb1, Rb8+, Ka1, Ra8+, and so on), forcing the white king to zigzag without the pawn advancing, until the evaluation at the leaf nodes assumes no immediate threat. This false sense of security arises because the program's static evaluation at depth 7 overlooks the force driving the pawn forward once the checks exhaust themselves. An advanced example demonstrates how sequences of multiple captures can exacerbate the horizon effect, creating a temporary illusion of material balance. Consider the position given by FEN 5r1k/4Qpq1/4p3/1p1p2P1/2p2P2/1p2P3/3P4/BK6 b - - 0 1, with black to move, white's queen on e7, and white's bishop on a1. A program searching to 8 plies with basic quiescence (extending only captures) may opt to advance its queenside pawns (e.g., b5-b4 followed by further pushes), sacrificing up to four pawns in recaptures by white while believing that doing so delays or avoids the larger loss threatened by the queen on e7. In reality, these forcing advances merely postpone the queen's capture beyond the horizon, leading to a greater net loss for black. This setup mimics a fork-like motif, in which each pawn advance attacks a white unit and draws it forward, but the non-quiet position at the horizon masks the overall tactical collapse. The evaluation might score the position as roughly even after the pawn sacrifices, ignoring the queen's capture two plies later.

Core Mechanism

Basic Principle

Quiescence search addresses the horizon effect, where fixed-depth searches in games like chess may overlook tactical threats just beyond the search horizon, by selectively extending the evaluation in unstable positions. The core principle of quiescence search is to continue the search beyond the main fixed depth only for "noisy" moves, such as captures, promotions, and checks, that could lead to significant changes in position value, until a quiescent position is reached where a static evaluation can be reliably applied. This extension ensures that evaluations occur in stable states, avoiding the pitfalls of assessing volatile positions prone to tactical disruptions. A quiescent position is defined as a stable configuration with no pending captures or other tactical moves available to the side to move that would result in material loss or a substantial shift in positional value. In such positions, the evaluation function provides a dependable estimate of the game's state, as further moves are unlikely to cause wild swings in the assessed value. To maintain efficiency during these extensions, quiescence search employs alpha-beta windows, which prune branches outside the current value bounds, focusing the search on lines relevant to the final decision. This windowing mechanism integrates seamlessly with the broader alpha-beta pruning framework, allowing deeper tactical exploration without excessive computational overhead.
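To make the notion of a quiescent position concrete, the following minimal sketch, written in Python against the python-chess library, treats captures, promotions, and checking moves as the noisy moves; the helper names noisy_moves and is_quiet are illustrative assumptions rather than standard terminology, and production engines apply finer-grained criteria.
python
import chess

def noisy_moves(board: chess.Board):
    """Yield the tactical ("noisy") moves: captures, promotions, and checks."""
    for move in board.legal_moves:
        if board.is_capture(move) or move.promotion or board.gives_check(move):
            yield move

def is_quiet(board: chess.Board) -> bool:
    """Treat a position as quiescent when the side to move is not in check
    and has no noisy moves that could swing the static evaluation."""
    if board.is_check():
        return False
    return next(noisy_moves(board), None) is None

# Example: the initial position has no captures, promotions, or checks available.
print(is_quiet(chess.Board()))  # True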

Search Extensions and Termination

In quiescence search, the extension process recursively applies the search exclusively to capture moves, with optional inclusion of checks or promotions, to capture tactical threats or gains that could alter the position's value. This selective expansion ensures that only moves likely to resolve instability are explored, maintaining efficiency while mitigating the horizon effect. To prevent computational explosion or infinite recursion in long tactical lines, implementations typically rely on techniques such as delta pruning and move ordering, though some may impose a maximum depth limit. At each node in quiescence search, a "stand-pat" score, the static evaluation of the position without making a move, is first computed. If this value is greater than or equal to beta, the search terminates immediately with that value. Otherwise, only tactical moves are explored. Additional methods, such as delta pruning (ignoring moves that cannot recoup a deficit) and futility pruning, are commonly used to limit the search of non-promising tactical lines. Termination occurs under specific conditions to balance thoroughness and performance: the search halts when no legal captures (or included moves like checks) remain that could improve the alpha bound, the stand-pat score meets or exceeds beta, the position is quiet, meaning no tactical moves generate significant value swings, or a cutoff is triggered via alpha-beta pruning, indicating the current line cannot improve the best option. Upon termination, the node uses the stand-pat or best explored static evaluation, such as material balance adjusted for positional factors, to assign a final score. To handle potential cycles, such as perpetual checks or repeating capture sequences that could lead to infinite loops, repetition tables track position histories across the search tree; if a position repeats, it is typically scored as a draw, consistent with chess repetition rules, to avoid redundant exploration and ensure convergence. Transposition tables further aid efficiency by storing and reusing evaluations for identical positions encountered via different move orders.
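Delta pruning, as described above, amounts to a cheap test applied to each candidate capture before it is searched. The sketch below, assuming the python-chess library, illustrative piece values, and an arbitrary 200-centipawn safety margin, shows one way such a guard might look; it would be called inside the capture loop of the quiescence routine.
python
import chess

# Illustrative centipawn values; real engines tune these.
PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
                chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}
DELTA_MARGIN = 200  # assumed safety margin in centipawns

def can_improve_alpha(board: chess.Board, move: chess.Move,
                      stand_pat: int, alpha: int) -> bool:
    """Delta pruning: skip a capture if even winning the victim outright
    (plus a safety margin) cannot lift the score above alpha."""
    victim = board.piece_at(move.to_square)
    # En passant captures land on an empty square, hence the pawn fallback.
    victim_value = PIECE_VALUES[victim.piece_type] if victim else PIECE_VALUES[chess.PAWN]
    return stand_pat + victim_value + DELTA_MARGIN > alpha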

Implementation

Pseudocode Overview

The quiescence search algorithm extends the alpha-beta search by selectively examining only tactical moves, such as captures, beyond the main search horizon to stabilize position evaluations. A standard high-level pseudocode representation of this algorithm, adapted for a typical minimax framework in games like chess, is as follows.
pseudocode
function quiesce(alpha, beta):
    stand_pat = evaluate()  // Static evaluation of the current position
    if stand_pat >= beta:
        return beta  // Fail-high cutoff: the position is already so good that the opponent would avoid this line
    if alpha < stand_pat:
        alpha = stand_pat  // Update alpha to the stand-pat value if better
    for each legal capture move in ordered captures:  // Only generate and prioritize capture moves
        make_move(capture)
        score = -quiesce(-beta, -alpha)  // Recursive call with negated bounds for opponent
        undo_move(capture)
        if score >= beta:
            return beta  // Beta cutoff on this branch
        if score > alpha:
            alpha = score  // Update alpha with better score for current player
    return alpha  // Return the best achievable score
This demonstrates the core structure of quiescence search, which is typically invoked from the main alpha-beta search function when the search depth reaches zero, allowing further exploration of volatile positions. The function begins by computing a static (stand-pat) evaluation of the current position, which serves as a lower bound and potential leaf-node value if no further improvements are found; this avoids evaluating unstable positions where recent captures might skew the score. If the stand-pat value exceeds the beta bound, a beta cutoff occurs immediately, as the position is already good enough for the side to move that the opponent would never allow it, so the branch can be pruned. The loop then iterates exclusively over legal capture moves, which are generated and often ordered by heuristics like most-valuable-victim least-valuable-attacker (MVV-LVA) to prioritize high-impact tactics first; non-capture moves are ignored to focus on resolving imbalances that could alter evaluations dramatically. For each capture, the move is made, and the function recurses with negated alpha and beta bounds to simulate the opponent's turn, maintaining the negamax principle. After undoing the move, the returned score is checked against the bounds: a beta cutoff prunes the branch if the score is too high for the current maximizer, while alpha is updated if a better option is found. The search terminates when no more captures remain, returning the updated alpha as the quiesced value, ensuring the position is "quiet" enough for reliable static evaluation. This flow integrates seamlessly with the main search by providing a bounded extension that mitigates the horizon effect without exhaustive exploration.
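For concreteness, the pseudocode can be realized in Python using the python-chess library. The sketch below is an illustration under simplifying assumptions (a material-only evaluate function, untuned piece values, and a plain fixed-depth negamax caller) rather than any particular engine's implementation; it does, however, show the hand-off from the main search to quiesce when the depth counter reaches zero.
python
import chess

PIECE_VALUES = {chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
                chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    """Toy static evaluation: material balance from the side to move's perspective."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == board.turn else -value
    return score

def quiesce(board: chess.Board, alpha: int, beta: int) -> int:
    stand_pat = evaluate(board)             # stand-pat score
    if stand_pat >= beta:
        return beta                         # fail-high: opponent avoids this line
    alpha = max(alpha, stand_pat)
    for move in board.legal_moves:
        if not board.is_capture(move):
            continue                        # extend only on captures
        board.push(move)
        score = -quiesce(board, -beta, -alpha)
        board.pop()
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

def search(board: chess.Board, depth: int, alpha: int, beta: int) -> int:
    """Fixed-depth negamax that delegates its leaf nodes to quiescence search."""
    if board.is_checkmate():
        return -100_000                     # side to move is mated
    if board.is_stalemate():
        return 0
    if depth == 0:
        return quiesce(board, alpha, beta)  # extend instead of evaluating a noisy leaf
    for move in board.legal_moves:
        board.push(move)
        score = -search(board, depth - 1, -beta, -alpha)
        board.pop()
        if score >= beta:
            return beta                     # fail-hard beta cutoff
        alpha = max(alpha, score)
    return alpha

# Example invocation (a shallow depth keeps the toy search fast):
print(search(chess.Board(), 2, -10**9, 10**9))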

Practical Variations

In practical implementations of quiescence search, delta pruning is commonly employed to skip captures that offer insufficient material gain relative to the current evaluation threshold, thereby reducing the branching factor and computational overhead without significantly compromising accuracy. This technique prunes moves where the potential gain, often measured against a fixed delta such as the value of a pawn or queen, cannot raise the score above the alpha bound, as seen in engines that integrate it with static exchange evaluation to filter low-value exchanges early. Promotions are typically handled separately within quiescence search to prioritize tactical lines involving advances to the final rank, often by including all promotion moves regardless of capture status or by limiting them to queen promotions to balance thoroughness and efficiency. Checks and check evasions are also commonly included alongside captures to ensure that tactical threats involving checks are resolved. This adaptation ensures that critical tactics are not overlooked, particularly in positions near material equality. Additionally, some engines optionally extend the search to include quiet moves in clearly winning positions, such as those with a material advantage exceeding a certain threshold, to verify the stability of the advantage and avoid over-reliance on stand-pat evaluations. Optimizations in move ordering further enhance quiescence search performance, with the most valuable victim/least valuable attacker (MVV/LVA) heuristic serving as a foundational method for sorting captures by prioritizing high-value targets captured by low-value pieces. For instance, a pawn capturing a queen would rank higher than a queen capturing a pawn, promoting early cutoffs in alpha-beta pruning. History heuristics complement this by assigning scores to moves based on their historical success in causing cutoffs across searches, often replacing or augmenting the LVA component to refine ordering dynamically.
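MVV-LVA ordering can be expressed as a simple sort key over the generated captures. The fragment below, again written against python-chess with a hypothetical mvv_lva_key helper, sorts captures so that valuable victims attacked by cheap pieces are searched first; history-heuristic scores, when used, would typically be blended into or layered on top of such a key.
python
import chess

def mvv_lva_key(board: chess.Board, move: chess.Move) -> int:
    """Most valuable victim, least valuable attacker: the victim's rank dominates,
    and the attacker's rank only breaks ties (piece types run pawn=1 .. king=6)."""
    victim = board.piece_at(move.to_square)
    victim_type = victim.piece_type if victim else chess.PAWN   # en passant fallback
    attacker_type = board.piece_at(move.from_square).piece_type
    return victim_type * 10 - attacker_type

def ordered_captures(board: chess.Board):
    """Captures sorted so that, e.g., pawn-takes-queen is tried before queen-takes-pawn."""
    captures = [m for m in board.legal_moves if board.is_capture(m)]
    return sorted(captures, key=lambda m: mvv_lva_key(board, m), reverse=True)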

History and Development

Origins in AI Research

Quiescence search originated in the foundational work of researchers addressing the challenges of game-tree search in chess programs during the mid-20th century. The concept was first proposed by Claude Shannon in his 1950 paper on programming computers to play chess, where he emphasized evaluating positions only in quiescent states, those free of immediate threats or ongoing exchanges, to ensure accurate assessments and avoid errors from incomplete tactical sequences. This approach aimed to mitigate limitations in the minimax algorithm by extending searches selectively until stability was achieved, laying the groundwork for handling dynamic board positions in resource-constrained computing environments. The technique was formalized in 1975 by Larry R. Harris in his IJCAI paper "The heuristic search and the game of chess: a study of quiescence, sacrifices, and plan oriented play," which applied heuristic search to chess and introduced a threshold-based criterion for extending the search to resolve tactical instability. By the late 1960s and 1970s, quiescence search transitioned from theoretical discussion to practical implementation in early chess programs, driven by the need to overcome fixed-depth search constraints. Richard Greenblatt's MacHack VI, developed at MIT in 1967, incorporated a basic form of quiescence search, extending its standard five-ply depth to analyze captures and threats, which enabled the program to achieve Class C player strength and compete in human tournaments. Similarly, the Belle chess system, initiated by Ken Thompson and Joe Condon in the mid-1970s, utilized a quiescence search prioritizing captures, enhancing its performance in hardware-accelerated evaluations and contributing to its success in world computer chess championships. These implementations emerged from manual analyses of search failures, where programs like MacHack overlooked postponed threats due to abrupt terminations. A further milestone occurred in the early 1980s, as quiescence search was formalized as an enhancement to alpha-beta search within broader research on variable-depth searching. Hermann Kaindl's 1982 paper presented dynamic control mechanisms for quiescence search, allowing adaptive extensions based on position volatility, and was presented at the Sixth European Meeting on Cybernetics and Systems Research. This work linked quiescence directly to discussions of the horizon effect, the tendency of depth-limited searches to miss inevitable dangers just beyond their reach, prompting its adoption to refine tactical accuracy in minimax-based systems.

Evolution and Adoption

Following its initial conceptualization in AI research, quiescence search evolved significantly in the post-1980s era through integration into advanced chess engines, enhancing their tactical depth and evaluation stability. In the late 1980s, leading programs incorporated quiescence searches focused on captures and checks to mitigate the horizon effect in selective deepening, allowing more reliable assessments at search frontiers. By the 1990s, the technique was refined and scaled in Deep Blue, where it formed a core component of the program's alpha-beta search, which processed millions of positions per second while extending evaluations in volatile tactical lines. Concurrently, refinements such as null-move pruning were introduced to complement quiescence search, enabling aggressive branch pruning in quiet positions and reducing computational overhead without sacrificing accuracy in capture sequences. In modern chess engines, quiescence search has become ubiquitous, serving as a standard extension to principal variation search in programs like Stockfish, which employs it alongside advanced move ordering to prioritize high-impact captures and checks. MCTS-based engines inspired by AlphaZero, such as Leela Chess Zero, instead use neural network evaluations at leaf nodes to assess positions, providing stable evaluations in tactical scenarios without traditional quiescence extensions. Beyond chess, the approach has been extended to other combinatorial games; in shogi engines using alpha-beta variants, quiescence search handles promotion and capture chains to avoid premature cutoffs in dynamic positions. Similar tactical extensions appear in Go AI systems, where analogs of quiescence search focus on ko fights and capturing sequences to stabilize simulations. As of the 2020s, quiescence search has increasingly been combined with neural network evaluations in hybrid architectures, boosting accuracy in volatile positions by pairing traditional capture extensions with learned positional insights. For instance, Stockfish's NNUE (efficiently updatable neural network) evaluation is applied in conjunction with quiescence search, achieving superior performance in tactically volatile positions compared to pure heuristic methods. This synergy has proven particularly effective in complex endgames, where neural-guided quiescence improves evaluation accuracy compared to traditional static evaluations.

Advantages and Limitations

Key Benefits

Quiescence search dramatically reduces the horizon effect by extending the principal variation search into tactical lines until a stable, quiet position is reached, thereby preventing premature evaluations that lead to blunders in dynamic middlegame scenarios. This extension ensures more accurate assessments of threats like captures, pins, and forks, resulting in superior move selection during tactical exchanges. In terms of efficiency, quiescence search adds minimal computational overhead because it limits extensions to a targeted subset of high-impact moves, such as winning captures or checks, rather than the full move set. This approach accelerates position evaluation compared to uniformly deeper searches, allowing programs to achieve effective depth increases in critical areas without proportional time costs. Quantitative analyses demonstrate its impact, with quiescence search contributing to significant reductions in tactical errors across benchmark positions and effective additions of search plies in tactical contexts that boost chess program strength.

Potential Drawbacks

Quiescence search introduces notable overhead costs in chess engines by extending the evaluation at leaf nodes to resolve tactical sequences, such as capture chains, which can substantially increase overall search time. In highly tactical positions, these extensions may lead to prolonged explorations that amplify computational demands, with quiescence searches often consuming 50% or more of the total processing budget due to the high frequency of invocations across the game tree. This overhead is particularly pronounced in positions with multiple hanging pieces or checks, where the algorithm recursively analyzes responses until a stable state is reached, potentially slowing down the engine if not carefully controlled. A key risk associated with this approach is over-searching, where loose definitions of quiescence, such as including all checks and captures, can result in excessively deep branches that fail to terminate efficiently or explore irrelevant variations without improving decision quality. For instance, persistent threats in the position may cause the search to expand indefinitely unless explicit depth limits are imposed, exacerbating the computational burden without proportional gains in accuracy. Practical variations, like restricting quiescence to winning captures or using delta pruning, can mitigate this to some extent by focusing only on high-impact moves. Despite these extensions, quiescence search has inherent limitations in addressing long-term strategic threats, as it prioritizes immediate tactical resolutions over subtle positional factors like structural weaknesses or the gradual advancement of passed pawns. Such features often remain unstable over many plies without involving captures or checks, evading the quiescence criteria and leading to evaluations based on prematurely "quiet" but strategically flawed positions. Consequently, the algorithm may undervalue or entirely miss quiet but potent moves, those without immediate tactical consequences, that shape the game's mid- to long-term dynamics. Mitigating these drawbacks presents ongoing challenges, particularly in balancing quiescence depth limits to curb slowdowns while preserving tactical insight, a task made harder on resource-constrained hardware where even modest extensions can dominate runtime. Engineers must tune parameters like maximum depth or the move types included, often through empirical testing, to avoid both over-searching and incomplete evaluations, though no universal solution eliminates the trade-offs entirely.
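Two of the mitigations mentioned above, capping the quiescence depth and restricting extensions to apparently winning captures, can be sketched as small guard predicates applied inside the capture loop. The fragment below uses python-chess; the ply cap of 8 and the crude victim-versus-attacker comparison standing in for a full static exchange evaluation are illustrative assumptions.
python
import chess

MAX_QUIESCENCE_PLY = 8   # assumed hard cap on quiescence depth

def within_quiescence_budget(qply: int) -> bool:
    """Stop extending once the quiescence line exceeds the configured cap."""
    return qply < MAX_QUIESCENCE_PLY

def looks_winning(board: chess.Board, move: chess.Move) -> bool:
    """Crude stand-in for static exchange evaluation: extend only captures where
    the victim is at least as valuable as the attacker (piece types run pawn=1..king=6)."""
    victim = board.piece_at(move.to_square)
    victim_type = victim.piece_type if victim else chess.PAWN   # en passant fallback
    attacker_type = board.piece_at(move.from_square).piece_type
    return victim_type >= attacker_type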

References

  1. [1]
    [PDF] the heuristic search and the game of chess a study of quiescence ...
    The purpose of this paper is to describe the results of applying the formal heuristic search algorithm to the game of chess, and discuss the impact of this work ...
  2. [2]
    A generalised quiescence search algorithm - ScienceDirect
    This paper describes how the concept of a null move may be used to define a generalised quiescence search applicable to any minimax problem.
  3. [3]
    A Theoretical Analysis of the Development and Design Principles of ...
    May 10, 2025 · Modern chess engines achieve high performance by tightly integrating three core components: Alpha-Beta search, quiescence search, and a learned ...
  4. [4]
    [PDF] COMPUTER CHESS AND SEARCH
    Apr 3, 1991 · Quiescence Search. Even the earliest papers on computer chess recognized the importance of evaluating only positions which are ''relatively ...
  5. [5]
    [PDF] Searching to Variable Depth in Computer Chess - IJCAI
    ABSTRACT. This paper discusses some methods for guiding the search of conventional chess programs to variable depth. The motivation for investigating such ...
  7. [7]
    [PDF] Chess as Problem Solving: The Development of a Tactics Analyzer
    Horizon Effect can not be dealt with adequately by merely shifting the horizon. 2. The Positive Horizon Effect. The Positive Horizon Effect is different in ...
  8. [8]
    [PDF] A REVIEW OF GAME-TREE PRUNING†
    It is the quality of this quiescence search which controls the severity of the horizon effect exhibited by all chess programs. Since the evaluation ...
  9. [9]
    [PDF] 6 ADVERSARIAL SEARCH - Artificial Intelligence: A Modern Approach
    The minimax algorithm performs a complete depth-first exploration of the game tree.
  10. [10]
    Horizon Effect - Chessprogramming wiki
    The Horizon Effect is caused by the depth limitation of the search algorithm, and became manifest when some negative event is inevitable but postponable.
  11. [11]
    [PDF] The Nature of Minimax Search
    understanding may be used to increase the efficiency of minimax search. Chapters 2 and 3 describe models of minimax for game trees that attempt to capture ...
  12. [12]
    Quiescence Search - Chessprogramming wiki
    The purpose of this search is to only evaluate "quiet" positions, or positions where there are no winning tactical moves to be made.
  13. [13]
    Repetitions - Chessprogramming wiki
    Repetitions of positions may happen during game play and inside the search of a chess program due to reversible moves played from both sides.
  14. [14]
    Add delta pruning in quiescence search · Issue #16 · vinc/littlewing
    Oct 21, 2017 · The idea of delta pruning in quiescence search is pretty simple: // Delta pruning let delta = 1000; // Queen value if stand_path < alpha ...
  15. [15]
    Quiescence Search increases required time by a factor of 20
    Dec 9, 2019 · The point of a quiescence search (QSearch) is to get a better static evaluation. By the number of nodes you're searching, it seems that you're just ...
  16. [16]
    Only captures for quiescent search? - TalkChess.com
    Aug 2, 2024 · Promotions are tactical moves and thus should be searched. In QS however, you can limit them to queen promotions only. With check extension in main search, you ...
  17. [17]
    MVV-LVA - Chessprogramming wiki
    MVV-LVA (Most Valuable Victim - Least Valuable Aggressor), is a simple heuristic to generate or sort capture moves in a reasonable order.
  19. [19]
    History Heuristic - Chessprogramming wiki
    History Heuristic, a dynamic move ordering method based on the number of cutoffs caused by a given move irrespectively from the position in which the move ...
  20. [20]
    Stockfish Docs - Threads - GitHub Pages
    Sep 20, 2024 · How the time limits are set for playing a game. For chess engine testing, Stockfish uses 10+0.1s (10 seconds for the game, 0.1 seconds for every ...
  22. [22]
    [PDF] XXII. Programming a Computer for Playing Chess1
    This paper is concerned with the problem of constructing a computing routine or. "program" for a modern general purpose computer which will enable it to ...
  23. [23]
    COMPUTER CHESS STRENGTH - ScienceDirect.com
    The chess program Belle was modified to search exactly 3 plies (half moves) and then enter its normal quiescence evaluation. This program (P3) was then ...
  24. [24]
    Null Move Pruning - Chessprogramming wiki
    He proposed a so called Null Move Quiescence Search (NMQS) as a selective layer between a regular full width search and quiescence search, later dubbed ...
  25. [25]
    [PDF] Minimaxing: Theory and Practice
    Kaindl, H. 1982. Dynamic Control of the. Quiescence Search in Computer Chess. In. Proceedings of the Sixth European Meeting on Cybernetics and Systems Research, ...
  26. [26]
    Chess Programming Part V: Advanced Search - GameDev.net
    Sep 6, 2000 · Such relatively stable positions are called "quiet" or "quiescent", and they are identified via "quiescence search". The basic concept of ...
  27. [27]
    [PDF] Quiescence Search for Stratego - Maarten PD Schadd
    To overcome both problems, we designed an algorithm to compute an Evaluation-Based Quiescence Search (EBQS) value. ... Dynamic Control of the Quiescence Search ...