
Beam search

Beam search is a heuristic search algorithm used in artificial intelligence to explore a graph or search space by maintaining a fixed-size set, known as the beam, of the most promising partial solutions or nodes at each step, thereby balancing computational efficiency with the ability to find high-quality solutions in large state spaces. Developed as part of the Harpy speech recognition system in 1976, beam search addressed the challenges of decoding connected speech by pruning less promising paths during search, enabling recognition of continuous utterances from a vocabulary of up to 1,011 words with real-time performance on limited hardware. In general AI search problems, it operates by generating successors from the current beam and selecting the top k candidates based on a heuristic evaluation function, where k is the beam width that trades off solution quality against time and memory usage; unlike exhaustive methods like breadth-first search, it is incomplete but often effective for approximate optimization. Beam search gained prominence in natural language processing, particularly for sequence-to-sequence models, where it decodes the most likely output sequences—such as translations—by keeping a beam of partial hypotheses and expanding them left-to-right, significantly improving over greedy decoding while remaining computationally feasible. Its variants, including length-normalized scoring and coverage penalties, mitigate biases toward shorter sequences in applications like machine translation and text generation, and it remains a standard decoding strategy in modern large language models despite ongoing research into alternatives for better diversity and quality.

Fundamentals

Definition

Beam search is a heuristic search algorithm in artificial intelligence that explores a graph or tree by expanding the most promising nodes from a limited set of candidates, thereby balancing completeness and computational efficiency in large search spaces. It operates by maintaining a "beam" of partial solutions, systematically pruning less viable paths to prevent the exponential explosion of possibilities inherent in exhaustive methods. This approach sacrifices guaranteed optimality for practicality, making it suitable for problems where exact solutions are computationally prohibitive. A key parameter in beam search is the beam width, denoted as k or β, which specifies the fixed number of top-scoring partial solutions retained at each expansion step. By limiting the candidates to this width, the algorithm focuses resources on high-potential paths while discarding others based on a scoring function, such as path cost or probability. The choice of beam width trades off solution quality against speed: narrower beams prioritize efficiency but risk suboptimal results, while wider beams improve accuracy at higher cost. To illustrate, consider a basic sequence generation task, such as assembling a short phrase from a vocabulary of words in a graph where nodes represent partial sequences and edges denote possible extensions. Starting from an empty sequence as the initial node, beam search expands successors (e.g., adding one word at a time), evaluates each new partial sequence with a score (like cumulative probability), and prunes all but the top-k candidates (say, k=3) for the next step. Over iterations, this process narrows the search, potentially yielding a coherent phrase like "the quick brown" from myriad combinations, while discarding low-scoring alternatives early. Beam search can be viewed as a variant of breadth-first search that limits exploration to the most promising nodes rather than all nodes at each level.
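The core expand-and-prune operation can be stated compactly. The following minimal Python sketch is illustrative only; the successor and scoring functions are assumed to be supplied by the problem, and the names are hypothetical. It extends every hypothesis in the beam and keeps the k highest-scoring results.
import heapq

def beam_step(beam, successors, step_score, k):
    # beam: list of (partial_solution, cumulative_score) pairs
    # successors(partial): returns the possible extensions of a partial solution
    # step_score(partial, extension): incremental (log-)score of taking that extension
    candidates = []
    for partial, score in beam:
        for ext in successors(partial):
            candidates.append((partial + [ext], score + step_score(partial, ext)))
    # Keep only the k highest-scoring candidates for the next iteration.
    return heapq.nlargest(k, candidates, key=lambda cand: cand[1])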

Comparison to Other Search Strategies

Beam search differs from exhaustive search strategies, such as depth-first search (DFS) or breadth-first search (BFS), by limiting exploration to a fixed number of promising paths rather than examining the entire search space; exhaustive methods guarantee completeness and optimality, but at exponential computational cost. This makes beam search incomplete, as it may discard paths leading to the global optimum, but it achieves substantial efficiency gains, with cost scaling linearly in the beam width k and depth n instead of growing as the branching factor b raised to the power n. For instance, in very large state spaces, exhaustive search becomes infeasible, while beam search approximates solutions rapidly. Compared to greedy search, which selects only the single best immediate successor at each step, similar to hill-climbing and prone to getting stuck in local optima, beam search expands multiple hypotheses simultaneously via its beam width parameter k > 1, fostering a balance between local and global considerations for improved solution quality without full exploration (a worked contrast appears after the comparison below). This multi-path approach enhances the chances of escaping suboptimal traps that greedy methods fall into, though both remain incomplete and lack formal guarantees. Beam search offers no optimality or completeness guarantees in general, as its fixed-width pruning can overlook superior solutions if early promising paths diverge from the optimum; optimality holds only when k is large enough to retain the entire frontier, effectively reverting to exhaustive search. Failure modes include premature convergence to local optima, such as in sequence generation where high-scoring partial sequences block diverse alternatives leading to better overall outcomes.
The main trade-offs can be summarized by aspect:
  • Efficiency: scales well for large spaces (e.g., O(n·k·b·log k) time versus O(b^n) for exhaustive search), enabling practical use in applications such as machine translation; the drawbacks are higher constant factors than greedy search, since multiple paths are maintained, and the need to tune k.
  • Solution quality: explores diverse paths to mitigate greedy search's local-optima issues, often yielding better approximations than single-path methods; the risk is missing the global optimum, e.g., when the beam excludes a low-scoring early step that is essential for later high rewards.
  • Resource use: memory is bounded by k, avoiding the explosion of exhaustive or broad BFS; the drawback is reduced diversity if the states in the beam homogenize, amplifying suboptimal convergence.
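The sketch below makes the greedy-versus-beam contrast concrete; the two-step transition probabilities are invented for illustration. Greedy search commits to the locally best first token and finishes with a lower-probability sequence than a width-2 beam, which keeps the initially weaker prefix alive.
# Toy two-step model: P(first token) and P(second token | first token), values chosen for illustration.
first = {"a": 0.6, "b": 0.4}
second = {"a": {"x": 0.3, "y": 0.2},
          "b": {"x": 0.9, "y": 0.1}}

def greedy():
    t1 = max(first, key=first.get)            # commits to "a" (0.6), the locally best first step
    t2 = max(second[t1], key=second[t1].get)
    return [t1, t2], first[t1] * second[t1][t2]

def beam(k=2):
    kept = sorted(first.items(), key=lambda kv: kv[1], reverse=True)[:k]
    candidates = [([t1, t2], p1 * p2)
                  for t1, p1 in kept
                  for t2, p2 in second[t1].items()]
    return max(candidates, key=lambda cand: cand[1])

print(greedy())  # (['a', 'x'], ~0.18): trapped by the high first-step score
print(beam())    # (['b', 'x'], ~0.36): the beam retains "b" and finds the better sequence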

Algorithm Mechanics

Step-by-Step Procedure

Beam search proceeds iteratively, maintaining a fixed-size set (the beam) of the most promising partial solutions based on a scoring function, typically expanding and pruning at each level to balance exploration and efficiency. The algorithm is particularly suited for problems with large branching factors, such as pathfinding in graphs or sequence generation in probabilistic models.

Initialization

The process begins with an initial beam containing one or more starting partial solutions, often a single empty sequence or root node assigned an initial score of 0 (or the log-probability of the starting state). This sets the foundation for expansion without prior commitments to suboptimal paths.

Iterative Expansion

At each iteration, corresponding to a depth level in the search tree:
  • For every partial solution in the current beam, generate all possible successors (next states or tokens) using the problem's transition rules or model predictions.
  • Compute a score for each new partial solution, commonly the cumulative log-probability of the sequence to date, which encourages high-likelihood paths while avoiding numerical underflow in probability computations.
  • Collect all candidate partial solutions, rank them in descending order of score, and prune to retain only the top-β highest-scoring ones, forming the new beam. This discards lower-scoring alternatives to limit computational growth.
The iteration repeats until the termination condition is met. Heuristic scoring functions, such as cumulative probability, guide the ranking to prioritize promising paths.

Termination

The algorithm typically terminates when the maximum depth is reached or when all hypotheses in the beam have reached a complete state, such as generating an end-of-sequence (EOS) indicator in domains like machine translation. Completed hypotheses are not expanded further but are tracked separately to select the highest-scoring one upon termination. If no complete solutions are found by the end, the highest-scoring partial solution from the final beam is returned. The following Python-style pseudocode outlines the standard beam search procedure in a general form adaptable to various domains:
def beam_search(initial_state, β, max_depth, score_function, expand_function, is_goal):
    active_beam = [(initial_state, 0.0)]  # (partial_solution, score) pairs for incomplete hypotheses
    complete_solutions = []               # finished hypotheses; kept but not expanded further

    for depth in range(1, max_depth + 1):
        candidates = []
        for partial, partial_score in active_beam:
            for successor in expand_function(partial):
                new_score = score_function(successor, partial_score)  # e.g., partial_score + log P(next | partial)
                if is_goal(successor):
                    complete_solutions.append((successor, new_score))
                else:
                    candidates.append((successor, new_score))

        # Prune the active candidates to the top β by score
        candidates.sort(key=lambda x: x[1], reverse=True)
        active_beam = candidates[:β]

        # Optional: variants may stop early once β complete solutions exist and the beam is exhausted

    # After the loop, return the best solution overall, complete or partial
    all_final = complete_solutions + active_beam
    if all_final:
        return max(all_final, key=lambda x: x[1])
    return None
This pseudocode maintains separate active and complete lists, avoids expanding completed states, and continues to max_depth before selecting the best overall. Backtracking to reconstruct full paths can be incorporated by storing predecessor pointers in each partial solution, though it is omitted here for brevity.
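One way to support such path reconstruction, sketched here under the assumption that each hypothesis is stored as a small record rather than a bare state, is to carry a predecessor pointer in every partial solution and walk it back after termination:
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Hypothesis:
    state: Any                       # the partial solution at this node
    score: float                     # cumulative score so far
    parent: Optional["Hypothesis"]   # predecessor pointer; None at the root

def reconstruct_path(hyp):
    # Walk predecessor pointers back to the root and return the path in order.
    path = []
    while hyp is not None:
        path.append(hyp.state)
        hyp = hyp.parent
    return list(reversed(path))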

Illustration with a Toy Example

Consider a simple sequence generation task with vocabulary {sat, is, on, in} and input "The cat", where the model outputs next-token probabilities at each step. Use β=2 and a 3-step search, scoring by cumulative probability product (log-probabilities would normally be summed for additivity, but products are shown here for clarity).
  • Step 1 (depth 1): Model predicts the next token: sat (0.6), is (0.4). Beam: ["The cat sat" (0.6), "The cat is" (0.4)].
  • Step 2 (depth 2): Expand "sat": on (0.7) → "The cat sat on" (0.6 × 0.7 = 0.42); in (0.3) → "The cat sat in" (0.6 × 0.3 = 0.18).
    Expand "is": sat (0.8) → "The cat is sat" (0.4 × 0.8 = 0.32); on (0.2) → "The cat is on" (0.4 × 0.2 = 0.08).
    All candidates: 0.42, 0.32, 0.18, 0.08. Prune to top 2: ["The cat sat on" (0.42), "The cat is sat" (0.32)].
  • Step 3 (depth 3): Assume an end-of-sequence (EOS) token is reachable; the model predicts its probability for each hypothesis: e.g., from "sat on": EOS (0.9) → "The cat sat on <EOS>" (0.42 × 0.9 = 0.378); other tokens (0.1) → 0.042.
    From "is sat": EOS (0.5) → "The cat is sat <EOS>" (0.32 × 0.5 = 0.16); other tokens (0.5) → 0.16.
    Top scores include 0.378 and 0.16; terminate with highest: "The cat sat on <EOS>".
This example demonstrates pruning decisions, where lower paths like "sat in" (0.18) are discarded despite potential, favoring higher cumulative scores.
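The walk-through can be reproduced in a few lines of Python; the dictionary below encodes exactly the conditional probabilities used above (the <EOS> marker and the table layout are illustrative choices):
# Step-wise conditional probabilities from the toy example.
probs = {
    "The cat":        {"sat": 0.6, "is": 0.4},
    "The cat sat":    {"on": 0.7, "in": 0.3},
    "The cat is":     {"sat": 0.8, "on": 0.2},
    "The cat sat on": {"<EOS>": 0.9, "other": 0.1},
    "The cat is sat": {"<EOS>": 0.5, "other": 0.5},
}

beam = [("The cat", 1.0)]
for step in range(3):                       # three expansion steps with beam width 2
    candidates = [(seq + " " + tok, p * q)
                  for seq, p in beam
                  for tok, q in probs.get(seq, {}).items()]
    beam = sorted(candidates, key=lambda c: c[1], reverse=True)[:2]

print(beam[0])   # ('The cat sat on <EOS>', ~0.378), matching the walk-through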

Parameters and Computational Complexity

The primary tunable parameter in beam search is the beam width, denoted as β, which specifies the maximum number of partial hypotheses or paths retained and expanded at each expansion step. A larger β enhances solution quality by allowing exploration of a broader set of promising candidates, thereby providing a closer approximation to the optimal solution, but it incurs higher computational cost due to the increased number of evaluations. Conversely, smaller values of β reduce resource demands at the expense of potentially suboptimal results, with β=1 degenerating to greedy search. Secondary parameters include the maximum depth d, which caps the length of generated sequences or paths to prevent unbounded exploration; the scoring function, typically the cumulative log-likelihood or a similar additive score used to rank hypotheses; and early stopping criteria, such as halting when all beams are complete or when a patience factor (generalizing the stopping condition) indicates sufficient convergence. These parameters allow customization to specific domains, like limiting d in sequence generation to match expected output lengths. The time complexity of beam search is O(β·b·d), where b denotes the branching factor (e.g., vocabulary size in language models) and d the maximum depth, as it involves generating and scoring β·b candidates per level across d levels before pruning back to β. Accounting for sorting or heap operations during pruning, a more precise bound is O(d·β·b·log(β·b)). The space complexity is O(β·d), required to store the active partial paths or hypotheses along with their scores. This complexity arises from the pruning mechanism, which limits the exponential growth of exhaustive search, whose complexity is O(b^d), by discarding all but the top β candidates at each level, effectively bounding the frontier size rather than allowing it to expand unchecked. In practice, this makes beam search feasible for large search spaces where full enumeration is intractable. The selection of β critically influences the trade-off between accuracy and runtime: larger beam sizes generally improve performance up to a point, after which gains diminish, with runtime scaling linearly in β.
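A quick calculation illustrates the scale of the savings; the snippet simply evaluates the β·b·d and b^d expressions from this section for an assumed branching factor and depth:
beta, b, d = 5, 10_000, 20   # beam width, branching factor (e.g., vocabulary size), max depth

scored_by_beam = beta * b * d                             # candidates scored across all levels
scored_exhaustive = sum(b ** i for i in range(1, d + 1))  # nodes in a full depth-d expansion

print(f"beam search scores roughly {scored_by_beam:,} candidates")
print(f"exhaustive search would touch about {scored_exhaustive:.2e} nodes")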

Applications

Classical Applications

Beam search found one of its earliest applications in speech recognition, particularly in the Harpy system developed by Bruce Lowerre at Carnegie Mellon University in 1976. Harpy employed beam search to generate word sequence hypotheses by traversing a precompiled network of approximately 100 million phone transitions, pruning less likely paths to focus on the most promising ones based on acoustic scores. This approach enabled efficient connected-word recognition for a vocabulary of 1,011 words, achieving real-time performance on hardware of the era and meeting DARPA's 1976 goal of recognizing 1,000 words with low error rates through pruned path selection. In machine translation, beam search became integral to statistical decoding processes during the 1990s and early 2000s, especially in systems relying on n-gram language models for fluency scoring. It facilitated the search for optimal sentence translations by maintaining a fixed-width set of partial translations (hypotheses) at each step, scoring them via translation probabilities and language model costs to approximate the maximum-probability solution. Seminal implementations, such as the Pharaoh decoder for phrase-based statistical machine translation, used beam search to handle reordering and phrase insertion, reducing decoding time from exponential complexity to near-linear in sentence length while preserving high translation quality. For instance, with a beam width of 100, Pharaoh achieved translations for 30-word sentences in under one second on 2004-era processors, a substantial speedup over full search methods. Beam search also proved effective in AI planning problems, including robot navigation, where it addresses state-space explosion in path-planning tasks. In such domains, it expands the most promising states level-by-level while discarding others outside the beam, using heuristics like distance to goal to find collision-free paths in grid or continuous environments. This mechanism allowed early systems to navigate complex mazes or steer robotic arms through high-dimensional spaces, trading optimality for scalability. Beam search was also applied in classical natural language processing tasks, such as probabilistic parsing, where it efficiently explores chart-based derivations in context-free grammars by maintaining a beam of high-probability partial parses, enabling scalable syntactic analysis in systems like those based on probabilistic CKY algorithms during the 1990s and 2000s. Overall, these classical applications demonstrated beam search's ability to accelerate decoding and planning in pre-neural systems, often achieving 10-100x speedups over unpruned searches by confining exploration to a fixed number of hypotheses per level, thus enabling practical deployment in resource-constrained environments without sacrificing essential accuracy.

Modern Applications in AI

In sequence-to-sequence models based on transformers and LSTMs, beam search serves as a key decoding strategy for generating coherent outputs in tasks such as abstractive text summarization and chatbot responses. By maintaining multiple partial hypotheses during inference, it explores diverse sequence paths to produce higher-quality summaries that capture essential content while avoiding repetition, as demonstrated in evaluations on datasets like CNN/DailyMail, where beam sizes of 4-5 yield improved ROUGE scores compared to greedy decoding. In chatbot applications, beam search combined with top-k sampling enhances response naturalness by balancing fluency and diversity, transforming model-generated text into more human-like dialogue, with empirical results showing reduced classifier detectability from 0.984 to 0.908 on embeddings. In large language models like GPT variants, beam search facilitates text generation by enabling diverse path exploration, which helps mitigate hallucinations (factually incorrect outputs) through re-ranking mechanisms that prioritize contextually faithful hypotheses. For instance, an improved beam search using natural language inference to score entailment between input and generated beams reduces hallucination rates in abstractive summarization, achieving higher human-evaluated faithfulness scores of 3.48 versus 3.37 for standard beam search on the XSum dataset. Heuristic scoring via neural network probabilities guides this process by assigning log-probabilities to tokens, favoring sequences that align with learned distributions. Recent advancements from 2023 to 2025 have integrated beam search with visual analytics and semantic augmentations to enhance interpretability and output quality in LLMs. A notable contribution from 2025 introduces an interactive visual tool for investigating beam search trees, allowing users to compare search variations (e.g., small tweaks) and apply semantic highlights for phenomena like hallucination, revealing hidden hallucination-prone branches such as erroneous substitutions in GPT-2 and BLOOM models. In speech and vision domains, beam search supports end-to-end automatic speech recognition (ASR) models by jointly optimizing connectionist temporal classification (CTC) and attention mechanisms, improving word error rates in streaming scenarios, as seen in hybrid decoders on LibriSpeech. For image captioning, context-aware variants dynamically adjust attention using generated caption prefixes, boosting CIDEr scores from 126.4 to 129.2 on MSCOCO by reducing repetition and enhancing relevance. A common challenge in beam search for these applications is length bias, where models favor shorter sequences due to higher cumulative log-probabilities, leading to suboptimal outputs in translation and generation tasks. This bias is addressed through length normalization, which divides the sequence score by its length to penalize brevity, resulting in empirical BLEU improvements of up to 3 points (e.g., from 23.3 to 26.3 at beam size 100 in Russian-English translation). Such mitigations ensure more balanced exploration, particularly in neural machine translation, where larger beams without normalization can degrade performance by over 20 points.
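Length normalization itself is a one-line adjustment. The sketch below divides the cumulative log-probability by the hypothesis length raised to a tunable exponent α; α = 1.0 corresponds to the plain per-token average described here, and the exact penalty form varies between systems, so this is an illustrative version rather than a definitive one:
import math

def length_normalized(log_prob_sum, length, alpha=1.0):
    # Divide a hypothesis's cumulative log-probability by length ** alpha.
    return log_prob_sum / (length ** alpha)

# A short hypothesis of mediocre tokens vs. a longer one of individually likelier tokens.
short_seq = sum(math.log(p) for p in [0.5, 0.5])            # 2 tokens, raw score ≈ -1.39
long_seq = sum(math.log(p) for p in [0.6, 0.6, 0.6, 0.6])   # 4 tokens, raw score ≈ -2.04

print(short_seq > long_seq)                                  # True: raw scoring favors brevity
print(length_normalized(long_seq, 4) > length_normalized(short_seq, 2))  # True: normalization prefers the longer sequence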

Variants

Deterministic Variants

Deterministic variants of beam search modify the standard breadth-first procedure to maintain non-probabilistic, rule-based exploration while addressing limitations in completeness, memory usage, or expansion order for structured search problems. These extensions preserve the core beam width parameter, which limits the number of partial solutions retained at each step, but alter the ordering, pruning, or expansion strategy to suit specific domains like automated planning. By avoiding stochastic elements, they ensure reproducible outcomes, making them suitable for optimization tasks requiring guaranteed progress toward optimal or near-optimal solutions. Beam stack search integrates backtracking into beam search using a specialized data structure called a beam stack, which enables last-in-first-out (LIFO) management of search layers to achieve completeness and optimality. The algorithm proceeds breadth-first but prunes nodes outside a beam width based on admissible cost bounds (f-cost ranges); when a layer is exhausted, it backtracks via the beam stack to revisit pruned branches systematically until an upper bound confirms no better solutions exist. This structure records one entry per search depth (except the deepest layer), ensuring linear memory usage proportional to depth times beam width, which makes it particularly suitable for depth-limited searches in large state spaces. A divide-and-conquer variant further bounds memory to four times the beam width, allowing wider beams without excessive resource demands. Local beam search adopts a population-based approach, initializing a beam of k randomly selected states and generating successors solely from those current beam members, rather than maintaining a global frontier across all paths. At each iteration, it evaluates and retains the top k most promising successors to form the next beam, promoting parallel exploration of diverse solution trajectories while limiting computational overhead. This avoids the redundancy of traditional beam search's uniform expansion by focusing on local improvements, balancing breadth and depth without a global queue. The method is better at escaping local optima than hill-climbing, as the beam introduces mild diversity through successor generation from multiple parents. Depth-first beam search prioritizes depth over uniform breadth by expanding the most promising path within the beam until a depth limit or leaf is reached, applying the beam width constraint per branch to prune low-potential alternatives at each level. Upon exhausting a branch's beam, it backtracks to the nearest ancestor and selects the next-best untried extension, mimicking depth-first search but with heuristic-guided retention of partial paths. This variant reduces memory footprint relative to full breadth-first beam search, as it processes one primary path deeply before branching, making it efficient for uneven tree structures where deep solutions are likely. It integrates limited discrepancy mechanisms to revisit near-optimal choices, ensuring coverage of high-quality paths without exhaustive exploration. The BULB variant (beam search using limited discrepancy backtracking) enhances efficiency in uneven search trees by repeatedly expanding the globally best node in the current layer until the beam limit is met, rather than filling the beam level-by-level. This best-first strategy within a bounded beam dynamically adjusts expansion toward promising areas, incorporating limited discrepancy search to bound deviations from the heuristic guide and guarantee completeness when combined with backtracking.
BULB improves upon standard beam search by minimizing wasted expansions on suboptimal branches, particularly in domains with varying costs, and supports anytime operation for progressive solution refinement. These deterministic variants find application in optimization domains such as automated planning and game AI, where they reduce redundancy through targeted expansion and pruning, enabling scalable solutions in large, deterministic environments. For instance, beam stack search solves STRIPS planning benchmarks optimally with bounded memory (e.g., a 30-step optimal plan found after expanding 261 million nodes, and a 45-step plan after 984 million nodes), avoiding the exponential growth of exhaustive search. In game AI, local and depth-first variants optimize move sequencing by limiting redundant state evaluations, as seen in heuristic-guided play where beam-constrained depth exploration cuts search time by focusing on viable trajectories without global queuing overhead. BULB further demonstrates impact in uneven planning trees, reducing node expansions by up to an order of magnitude in discrepancy-bounded scenarios compared to level-order beam search.
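As a concrete illustration of the local beam idea described above, the following sketch keeps k states and repeatedly replaces them with the best successors drawn from all current members; the objective function and neighbor generator are stand-ins rather than part of any particular planner, and current states are retained in the candidate pool so good solutions are not lost.
import random

def local_beam_search(random_state, neighbors, value, k=4, iterations=100):
    # k states evolve jointly toward high `value`; successors from all members compete for the k slots.
    beam = [random_state() for _ in range(k)]
    for _ in range(iterations):
        pool = list(beam) + [s for state in beam for s in neighbors(state)]
        beam = sorted(pool, key=value, reverse=True)[:k]
    return max(beam, key=value)

# Toy usage: maximize -(x - 3)^2 over the integers by moving one step at a time.
best = local_beam_search(
    random_state=lambda: random.randint(-50, 50),
    neighbors=lambda x: [x - 1, x + 1],
    value=lambda x: -(x - 3) ** 2,
)
print(best)   # 3, the maximizer, once the beam has had enough iterations to reach it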

Probabilistic Variants

Probabilistic variants of beam search incorporate elements of probability and randomness to address limitations of deterministic approaches, particularly in handling uncertainty during sequence generation tasks such as neural machine translation (NMT). These methods aim to balance quality and diversity by introducing stochasticity, which helps mitigate issues like mode collapse and repetitive outputs while preserving high-quality hypotheses. By sampling or penalizing based on probabilistic distributions, they enhance the diversity of generated sequences without significantly increasing computational overhead. Stochastic beam search extends standard beam search by sampling sequences from the top-β candidates using the Gumbel-Top-k trick, which draws exact samples without replacement from a factorized distribution over sequences. This approach, introduced by Kool et al. (2019), promotes diversity by avoiding the deterministic selection of the highest-probability paths, allowing for more varied outputs in tasks like neural machine translation. The method implicitly applies the Gumbel-Max trick at each step, ensuring that the sampled beams reflect the underlying probability distribution more faithfully than greedy or uniform beam expansion. Temperature-scaled beam search adjusts the softmax temperature parameter applied to model logits before beam selection, controlling the sharpness of the output distribution to introduce controlled randomness. Lower temperatures sharpen distributions for more focused exploration, while higher ones flatten them to encourage broader sampling within the beam. A notable example is local temperature beam search, which applies position-specific temperature adjustments to calibrate overconfident predictions and prevent degeneration in text generation (Lee et al., 2023). This variant dynamically tunes temperatures per beam to balance confidence and variability, outperforming global temperature scaling in maintaining sequence quality. Diverse beam search modifies the beam expansion process by introducing penalties for similarity between candidate sequences, often using metrics like n-gram overlap or determinantal point processes to enforce diversity. Proposed by Vijayakumar et al. (2016), it groups beams into diverse subsets and optimizes within each group, generating varied outputs while retaining high-probability paths. This probabilistic penalization ensures that the final beams cover a wider range of plausible sequences, particularly useful in descriptive tasks where standard beam search may converge to similar hypotheses. Best-first beam search prioritizes beam expansion based on estimated completion scores rather than breadth-first ordering, incorporating probabilistic future rewards to guide search in uncertain environments. Meister et al. (2020) developed a memory-efficient variant for NMT, where beams are selected by a heuristic combining current log-probabilities with predicted remaining sequence scores, leading to improved translation quality over uniform beam search. This approach reduces the search space by focusing on high-potential paths, achieving better BLEU scores in low-resource settings. These probabilistic variants demonstrate benefits in reducing repetition and enhancing diversity in modern large language models (LLMs).
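The stochastic ingredients shared by several of these variants are easy to isolate. The sketch below shows temperature scaling of logits followed by the Gumbel-Top-k perturbation for sampling k candidates without replacement; the logits are invented, and this is a simplified stand-in for the sequence-level machinery of stochastic beam search:
import math
import random

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: T < 1 sharpens the distribution, T > 1 flattens it.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def gumbel_top_k(log_probs, k):
    # Perturb each log-probability with Gumbel(0, 1) noise and keep the top k,
    # which samples k indices without replacement from the underlying distribution.
    perturbed = [lp - math.log(-math.log(random.random())) for lp in log_probs]
    return sorted(range(len(log_probs)), key=lambda i: perturbed[i], reverse=True)[:k]

logits = [2.0, 1.0, 0.5, 0.1]                    # illustrative next-token logits
probs = softmax(logits, temperature=0.8)
chosen = gumbel_top_k([math.log(p) for p in probs], k=2)
print(chosen)   # usually [0, 1], but other pairs occur with the appropriate probability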

History

Origins

Beam search originated in 1976 as a key component of the Harpy speech recognition system, developed at Carnegie Mellon University by Bruce T. Lowerre under the guidance of Raj Reddy and the ARPA speech understanding project team. Harpy was designed to perform connected-word recognition on continuous speech, compiling linguistic knowledge into a finite-state network to represent possible word sequences and using acoustic models to score paths through this network. The algorithm addressed the computational challenges of searching exponentially large hypothesis spaces by maintaining only a limited set of the most promising partial hypotheses at each step, thereby pruning less viable paths based on probability thresholds. The primary purpose of beam search in Harpy was to enable efficient traversal of word lattices for continuous speech recognition, particularly to manage the complexity introduced by vocabularies exceeding 1,000 words. Unlike prior systems like Hearsay-I, which relied on best-first search with backtracking, or the Dragon system, which explored all paths in parallel, beam search operated in a time-synchronous manner, expanding a "beam" of high-scoring states simultaneously to approximate the optimal path without exhaustive enumeration. This approach drew influence from earlier heuristic search methods in artificial intelligence, such as the best-first strategy and the A* algorithm, but was specifically adapted to support beam-like parallelism for real-time processing in speech tasks. The term "beam search" was first coined and formally described in a 1977 summary report on the speech understanding project, co-authored by Raj Reddy and others, where it was characterized as a graph-searching technique that prunes all but a beam of the most promising alternatives to eliminate backtracking and control exponential growth in hypotheses. This formalization highlighted its role in Harpy's network-based recognition process, which integrated acoustic, syntactic, and semantic constraints compiled ahead of time. Harpy's implementation of beam search had an immediate impact by enabling the first practical, speaker-independent recognition of continuous speech with a 1,011-word vocabulary, achieving 91% accuracy and 95% semantic accuracy across five speakers on controlled tasks. This breakthrough represented a pivotal shift in speech recognition from computationally prohibitive exhaustive methods to practical approximate heuristics, paving the way for scalable systems handling uncertainty in sequential data.

Key Developments and Adoption

In the late 1980s and 1990s, beam search spread to machine translation through the pioneering statistical models developed at IBM's Thomas J. Watson Research Center, where it facilitated decoding in early noisy-channel-based systems such as the Candide project. Refinements in scoring heuristics enhanced its practicality for handling complex hypothesis spaces in translation tasks. During the 2000s, beam search was integrated into prominent statistical frameworks, such as the Moses toolkit, which employed it as the core decoding algorithm for phrase-based machine translation to balance speed and accuracy. The 2010s witnessed a surge in beam search's adoption amid the rise of deep learning, particularly in sequence-to-sequence models for machine translation and other sequence generation tasks, as introduced by Sutskever et al. in 2014, where a left-to-right beam search decoder maintained partial hypotheses to find high-probability outputs. This momentum continued with the Transformer architecture by Vaswani et al. in 2017, which used beam search with a beam size of 4 during inference to improve translation quality on benchmarks like WMT. Theoretical contributions in the 2010s focused on beam search's approximation properties, including analyses of its error bounds and lack of formal optimality guarantees in decoding, underscoring the need for heuristics to mitigate suboptimality while preserving empirical performance. By 2025, beam search has become a cornerstone of decoding pipelines, enabling diverse applications from text generation to speech recognition systems, with active research into efficient variants such as trie-based parallel decoding for memory optimization and speculative frameworks for deployment on edge devices with limited resources.

References

  1. Lowerre, Bruce T. "The HARPY Speech Recognition System." Thesis summary, Department of Computer Science, Carnegie-Mellon University, Pittsburgh, April 1976.
  2. "Beam Search: Faster and Monotonic."
  3. "Beam Search Strategies for Neural Machine Translation." arXiv, February 6, 2017.
  4. "Learning Linear Ranking Functions for Beam Search with ..."
  5. "Best-First Beam Search." MIT Press Direct, December 1, 2020.
  6. "Learning Beam Search Policies via Imitation Learning."
  7. "Beam Search Algorithm." Baeldung on Computer Science, March 18, 2024.
  8. "Introduction to Beam Search Algorithm." GeeksforGeeks, July 23, 2025.
  9. "Introduction to the Beam Search Algorithm." Built In.
  10. "AI | Search Algorithms | BEAM Search." Codecademy, June 5, 2023.
  11. "Informed Search and Exploration" (textbook chapter).
  12. "CSPs: Beam Search." CS221, Stanford.
  13. "10.8. Beam Search." Dive into Deep Learning 1.0.3 documentation.
  14. "Best-First Beam Search." arXiv:2007.03909, July 8, 2020.
  15. "How to Generate Text: Using Different Decoding Methods for Language Models." March 1, 2020.
  16. "The Harpy Speech Understanding System."
  17. "Audrey, Alexa, Hal, and More." Computer History Museum, June 9, 2021.
  18. "Pharaoh: A Beam Search Decoder for Phrase-Based Statistical Machine Translation."
  19. "A Comparison of Greedy Search Algorithms."
  20. "Automatic Definition Extraction and Crossword Generation From ..."
  21. arXiv:2005.11009, May 22, 2020.
  22. "Transforming Chatbot Text: A Sequence-to-Sequence Approach." June 17, 2025.
  23. "Improved Beam Search for Hallucination Mitigation in Abstractive Summarization."
  24. "Visual Investigation of Beam Search Trees to Address Language ..."
  25. "Joint Beam Search Integrating CTC, Attention, and ..." arXiv:2406.02950, June 5, 2024.
  26. "Meshed Context-Aware Beam Search for Image Captioning." MDPI.
  27. "Correcting Length Bias in Neural Machine Translation."
  28. "Beam-Stack Search: Integrating Backtracking with Beam Search."
  29. "Limited Discrepancy Beam Search." IJCAI.
  30. "Complete Anytime Beam Search." ACM Digital Library.
  31. "Stochastic Beams and Where to Find Them: The Gumbel-Top-k Trick ..." March 14, 2019.
  34. "Speech Understanding Systems: Summary of Results of the Five-Year ..." January 1, 1977.
  35. "Statistical Machine Translation." Wikipedia.
  36. "Machine Translation: Revealing History & Trends of The Century." March 22, 2023.
  37. "The Alignment Template Approach to Statistical Machine Translation."
  38. "Statistical Machine Translation System User Manual and Code Guide." April 18, 2022.
  39. "Statistical Machine Translation with the Moses Toolkit." AMTA, October 22, 2014.
  40. "Sequence to Sequence Learning with Neural Networks." arXiv, December 14, 2014.
  41. "Optimal Beam Search for Machine Translation." ResearchGate.