
A* search algorithm

The A* (pronounced "A-star") search algorithm is an informed search method for finding the shortest path from an initial node to a goal node in a weighted graph, combining the strengths of uniform-cost search and greedy best-first search to efficiently explore promising paths. Developed in 1968 by Peter Hart, Nils Nilsson, and Bertram Raphael at the Stanford Research Institute, it represents a foundational advancement in artificial intelligence and pathfinding techniques. At its core, A* operates by maintaining an open set of candidate nodes to explore, prioritized by an evaluation function f(n) = g(n) + h(n), where g(n) denotes the exact path cost from the start to the current node n, and h(n) is a heuristic estimate of the remaining cost from n to the goal. The algorithm expands the node with the lowest f(n) value at each step, effectively balancing the known cost incurred so far with an optimistic prediction of future costs, which guides the search toward the goal more rapidly than uninformed methods like breadth-first or depth-first search. For h(n), common choices include the Manhattan or Euclidean distance in grid-based environments, ensuring the heuristic remains simple yet effective. A* possesses strong theoretical guarantees: it is optimal, always finding the least-cost path if the heuristic is admissible (never overestimating the true cost to the goal), and it is complete, guaranteed to find a solution if one exists in a finite graph with non-negative edge costs. These properties stem from its extension of branch-and-bound search principles, refined through heuristic guidance to prune suboptimal branches early. Widely applied in robotics, video games, geographic information systems, and network routing, A* excels in scenarios requiring efficient navigation around obstacles, though its performance can degrade in high-dimensional or very large search spaces without a well-tuned heuristic.

Background and Prerequisites

Graph Search Basics

In the context of search algorithms, a graph represents the state space of a problem, where nodes denote distinct states and directed or undirected edges signify actions or transitions between those states, each potentially associated with a cost reflecting the expense of performing the action. This modeling allows algorithms to navigate from an initial start state to one or more goal states by following paths through the graph. Graphs in search problems are typically directed to capture the one-way nature of many actions, though undirected graphs apply when transitions are reversible without cost differences. Uninformed search strategies, also known as blind search, explore the graph without any domain-specific knowledge about the goal's location, relying solely on the structure of the state space to generate and evaluate paths systematically. These methods guarantee completeness—finding a solution if one exists in a finite space—but can be inefficient in large graphs due to exhaustive exploration. Breadth-first search (BFS) is a foundational uninformed strategy that expands nodes level by level, starting from the root, using a queue to maintain the order of discovery and ensuring the shortest path in terms of number of edges when all edge costs are uniform (typically 1). Uniform-cost search (UCS), an extension of BFS, handles varying edge costs by always expanding the node with the lowest cumulative path cost from the start, employing a priority queue to prioritize cheaper paths and guaranteeing optimality in the sense of minimal total cost. Central to these algorithms are key data structures and metrics: the open list, or frontier, holds nodes generated but not yet expanded, ordered by expansion priority (a FIFO queue for BFS, a priority queue for UCS); the closed list tracks expanded nodes to avoid re-exploration and prevent cycles in graph-search variants. The path cost, denoted g(n), measures the total cost of the path from the start to a given node n, accumulated along the edges traversed.
To illustrate BFS expansion on a small graph, consider a graph with start node S connected to A (cost 1) and B (cost 1); A connected to C (cost 1) and D (cost 1); B connected to D (cost 1) and E (cost 1), where E is the goal. The algorithm begins by enqueueing S in the open queue. It expands S, adding A and B to the queue (frontier now: A, B). Next, it expands A, adding C and D (frontier: B, C, D), marking A closed. Expanding B adds E (D is already in the open list; frontier now: C, D, E), marking B closed. The algorithm then expands C (assuming no children), then D (assuming no children). Finally, it dequeues E, recognizing it as the goal. The nodes expanded are S, A, B, C, D, with the goal reached via the path S-B-E at a cost of 2. This level-order expansion highlights how BFS prioritizes shallower nodes, avoiding deeper branches until necessary.
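The trace above can be reproduced with a minimal Python sketch; the adjacency dictionary `graph` mirrors the example, and since all edge costs are uniform a plain FIFO queue suffices:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search returning the path with the fewest edges."""
    frontier = deque([start])          # FIFO open queue
    parent = {start: None}             # doubles as the visited set
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:    # backtrack via parent pointers
                path.append(node)
                node = parent[node]
            return path[::-1]
        for neighbor in graph.get(node, []):
            if neighbor not in parent:
                parent[neighbor] = node
                frontier.append(neighbor)
    return None

# The example graph from the text: S is the start, E the goal.
graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["D", "E"]}
print(bfs(graph, "S", "E"))  # ['S', 'B', 'E']
```

As in the hand trace, E is reached via B, and leaf nodes C and D contribute no successors.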

Heuristic Functions

In the context of search algorithms like A*, a heuristic function h(n) provides an estimate of the minimum cost required to reach the goal from a given node n. This estimate guides the search by prioritizing nodes that appear closer to the goal, thereby improving efficiency over uninformed methods. The original formulation of A* emphasizes the use of such heuristics to incorporate domain-specific knowledge into the search process. For A* to guarantee finding an optimal solution, the heuristic must be admissible, meaning h(n) \leq h^*(n) for every node n, where h^*(n) denotes the true minimum cost from n to the goal; an admissible heuristic never overestimates the actual cost. Additionally, a heuristic is consistent if it satisfies the triangle inequality: h(n) \leq c(n, n') + h(n') for every node n and successor n' reached via an edge with cost c(n, n'), ensuring that the heuristic values do not contradict the step costs along any path. Consistency implies admissibility, and both properties ensure that A* expands nodes in a manner that preserves optimality. Common examples of admissible heuristics include the straight-line (Euclidean) distance between two points in a plane for pathfinding problems with free movement, as this distance represents the shortest possible path and thus never overestimates the true cost. In grid-based environments with orthogonal movement, the Manhattan distance—calculated as the sum of absolute differences in x and y coordinates—serves as an admissible heuristic, since it equals the minimum number of steps needed without obstacles. For the 8-puzzle, a sliding tile problem, the number of misplaced tiles (relative to the goal configuration) is an admissible heuristic, as correcting each misplaced tile requires at least one move, providing a lower bound on the remaining cost. These examples illustrate how heuristics can be derived from relaxed versions of the problem, where constraints are ignored to compute an easily evaluable underestimate.
The quality of a heuristic significantly affects search performance: a perfect heuristic, where h(n) = h^*(n) for all n, would direct A* straight to the goal with minimal exploration, though such exact knowledge is typically unavailable. Admissible heuristics balance accuracy with the optimality guarantee, but weaker ones (closer to zero) lead to broader searches, while stronger admissible heuristics (closer to h^* without exceeding it) prune more irrelevant branches. Inadmissible heuristics, which may overestimate, can accelerate search in practice by focusing aggressively on promising areas but risk suboptimal solutions unless modified (e.g., via weighted variants). Compared to uniform-cost search (UCS), which expands nodes solely by accumulated path cost g(n) from the start and effectively uses h(n) = 0, a positive admissible heuristic reduces the number of expanded nodes by estimating the remaining cost and biasing the search toward the goal, effectively combining breadth in low-g(n) areas with depth toward the goal; this can dramatically lower computational demands in large spaces, as demonstrated in early analyses showing exponential reductions in node expansions for informed versus uninformed searches.
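The three example heuristics above can be sketched as plain Python functions; the grid coordinates and 8-puzzle tile tuples in the usage lines are illustrative inputs, not taken from the text:

```python
import math

def euclidean(node, goal):
    """Straight-line distance: admissible for free planar movement."""
    return math.hypot(goal[0] - node[0], goal[1] - node[1])

def manhattan(node, goal):
    """Sum of axis distances: admissible for 4-way grid movement."""
    return abs(goal[0] - node[0]) + abs(goal[1] - node[1])

def misplaced_tiles(state, goal):
    """8-puzzle: each misplaced tile needs at least one move (blank excluded)."""
    return sum(1 for s, g in zip(state, goal) if s != g and s != 0)

# Euclidean never exceeds Manhattan, so both lower-bound a 4-way grid path.
assert euclidean((0, 0), (3, 4)) <= manhattan((0, 0), (3, 4))
print(manhattan((0, 0), (3, 4)))                     # 7
print(misplaced_tiles((1, 2, 3, 4, 5, 6, 8, 7, 0),
                      (1, 2, 3, 4, 5, 6, 7, 8, 0)))  # 2
```

Each function is derived from a relaxation: Euclidean ignores the movement grid, Manhattan ignores obstacles, and the misplaced-tile count ignores the sliding constraint entirely.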

Historical Development

Invention and Early Contributions

The A* search algorithm was invented in 1968 by Peter Hart, Nils Nilsson, and Bertram Raphael at the Stanford Research Institute (SRI), as part of the Shakey the Robot project, which aimed to develop the world's first mobile robot capable of reasoning about its environment and actions. Shakey, operational from 1966 to 1972, required robust pathfinding capabilities to navigate cluttered indoor spaces autonomously, marking a pioneering effort in mobile robotics and AI. The primary motivation for A* stemmed from the need to enhance search efficiency for robotic navigation in uncertain, real-world environments, where exhaustive searches were computationally prohibitive. Building on Edsger Dijkstra's algorithm for shortest paths in graphs, which guaranteed optimality but expanded nodes inefficiently without guidance, Hart, Nilsson, and Raphael integrated heuristic estimates to prioritize promising paths while preserving solution quality. This innovation distinguished A* by leveraging domain-specific knowledge through heuristics, addressing limitations in prior uniform-cost methods for practical applications like obstacle avoidance in Shakey's trials. The algorithm's formal foundation was first detailed in the seminal paper "A Formal Basis for the Heuristic Determination of Minimum Cost Paths," published in the IEEE Transactions on Systems Science and Cybernetics in 1968, where the authors introduced the evaluation function f(n) = g(n) + h(n) to guide node selection—g(n) representing the exact cost from the start to n, and h(n) a heuristic estimate of the cost to the goal. This work emerged amid broader challenges in artificial intelligence and robotics during the late 1960s, a period focused on solving complex search problems in discrete spaces, such as game playing and network routing, following the 1956 Dartmouth workshop on AI.

Evolution and Key Milestones

Following its invention in 1968, the A* algorithm saw significant theoretical refinements and integrations into early AI systems during the 1970s and 1980s. Initially developed for pathfinding in the Shakey project at SRI, A* was extended to broader planning frameworks, including extensions of systems like the General Problem Solver through heuristic enhancements in problem-solving architectures. Researchers proved key properties such as optimality under admissible and consistent heuristics, with Pohl's work in the early 1970s exploring bidirectional variants to improve efficiency in heuristic search. A pivotal milestone came in 1985 with Richard E. Korf's introduction of depth-first iterative-deepening A* (IDA*), which addressed memory limitations of standard A* while preserving admissibility and optimality for tree searches, demonstrated through experiments showing node expansions matching A* but with linear space usage under monotonic heuristics. In 1984, Judea Pearl's comprehensive survey on heuristic search strategies solidified A*'s foundational role in the literature, analyzing its performance bounds and comparisons to uninformed methods across diverse problem domains. In the 1990s, A* gained widespread adoption in practical applications, particularly in computer games and robotics, marking a shift from theoretical tool to real-world utility. This popularity extended to mobile robotics, where A* facilitated obstacle-avoiding path planning, integrating sensor data with heuristic estimates for robust performance in unstructured terrains. These applications highlighted A*'s versatility, with optimizations like hierarchical abstractions reducing search spaces in large-scale game worlds and robotic maps. From the 2000s onward, adaptations of A* addressed challenges in large-scale graphs, powering applications such as GPS navigation systems in consumer devices.
Commercial routing engines, like those in early in-car and handheld GPS units (post-2000), leveraged A* variants with landmark heuristics to compute optimal routes over massive road networks, achieving sub-second query times for continental-scale maps. Efficiency tweaks, including bidirectional A*—refined from Pohl's foundations—became standard for symmetric graphs, meeting searches from start and goal to halve effective branching factors in practice. These developments enabled A*'s integration into mobile apps and autonomous vehicles, with ongoing research focusing on anytime variants for time-critical scenarios. A notable milestone was its use in the DARPA Grand Challenge (2004–2005), where A*-based path planning contributed to early successes in autonomous off-road navigation. As of 2025, A* variants continue to underpin advancements in self-driving cars and drone systems.

Algorithm Description

Core Mechanism

The A* search algorithm operates as an informed best-first search method, guiding the exploration of a search space by evaluating nodes based on a combined cost function f(n) = g(n) + h(n), where g(n) represents the exact path cost from the start to the current node n, and h(n) is a heuristic estimate of the remaining cost from n to the goal node. This formulation balances the known accumulated cost with an optimistic prediction of future expenses, enabling efficient prioritization toward promising paths while ensuring optimality under admissible heuristics. The process begins by initializing an open set, typically a priority queue, with the start node, assigning it g(start) = 0 and f(start) = h(start). In each iteration, the node with the lowest f(n) is selected for expansion from the open set and moved to a closed set to avoid reprocessing. For each neighbor of the expanded node, the algorithm computes a tentative g value as the expanded node's g plus the edge cost to the neighbor; if this yields a lower g than previously known (or if the neighbor is unvisited), the neighbor's g and f are updated, and it is added to or reprioritized in the open set. This continues until the goal node is selected for expansion, at which point the optimal path is reconstructed by backtracking from the goal using parent pointers. When multiple nodes share the same lowest f(n), tie-breaking strategies come into play to influence search behavior, often prioritizing nodes with higher g(n) to favor deeper exploration, or scaling the heuristic slightly to nudge the search toward the goal. The choice depends on the application, such as emphasizing depth in expansive spaces or straight-line progress in grid-based pathfinding. Conceptually, A* expands the search tree by growing a frontier that advances unevenly, directing effort toward the goal while accounting for actual costs. This contrasts with uniform-cost search (UCS), which expands nodes by lowest g(n) alone and spreads symmetrically outward from the start in a breadth-first manner regardless of goal direction, and with greedy best-first search, which selects solely by lowest h(n) and may chase suboptimal paths or become trapped by obstacles.
This hybrid approach results in a more focused expansion pattern, often visualized as a directed wavefront that efficiently converges on the goal in weighted graphs.

Pseudocode

The standard pseudocode for the A* search algorithm outlines the steps to find an optimal path from a start node to a goal node in a weighted graph, using an evaluation function f(n) = g(n) + h(n), where g(n) is the cost from the start to n, and h(n) is the heuristic estimate from n to the goal.
function A*(start, goal)
    // The set of discovered nodes that may need to be re-expanded, ordered by f
    openSet ← priority_queue()  // Typically a min-heap by f(n)
    // The set of nodes already evaluated
    closedSet ← empty set
    // For each node, the cost of the cheapest known path from start
    gScore ← map with default ∞, with gScore[start] ← 0
    // For each node, track its f score
    fScore ← map with default ∞, with fScore[start] ← h(start)
    // For each node, track its immediate predecessor
    cameFrom ← map with default null
    openSet.insert(start)

    while openSet is not empty
        current ← openSet.extract_min()  // Node with lowest fScore
        if current == goal
            return reconstruct_path(cameFrom, goal)

        // Typically, we would add current to closedSet here, but we defer to avoid unnecessary checks
        for each neighbor in neighbors(current)  // Assuming adjacency list representation
            tentative_gScore ← gScore[current] + distance(current, neighbor)
            if tentative_gScore < gScore[neighbor]
                // This path to neighbor is better than any previous one
                cameFrom[neighbor] ← current
                gScore[neighbor] ← tentative_gScore
                fScore[neighbor] ← gScore[neighbor] + h(neighbor)
                if neighbor not in openSet  // Or in closedSet, but we allow reopening for general admissibility
                    openSet.insert(neighbor)
                // Note: If using a heap, decrease-key may be needed for efficiency if already present

    return null  // No path found

function reconstruct_path(cameFrom, current)
    total_path ← empty list
    while current is not null
        total_path.prepend(current)
        current ← cameFrom[current]
    return total_path
This implementation handles the closed set implicitly through the gScore check: if a neighbor has already been expanded (i.e., its gScore is final under a consistent heuristic), a worse path will be skipped; otherwise, better paths trigger re-insertion into the open set for re-expansion. To prevent infinite loops in graphs with cycles, the algorithm relies on non-negative edge weights and the gScore updates, which ensure costs do not decrease indefinitely; an explicit closed-set check can be added as a variation when re-expansion is unnecessary, as with a consistent heuristic. The priority queue for the open set, such as a binary heap, enables efficient extraction of the minimum f-score node in O(\log |V|) time per operation, assuming the graph has |V| vertices.
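The pseudocode translates into a compact runnable sketch, here in Python with `heapq`. Duplicate heap entries stand in for decrease-key (stale ones are skipped on extraction); the small graph and the `estimate` table at the bottom are hypothetical illustrations, not from the text:

```python
import heapq

def a_star(neighbors, h, start, goal):
    """A* following the pseudocode above; neighbors(n) yields (n', cost) pairs."""
    g = {start: 0.0}
    came_from = {start: None}
    open_heap = [(h(start), start)]            # (fScore, node)
    while open_heap:
        f, current = heapq.heappop(open_heap)
        if current == goal:
            path = []
            while current is not None:         # reconstruct via predecessors
                path.append(current)
                current = came_from[current]
            return path[::-1], g[goal]
        if f > g[current] + h(current):        # stale entry: a better path exists
            continue
        for nxt, cost in neighbors(current):
            tentative = g[current] + cost
            if tentative < g.get(nxt, float("inf")):
                g[nxt] = tentative
                came_from[nxt] = current
                heapq.heappush(open_heap, (tentative + h(nxt), nxt))
    return None, float("inf")                  # no path found

# Tiny weighted graph; `estimate` is a hypothetical admissible heuristic to D.
edges = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)]}
estimate = {"A": 3, "B": 2, "C": 1, "D": 0}
path, cost = a_star(lambda n: edges.get(n, []), estimate.__getitem__, "A", "D")
print(path, cost)  # ['A', 'B', 'C', 'D'] 4.0
```

Note how the direct edge B→D (cost 5) is discovered first but superseded by the cheaper detour through C, exactly the "better path triggers re-insertion" behavior described above.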

Illustrative Example

To illustrate the A* algorithm, consider a 5x5 grid where each cell represents a node, and movement is allowed to adjacent cells (up, down, left, right) at a uniform cost of 1, with no diagonal moves. The start node is at position (0,0) in the top-left corner, and the goal is at (4,4) in the bottom-right corner. Obstacles occupy cells (1,2) and (2,1), blocking direct paths through the center. The heuristic function used is the Manhattan distance: h(n) = |x_g - x_n| + |y_g - y_n|, where (x_g, y_g) is the goal position and (x_n, y_n) is the current node's position. This heuristic is admissible and consistent for grid pathfinding with uniform costs, guiding the search efficiently toward the goal. The algorithm begins with the open set containing only the start node (0,0), where g(0,0) = 0 (path cost from start), h(0,0) = 8, and f(0,0) = g + h = 8. The closed set is empty. Nodes are selected for expansion based on the lowest f-value, with ties broken arbitrarily (e.g., by row-major order). Neighbors are generated only if they are unvisited, not obstacles, and within grid bounds; if a better path is found to a neighbor already in the open set, its g and f are updated. At each step, the open set (fringe) tracks candidate nodes with their g, h, and f values, while the closed set holds expanded nodes to avoid re-expansion. The following trace shows the first few expansions until the goal is reached, using tables for clarity. Only relevant nodes are listed; full grid visualization would show the path weaving around obstacles via the top and right edges.

Step 1: Expand (0,0)
Open set after expansion:
| Node  | g | h | f |
|-------|---|---|---|
| (0,1) | 1 | 7 | 8 |
| (1,0) | 1 | 7 | 8 |
Closed set: {(0,0)}

Step 2: Expand (0,1) (lowest f=8, first in tie)
Neighbors: (0,2), (1,1), (0,0) (ignored, closed). (1,1) is reachable.
Open set after expansion:
| Node  | g | h | f |
|-------|---|---|---|
| (1,0) | 1 | 7 | 8 |
| (0,2) | 2 | 6 | 8 |
| (1,1) | 2 | 6 | 8 |
Closed set: {(0,0), (0,1)}

Step 3: Expand (1,0) (next lowest f=8)
Neighbors: (1,1) (already in open with g=2 via (0,1); the path via (1,0) also gives g=2, so no change), (2,0) (added), (0,0) (closed), (1,-1) (out of bounds).
Open set after expansion:
| Node  | g | h | f |
|-------|---|---|---|
| (0,2) | 2 | 6 | 8 |
| (1,1) | 2 | 6 | 8 |
| (2,0) | 2 | 6 | 8 |
Closed set: {(0,0), (0,1), (1,0)}

Step 4: Expand (0,2) (f=8)
Neighbors: (0,3), (1,2) (obstacle, ignored), (0,1) (closed).
Open set after expansion:
| Node  | g | h | f |
|-------|---|---|---|
| (1,1) | 2 | 6 | 8 |
| (2,0) | 2 | 6 | 8 |
| (0,3) | 3 | 5 | 8 |
Closed set: {(0,0), (0,1), (1,0), (0,2)}

Subsequent expansions continue similarly: (1,1) leads to (1,0) (closed) and (1,2), (2,1) (obstacles, ignored), adding no new low-f paths; (2,0) adds (3,0) and skips (2,1); (0,3) adds (0,4) and (1,3). The search progresses along the top row to (0,4) (g=4, h=4, f=8), then down the right edge: (1,4) (g=5, h=3, f=8), (2,4) (g=6, h=2, f=8), (3,4) (g=7, h=1, f=8), and finally (4,4) (g=8, h=0, f=8) enqueued with f=8. Upon selecting (4,4) for expansion, the goal is reached, and backtracking via parent pointers yields the optimal path: (0,0) → (0,1) → (0,2) → (0,3) → (0,4) → (1,4) → (2,4) → (3,4) → (4,4), with total cost 8. This example demonstrates A*'s node selection: it prioritizes nodes with f=8 throughout, expanding only those on or near the optimal path. In total, A* expands roughly 9 nodes to find the goal (the exact count depends on tie-breaking), compared to breadth-first search (BFS), which would expand over 15 nodes in this grid by exploring all levels indiscriminately before reaching depth 8. This efficiency arises from the heuristic guiding expansion toward promising areas, reducing unnecessary detours around obstacles.
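A minimal script can reproduce the grid example (same obstacles, start, and goal) and confirm the optimal cost of 8. The exact set of expanded nodes depends on tie-breaking among the many f=8 candidates, so it may differ slightly from the hand trace:

```python
import heapq

# The 5x5 grid from the example: uniform cost 1, 4-way moves,
# obstacles at (1,2) and (2,1), start (0,0), goal (4,4).
OBSTACLES = {(1, 2), (2, 1)}
SIZE, START, GOAL = 5, (0, 0), (4, 4)

def manhattan(p):
    return abs(GOAL[0] - p[0]) + abs(GOAL[1] - p[1])

def grid_a_star():
    g = {START: 0}
    parent = {START: None}
    heap = [(manhattan(START), START)]
    while heap:
        f, cur = heapq.heappop(heap)
        if f > g[cur] + manhattan(cur):        # stale duplicate entry
            continue
        if cur == GOAL:
            path = []
            while cur is not None:             # backtrack via parent pointers
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE):
                continue                       # outside the grid
            if nxt in OBSTACLES:
                continue
            if g[cur] + 1 < g.get(nxt, 1 << 30):
                g[nxt] = g[cur] + 1
                parent[nxt] = cur
                heapq.heappush(heap, (g[nxt] + manhattan(nxt), nxt))
    return None

path = grid_a_star()
print(len(path) - 1)   # total cost: 8
```

Since the Manhattan estimate at the start already equals 8, every node on an optimal route carries f=8, matching the constant f-values seen in the trace.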

Implementation Details

Implementing the A* search algorithm requires careful selection of data structures to ensure efficiency in managing the open and closed sets, as well as handling path costs and reconstructions. The open set, which holds nodes to be expanded, is typically implemented using a priority queue such as a binary heap, enabling insertion and extraction of the node with the minimum f(n) = g(n) + h(n) value in O(log N) time, where N is the number of nodes in the set. This structure is crucial for the best-first search behavior of A*, prioritizing nodes likely to lead to the goal. The closed set, containing expanded nodes to prevent re-exploration, along with a hash table for storing g(n) values (the exact cost from the start to each node), allows for O(1) average-time lookups and avoids recomputing paths to already-visited states. When a potential better path to a node is discovered during expansion, implementations often use a lazy update approach: rather than performing a decrease-key operation on the priority queue (which can be O(log N) or more complex), a duplicate entry for the node is inserted into the open set, and upon extraction, it is discarded if a lower g(n) already exists in the hash table. This simplifies coding while managing ties in f(n) values, though it may lead to multiple entries for the same node. For memory management, each node is stored as a data structure including the state, parent pointer, action taken to reach it, and path cost g(n), facilitating O(1) backtracking from the goal node to reconstruct the full path once found. Parent pointers form a tree or graph of predecessors, enabling efficient path recovery without storing the entire path during search. Common pitfalls in A* implementations include infinite loops in graphs with cycles, which are prevented by consistently checking the closed set before expanding a node to ensure states are not revisited.
Another issue arises with floating-point precision when dealing with non-integer costs in g(n) or h(n), potentially causing comparison errors in priority queues or hash table keys; this can be mitigated by using integer approximations or careful epsilon comparisons where necessary. These practices, building on the standard pseudocode framework, ensure robust and efficient implementations in real-world applications.
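The lazy update approach described above can be sketched with `heapq`; the `push`/`pop` helpers and the node name are illustrative:

```python
import heapq

# Lazy-update pattern: instead of decrease-key, push a duplicate entry
# and discard stale ones on extraction.
open_heap = []
best_g = {}

def push(node, g, h):
    best_g[node] = g
    heapq.heappush(open_heap, (g + h, g, node))   # duplicates allowed

def pop():
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if g == best_g.get(node):                 # entry still up to date?
            return node, g
        # otherwise a cheaper entry for `node` exists: silently discard
    return None

push("n1", 5, 2)      # first discovered path to n1, f = 7
push("n1", 3, 2)      # better path found later, f = 5 — no decrease-key needed
node, g = pop()
print(node, g)        # n1 3
```

The second `pop()` would drain the stale (f=7) entry and return `None`, showing that duplicates cost only extra heap space, never correctness.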

Special Cases

A* search maintains its optimality guarantees in graphs with non-uniform edge costs, provided all costs are positive (greater than some δ > 0) and the heuristic function h(n) is admissible, meaning it never overestimates the true cost to the goal. For instance, in a scenario involving a road network with highways (low-cost edges, e.g., cost of 1 per unit distance) and rough terrain (high-cost edges, e.g., cost of 5 per unit distance), A* efficiently prioritizes low-cost paths while using the admissible heuristic to guide exploration toward the goal, ensuring the shortest total path is found without exhaustive search. In graphs containing cycles, A* handles potential loops through its closed set, which tracks expanded nodes to avoid redundant processing, though it may reopen nodes if a lower-cost path to them is discovered during search. This mechanism ensures termination and optimality under the positive cost assumption, as infinite loops are prevented by the non-decreasing path costs. However, graphs with negative edge weights render A* infeasible for optimality, as negative costs can lead to non-termination or suboptimal solutions; such cases violate the core assumptions of the algorithm and require alternative methods like Bellman-Ford for shortest paths. For dynamic environments where edge costs change during execution, adaptations like the D* variant enable efficient replanning by incrementally updating only the affected portions of the path, maintaining A*-like optimality while accommodating modifications such as newly discovered obstacles or terrain shifts. When dealing with incomplete information, such as edge costs modeled by probability distributions, A* can be adapted using upper-confidence bounds to incorporate uncertainty into the search; this risk-aware approach selects paths that bound the potential worst-case costs, ensuring robust planning in stochastic settings.

Theoretical Properties

Completeness and Termination

The A* search algorithm is complete, meaning that if a solution exists in a finite state space, A* is guaranteed to find it. Completeness ensures that the algorithm does not overlook any reachable goal state. Additionally, A* terminates, halting either upon discovering a solution or concluding that no path exists to the goal. These properties hold under specific conditions, including an admissible heuristic function and positive edge costs greater than some small ε > 0, as established in the original formulation of the algorithm. A key requirement for completeness and termination is a finite branching factor, which limits the number of successors per node, combined with strictly positive edge costs to prevent infinite descending paths. Under these assumptions, along with an admissible heuristic, the evaluation function values (f-values) of successively expanded nodes are non-decreasing and bounded above by the optimal path cost to the goal. This bounds the number of nodes that can be expanded before a solution is found or all possibilities exhausted, as only a finite number of nodes can have f-values below this bound given the minimum cost increment ε. Without positive edge costs, A* can fail to terminate, entering infinite loops in graphs containing zero-cost cycles where nodes are repeatedly expanded without progress. For instance, a cycle with zero-cost edges allows g-values to remain unchanged, causing the algorithm to revisit those states indefinitely. In contrast to incomplete algorithms like depth-first search, which may diverge in infinite or cyclic spaces by pursuing arbitrarily deep paths without backtracking sufficiently, A* systematically explores nodes in order of increasing f-values, ensuring progress toward termination in compliant environments.

Admissibility

An admissible heuristic function h(n) for a node n in A* search is defined as one that never overestimates the true minimal cost to reach the goal, formally satisfying h(n) \leq h^*(n) for all nodes n, where h^*(n) denotes the exact lowest cost from n to any goal state. This property ensures that A* can find an optimal solution when using such a heuristic, as the algorithm prioritizes paths without prematurely discarding promising ones. The non-overestimation guarantee of admissibility directly supports A*'s optimality. Consider the optimal path from start to goal with total cost C^*. For any node n on this path, the evaluation function f(n) = g(n) + h(n), where g(n) is the cost from start to n, satisfies f(n) \leq g(n) + h^*(n) = C^* due to admissibility. Thus, A* expands nodes along the optimal path with f(n) \leq C^* before any suboptimal path with higher f, preventing the algorithm from terminating prematurely on a non-optimal solution. This holds assuming non-negative edge costs and h(\text{goal}) = 0. A classic example of an admissible heuristic is the Manhattan distance in uniform-cost grid pathfinding, where moves are restricted to four directions with unit cost: h(n) = |x_n - x_g| + |y_n - y_g|, with (x_n, y_n) as the coordinates of n and (x_g, y_g) as the goal. This underestimates or equals the true path cost by ignoring obstacles, providing a valid lower bound. In contrast, scaling it inadmissibly, such as h'(n) = 2 \times (|x_n - x_g| + |y_n - y_g|), overestimates costs in obstacle-free scenarios where the true cost equals the Manhattan distance, potentially causing A* to overlook the optimal path in favor of faster but suboptimal exploration. To test or generate admissible heuristics, one approach formulates a relaxed problem by removing constraints from the original (e.g., allowing free movement without obstacles in pathfinding), then computes the optimal solution cost to this simpler problem as h(n). Since any solution to the original problem remains feasible in the relaxed problem, the relaxed optimum can only be cheaper, so this cost serves as a lower bound, ensuring admissibility.
Admissibility also maintains A*'s completeness, as the search explores all necessary nodes without overestimation leading to infinite loops in finite spaces.
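Admissibility can also be checked empirically: compute the true remaining cost h*(n) for every cell with a backward BFS from the goal, then verify that the Manhattan distance never overestimates while its doubled version does. The grid layout below is borrowed from the earlier illustrative example:

```python
from collections import deque

# True remaining costs h*(n) on a 5x5 grid via BFS backwards from the goal.
SIZE, GOAL = 5, (4, 4)
OBSTACLES = {(1, 2), (2, 1)}

def true_costs():
    dist = {GOAL: 0}
    queue = deque([GOAL])
    while queue:
        x, y = queue.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE
                    and nxt not in OBSTACLES and nxt not in dist):
                dist[nxt] = dist[(x, y)] + 1
                queue.append(nxt)
    return dist

def manhattan(n):
    return abs(GOAL[0] - n[0]) + abs(GOAL[1] - n[1])

h_star = true_costs()
# Manhattan never overestimates h*: admissible.
print(all(manhattan(n) <= c for n, c in h_star.items()))       # True
# Doubling it overestimates (e.g. one step from the goal): inadmissible.
print(all(2 * manhattan(n) <= c for n, c in h_star.items()))   # False
```

This is exactly the relaxed-problem argument in executable form: Manhattan distance solves the obstacle-free relaxation, so it can only undercut the true obstacle-aware cost.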

Optimality and Consistency

The optimality of the A* algorithm is guaranteed when the heuristic function h is admissible, meaning h(n) \leq h^*(n) for every node n, where h^*(n) denotes the optimal cost from n to the goal. Under this condition, the first time A* expands a goal node, the path cost g to that node equals the optimal path cost C^* from the start to the goal. This result follows from a proof by contradiction: suppose a suboptimal goal node s_g with g(s_g) > C^* is expanded before the optimal one; then, along the optimal path, some node n on that path would have f(n) = g(n) + h(n) \leq C^* < g(s_g) = f(s_g), implying n should have been expanded earlier due to A*'s priority selection by lowest f, leading to a contradiction. A stronger property, known as consistency (or monotonicity), holds if the heuristic satisfies h(n) \leq c(n, a, n') + h(n') for every node n, action a, and successor n', where c(n, a, n') is the cost of the edge from n to n'. A consistent heuristic ensures that f values are non-decreasing along any path in the state space, which implies that A* never needs to re-expand nodes once they are placed in the closed set, as any later path to the same node cannot yield a lower f value. This property enhances efficiency in graph search implementations by avoiding redundant expansions. Consistent heuristics are always admissible, as the consistency condition can be iteratively applied along any path from a node n to the goal, yielding h(n) \leq (sum of edge costs to goal) + h(\text{goal}) = h^*(n), akin to the triangle inequality in graph distances. Conversely, admissible heuristics are not necessarily consistent, and using an inconsistent but admissible h requires A* to potentially reopen nodes in the closed set (i.e., re-insert them into the open set with improved path costs) to maintain optimality, which can increase computational overhead.
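The consistency condition can likewise be verified mechanically. This sketch checks h(n) \leq c(n, n') + h(n') over every unit-cost edge of a small obstacle-free grid for the Manhattan heuristic (the grid size and goal are illustrative assumptions):

```python
# Verify consistency of the Manhattan heuristic on a 5x5 unit-cost grid:
# h(n) <= c(n, n') + h(n') must hold for every edge in both directions,
# which for unit costs is |h(a) - h(b)| <= 1.
SIZE, GOAL = 5, (4, 4)

def manhattan(n):
    return abs(GOAL[0] - n[0]) + abs(GOAL[1] - n[1])

def edges():
    """Yield each undirected grid edge once (right and down neighbors)."""
    for x in range(SIZE):
        for y in range(SIZE):
            for nxt in ((x + 1, y), (x, y + 1)):
                if nxt[0] < SIZE and nxt[1] < SIZE:
                    yield (x, y), nxt

consistent = all(abs(manhattan(a) - manhattan(b)) <= 1 for a, b in edges())
print(consistent)  # True
```

Since a single grid move changes the Manhattan distance by exactly one, the inequality holds with equality along moves toward the goal, matching the triangle-inequality reading of consistency.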

Complexity Analysis

Time Complexity

The time complexity of the A* search algorithm is primarily determined by the number of nodes expanded and generated during the search process. In the worst case, A* has a time complexity of O(b^d), where b is the (maximum) branching factor of the search tree and d is the depth of the shallowest goal node; this matches the complexity of breadth-first search, and occurs when the heuristic function provides no guidance, such as when h(n) = 0 for all nodes n. The number of node expansions in A* is bounded by the number of nodes n for which f(n) = g(n) + h(n) < C^*, where g(n) is the path cost from the start node to n, h(n) is the heuristic estimate from n to the goal, and C^* is the optimal path cost to the goal; due to the admissibility of h, A* never expands nodes with f(n) > C^*, ensuring that expansions remain focused on paths at least as promising as the optimal one. If the step costs in the graph are bounded below by some minimum value \epsilon > 0, then the number of such nodes is at most |\{n : f(n) < C^*\}| \leq b^{C^*/\epsilon}, as these nodes cannot exceed the size of the search tree up to depth C^*/\epsilon. In practice, implementing the open set (priority queue) with a binary heap adds an O(\log m) factor per expansion and insertion, where m is the maximum number of nodes stored, but the overall cost remains dominated by the total number of expansions for tree-like searches. With a good admissible and consistent heuristic, the effective branching factor b^* < b (empirically determined by fitting 1 + b^* + (b^*)^2 + \cdots + (b^*)^d \approx N, where N is the number of expanded nodes) can significantly reduce the complexity to O((b^*)^d), often achieving near-linear performance when h(n) closely approximates the true cost h^*(n). This worst-case bound is not exceeded because admissibility prevents expansion of suboptimal paths beyond the optimal cost threshold.
However, A* cannot in general run asymptotically faster than exponential time: distinguishing the optimal path among exponentially many alternatives in a binary tree (where b = 2) can require examining \Omega(2^d) nodes in the worst case to verify optimality.
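The effective branching factor mentioned above can be estimated numerically. This sketch follows the common textbook convention of solving N + 1 = 1 + b^* + (b^*)^2 + \cdots + (b^*)^d for b^* by bisection; the sample numbers (52 expansions, depth 5) are purely illustrative:

```python
def effective_branching_factor(nodes_expanded, depth, tol=1e-6):
    """Solve N + 1 = 1 + b* + b*^2 + ... + b*^d for b* by bisection.
    b* summarizes how focused the heuristic kept the search."""
    target = nodes_expanded + 1

    def total(b):
        # Size of a uniform tree of branching factor b and depth d.
        return sum(b ** i for i in range(depth + 1))

    lo, hi = 1.0, float(target)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative numbers: a search that expands 52 nodes to reach a goal
# at depth 5 has b* of about 1.92, far below a raw branching factor of
# 3 or 4, quantifying the heuristic's pruning power.
print(round(effective_branching_factor(52, 5), 2))  # 1.92
```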

Space Complexity

The space complexity of the A* search algorithm primarily arises from the need to maintain the open set (fringe of nodes to explore) and the closed set (visited nodes), with the open set often serving as the dominant factor due to its potential growth. In the worst case, these sets can store all nodes generated up to the solution depth, leading to O(b^d) space usage, where b is the maximum branching factor and d is the depth of the shallowest goal node. This occurs in highly branching search spaces without effective pruning from the heuristic. To mitigate this, optimizations such as beam search variants limit the open set to a fixed beam width k, constraining space to O(k \cdot d) by discarding less promising nodes at each level, though this may compromise completeness or optimality. Similarly, Iterative Deepening A* (IDA*) trades increased time for reduced space by using recursive depth-limited searches with a monotonically increasing cost bound, achieving linear space complexity of O(d) via the recursion stack alone, without explicit open or closed lists. In practical implementations, particularly on grid-based domains like pathfinding in game maps, the space complexity often behaves closer to O(\text{width} \times \text{depth}), reflecting the linear frontier expansion along the grid's dimensions rather than full exponential growth, assuming uniform connectivity. Efficient data structures further help; using hash tables for the closed set stores only unique visited states, reducing effective space to O(\text{nodes visited}) by avoiding redundant storage of coordinates or paths. Compared to depth-first search (DFS), which requires only O(d) space for the single path stack, A* generally demands more memory due to its broader exploration, but a well-chosen heuristic can limit expansions enough to make it more space-efficient than exhaustive methods like breadth-first search in informed scenarios.
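To make the memory discussion concrete, the sketch below runs A* on a small open grid and records the peak combined size of the open and closed sets. The grid dimensions and the tie-breaking rule (preferring larger g among equal f values, a common trick that keeps the frontier narrow on uniform grids) are choices made for this illustration:

```python
import heapq

def astar_peak_memory(width, height, start, goal):
    """A* on an open 4-connected unit-cost grid, tracking the peak
    combined size of the open and closed sets."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan

    open_heap = [(h(start), 0, start)]          # entries are (f, -g, node)
    best_g = {start: 0}
    closed = set()
    peak = 0
    while open_heap:
        peak = max(peak, len(open_heap) + len(closed))
        f, neg_g, (x, y) = heapq.heappop(open_heap)
        g = -neg_g
        if (x, y) == goal:
            return g, peak
        if (x, y) in closed:
            continue
        closed.add((x, y))
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height:
                g2 = g + 1
                if g2 < best_g.get(nxt, float('inf')):
                    best_g[nxt] = g2
                    # Ties on f broken toward larger g via the -g field.
                    heapq.heappush(open_heap, (g2 + h(nxt), -g2, nxt))

cost, peak = astar_peak_memory(50, 50, (0, 0), (49, 49))
print(cost, peak)  # optimal cost 98; peak stays far below the 2500 cells
```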

Extensions and Variants

Bounded Relaxation

The weighted variant of the A* algorithm, known as weighted A*, modifies the evaluation function to trade optimality for computational efficiency in time-constrained scenarios. Instead of the standard f(n) = g(n) + h(n), it employs f_w(n) = g(n) + w \cdot h(n), where w \geq 1 is a weighting factor applied to the heuristic estimate h(n). This alteration biases the search more heavily toward paths the heuristic judges promising, resulting in fewer node expansions than standard A* at the cost of potentially suboptimal solutions. Under the assumption that h(n) is consistent (which implies admissible; i.e., h(n) \leq h^*(n) for all n, where h^*(n) is the true cost to the goal), weighted A* guarantees a solution whose cost C satisfies C \leq w \cdot C^*, with C^* denoting the optimal solution cost. Equivalently, this can be expressed as C \leq (1 + \epsilon) \cdot C^*, where \epsilon = w - 1, providing a bounded measure of suboptimality that grows linearly with the weight. This variant finds particular utility in real-time applications such as video game pathfinding and robot motion planning, where exact optimality is often unnecessary and rapid planning is essential for responsive behavior. For instance, in grid-based game maps, applying w = 2 can reduce the number of expansions by up to 50% relative to standard A*, enabling faster computation without excessively degrading path quality. The boundaries of this approach are clear: when w = 1, weighted A* recovers the original A* algorithm with its optimality guarantees; as w \to \infty, the search degenerates into greedy best-first search, prioritizing only heuristic estimates and potentially yielding highly suboptimal paths.
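The w \cdot h(n) inflation and its bounded suboptimality can be demonstrated on a tiny invented graph (all edge costs and heuristic values below are made up for the example): with w = 2 the search commits to a slightly worse route, but its cost still respects C \leq w \cdot C^*.

```python
import heapq

def weighted_a_star(start, goal, neighbors, h, w=1.0):
    """Best-first search with priority f_w(n) = g(n) + w * h(n).
    With admissible h and w >= 1, the returned cost C <= w * C*."""
    open_heap = [(w * h(start), 0, start)]
    best_g = {start: 0}
    closed = set()
    while open_heap:
        f, g, n = heapq.heappop(open_heap)
        if n == goal:
            return g
        if n in closed:
            continue
        closed.add(n)
        for m, cost in neighbors(n):
            g2 = g + cost
            if g2 < best_g.get(m, float('inf')):
                best_g[m] = g2
                heapq.heappush(open_heap, (g2 + w * h(m), g2, m))
    return None

# Invented graph where inflating h makes the search reach the goal
# through B before discovering the cheaper route through A.
graph = {'S': [('A', 1), ('B', 0.5)], 'A': [('G', 1)], 'B': [('G', 2.4)], 'G': []}
h = {'S': 1, 'A': 1, 'B': 0.5, 'G': 0}
nbrs = lambda n: graph[n]
print(weighted_a_star('S', 'G', nbrs, h.__getitem__, w=1))  # 2 (optimal)
print(weighted_a_star('S', 'G', nbrs, h.__getitem__, w=2))  # 2.9, within w * C* = 4
```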

Other Notable Variants

Iterative Deepening A* (IDA*) combines the space efficiency of depth-first search with the optimality guarantees of A* by performing a series of depth-limited searches, where each iteration increases the cost bound based on the f-values (g + h) of nodes. This approach addresses the high memory requirements of standard A*, which can be exponential in the branching factor b and depth d, by limiting memory usage to O(d), linear in the depth. The time complexity remains O(b^d) in the worst case due to repeated expansions across iterations, but IDA* is particularly effective for memory-constrained environments such as embedded systems or very large state spaces. Bidirectional A* extends the standard algorithm by simultaneously searching from both the start and the goal nodes, using separate priority queues until the searches meet at a common node, after which the path is reconstructed by combining the two partial paths. In symmetric graphs, when using consistent heuristics in both directions, this halves the effective search depth compared to unidirectional A*, potentially reducing the number of expanded nodes exponentially, to roughly O(b^{d/2}) per direction. The algorithm requires admissible and consistent heuristics for both directions and careful meeting-point selection to ensure optimality. Jump Point Search (JPS) is an optimization tailored for uniform-cost grid maps, where it prunes redundant node expansions by identifying "jump points", key locations where a path can deviate from a straight line, thus exploiting spatial symmetries in grid structures. By replacing full neighbor exploration with directed jumps along natural paths, JPS significantly reduces the search space while preserving A*'s optimality when using admissible heuristics like Manhattan distance. This variant is especially suited for grid-based environments, such as game maps, where it can achieve speedups of an order of magnitude over standard A* without preprocessing. 
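IDA*'s control flow is compact enough to sketch directly. The sketch below stores only the current path (the source of its linear memory bound); the toy graph and heuristic values are invented for the example:

```python
def ida_star(start, goal, neighbors, h):
    """IDA*: repeated depth-first searches bounded by f = g + h; only
    the current path is stored, so memory is linear in path length."""
    def search(path, g, bound):
        n = path[-1]
        f = g + h(n)
        if f > bound:
            return f                    # exceeded bound: candidate for next bound
        if n == goal:
            return True
        minimum = float('inf')
        for m, cost in neighbors(n):
            if m in path:               # avoid cycles on the current path
                continue
            path.append(m)
            t = search(path, g + cost, bound)
            if t is True:
                return True
            minimum = min(minimum, t)
            path.pop()
        return minimum

    bound = h(start)
    path = [start]
    while True:
        t = search(path, 0, bound)
        if t is True:
            return path
        if t == float('inf'):
            return None                 # no path exists
        bound = t                       # smallest f that exceeded the old bound

graph = {'S': [('A', 1), ('B', 4)], 'A': [('G', 5)], 'B': [('G', 1)], 'G': []}
h = {'S': 4, 'A': 5, 'B': 1, 'G': 0}
print(ida_star('S', 'G', lambda n: graph[n], h.__getitem__))  # ['S', 'B', 'G']
```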
Hierarchical Pathfinding A* (HPA*) scales A* to very large worlds by abstracting the search space into a multi-level hierarchy of clustered regions, precomputing optimal paths between cluster entrances to guide high-level searches before refining low-level paths within clusters. This abstraction reduces the effective graph size, enabling near-optimal paths in expansive maps with time complexities far lower than flat A* for long-distance queries. HPA* requires an initial clustering and entrance-computation phase but offers substantial efficiency gains in static, large-scale terrains. Recent advancements incorporate machine learning to learn or refine heuristics for A*, surpassing traditional hand-crafted ones like Manhattan distance by training neural networks on solved instances to predict more accurate h-values. For example, neural algorithmic reasoning can train A* jointly with Dijkstra's algorithm to produce learned heuristics that accelerate search in diverse graphs, often reducing expanded nodes by factors of 2-10x in benchmarks. These ML-based heuristics maintain admissibility when constrained during training, enabling A* variants to handle complex, dynamic domains more effectively.

Practical Applications

Pathfinding and Navigation

The A* search algorithm originated in the context of mobile robotics through the Shakey project at Stanford Research Institute, where it was developed to enable the first mobile robot to perform path planning in real-world environments by combining heuristic guidance with systematic search. This foundational application demonstrated A*'s ability to navigate uncertain spaces, such as indoor rooms with obstacles, by estimating distances to goals using admissible heuristics like straight-line metrics. In modern robotics, particularly autonomous vehicles, A* remains a core technique for motion planning, often in modified forms to handle real-time obstacle avoidance and vehicle constraints. For instance, early autonomous vehicle systems, such as the Stanford Racing Team's vehicle in the 2007 DARPA Urban Challenge, employed a variant of A* search integrated with non-linear optimization to generate feasible trajectories that account for other vehicles, pedestrians, and road constraints, ensuring safe and efficient driving in urban settings. These adaptations allow the algorithm to process large-scale maps with millions of nodes while incorporating replanning for dynamic updates. Video games, especially real-time strategy titles like StarCraft, leverage A* for grid-based unit movement and non-player character (NPC) pathfinding, enabling responsive navigation around dynamic obstacles such as other units or terrain features. In these environments, A* operates on navigation meshes or tile grids, using heuristics like Manhattan distance to quickly compute paths for multiple agents, which supports fluid gameplay in scenarios involving hundreds of simultaneous movements. GPS navigation systems use variants of informed search algorithms like A* for route optimization by modeling road networks as graphs and incorporating traffic-aware heuristics to prioritize time-efficient paths over mere distance. This approach combines precomputed map data with live updates, allowing the system to suggest detours in response to congestion while scaling to continent-sized road graphs. 
Despite its strengths, A* faces challenges in pathfinding and navigation, particularly in dynamic environments requiring frequent replanning due to moving obstacles, which can lead to computational overhead from repeated searches. In large-scale graphs with millions of nodes, such as those in urban robotics or expansive game worlds, the algorithm's time and memory requirements grow rapidly, necessitating optimizations like hierarchical abstraction or anytime variants to maintain performance.

Optimization and Planning

A* search has found significant application in puzzle solving, where the state space is discrete and combinatorial, allowing for the design of domain-specific admissible heuristics to guide the search toward optimal solutions. In the 8-puzzle, a classic sliding-tile problem, pattern databases serve as powerful heuristics by precomputing the exact number of moves required to solve abstracted subproblems involving subsets of tiles, such as the positions of 8 out of 9 tiles while ignoring the blank space. These databases provide a lower bound on the cost to the goal state, enabling A* (or its memory-bounded variant, iterative deepening A*) to solve instances that would otherwise be intractable due to the large state space. For example, using an 8-tile pattern database reduces the effective search space dramatically, allowing optimal solutions to be found with far fewer node expansions than simpler heuristics like Manhattan distance permit. This approach extends to more complex puzzles like the Rubik's Cube, where pattern databases abstract the cube's state by focusing on subsets of pieces, such as corners or edges, to estimate the minimum number of moves needed to solve it. Korf's seminal work employed iterative deepening A* with multiple pattern databases (one for the corner pieces and others for subsets of the edges) to achieve optimal solutions for the full 3x3 cube, solving the ten random instances tested in times ranging from several hours to about two days on contemporary hardware (a Sun Ultra-Sparc workstation) by combining heuristic estimates from disjoint abstractions via the max operation to preserve admissibility. These databases, stored as lookup tables, capture exact distances in reduced state spaces, making A* practical for puzzles with state spaces exceeding 10^19 configurations. Heuristic functions in these domains are typically domain-specific, designed to exploit symmetries and invariances for compression and accuracy. 
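The simpler heuristics that pattern databases improve upon are easy to state. This sketch shows the two classic admissible 8-puzzle heuristics, misplaced tiles and Manhattan distance, on an invented example state; a pattern database would replace these formulas with exact precomputed solution costs for abstracted subproblems.

```python
def manhattan(state, goal):
    """Admissible 8-puzzle heuristic: sum over tiles of the grid
    distance to each tile's goal cell (the blank, 0, is ignored)."""
    pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = i // 3, i % 3
        gr, gc = pos[tile]
        total += abs(r - gr) + abs(c - gc)
    return total

def misplaced(state, goal):
    """Weaker admissible heuristic: count of tiles out of place."""
    return sum(1 for s, t in zip(state, goal) if s != 0 and s != t)

goal  = (1, 2, 3, 4, 5, 6, 7, 8, 0)
state = (1, 2, 3, 4, 5, 6, 0, 7, 8)   # bottom row shifted: blank, 7, 8
print(misplaced(state, goal), manhattan(state, goal))  # 2 2
```

Manhattan distance dominates misplaced tiles (it is never smaller), which is why it expands fewer nodes; pattern databases push the same idea further by being exact on a projection of the state.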
In AI planning, A* underpins domain-independent planners operating in STRIPS (Stanford Research Institute Problem Solver) domains, where actions are defined by preconditions, add lists, and delete lists, and the goal is to find a sequence of actions transforming an initial state into a desired one. The FF (Fast-Forward) planner integrates A*-style heuristic search in a forward-chaining framework, using a relaxed plan that ignores delete effects to estimate the remaining plan length, an estimate that correlates well with actual costs in many benchmark domains. FF combines enforced hill-climbing on this heuristic with a systematic best-first fallback search for completeness, achieving high performance on benchmarks like the International Planning Competition by generating plans orders of magnitude faster than uninformed methods; for instance, it solved complex logistics domains with hundreds of actions in seconds. This integration allows heuristic search to scale to real-world planning tasks, such as robotic assembly or scheduling, by leveraging graph-based relaxations for heuristic computation. A* also applies to optimization in network routing, particularly for finding shortest paths in abstract graphs representing communication or circuit topologies. In VLSI design, routing algorithms treat the chip layout as a grid graph, using A* (or its uniform-cost special case) to connect components while avoiding obstacles like existing wires, with heuristics like Manhattan distance ensuring efficiency in dense layouts. C.Y. Lee's algorithm, a foundational BFS-based maze router equivalent to A* with a zero heuristic, guarantees optimal wire paths and has influenced modern tools for multi-layer routing. For packet networks, A* variants guide source-based routing in dynamic topologies, estimating costs via link metrics like latency or bandwidth, as seen in extensions of protocols like OSPF, where informed search can outperform plain Dijkstra in non-uniform topologies by prioritizing promising paths. 
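Lee's wavefront router reduces to breadth-first search on the routing grid (A* with h = 0 and unit costs). The sketch below, with an invented 3x3 layout, returns the length of a shortest obstacle-avoiding route between two pins:

```python
from collections import deque

def lee_route(grid, src, dst):
    """Lee's maze router: BFS wavefront expansion over free cells.
    Guaranteed to find a shortest obstacle-avoiding wire path."""
    rows, cols = len(grid), len(grid[0])
    dist = {src: 0}
    queue = deque([src])
    while queue:
        x, y = queue.popleft()
        if (x, y) == dst:
            break                        # wavefront reached the target
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < rows and 0 <= ny < cols
                    and grid[nx][ny] == 0 and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist.get(dst)                 # None if the pins cannot be joined

# 0 = free routing cell, 1 = existing wire/obstacle.
layout = [[0, 1, 0],
          [0, 1, 0],
          [0, 0, 0]]
print(lee_route(layout, (0, 0), (0, 2)))  # 6: route must go around the wall
```

Replacing the FIFO queue with a priority queue keyed on g + Manhattan distance turns this into the A*-guided router used by modern tools.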
Emerging applications hybridize A* with reinforcement learning (RL), particularly by incorporating A*-inspired guidance into Monte Carlo Tree Search (MCTS) for sequential decision-making in large state spaces like games. In games such as Go, MCTS, which can be viewed as an approximate best-first search akin to A* with upper-confidence bounds playing the role of a heuristic, combines with RL-trained value functions to evaluate expansions, balancing exploration and exploitation during training. This hybrid approach, as in AlphaGo, uses policy networks from RL to bias move selection (analogous to A*'s heuristic) and MCTS to simulate forward search, achieving superhuman performance against top human players in a game with branching factors around 200. Such integrations bridge classical heuristic search with modern RL, extending A* to environments where traditional admissibility is relaxed for sample efficiency. Recent advancements as of 2025 include neural-guided A* variants in robotics for improved path planning in dynamic, uncertain environments.

Relations to Other Algorithms

Comparison with Dijkstra's Algorithm

The A* search algorithm and Dijkstra's algorithm share fundamental similarities as graph search methods designed to find optimal paths in weighted graphs with non-negative edge costs. Both algorithms are variants of uniform-cost search (UCS), employing a priority queue to expand nodes in order of increasing path cost from the starting node, ensuring optimality by guaranteeing the shortest path when all edge weights are non-negative. A* can be viewed as a generalization of Dijkstra's algorithm; specifically, when the heuristic function h(n) = 0 for all nodes n, A* reduces exactly to Dijkstra's algorithm, as the evaluation function f(n) = g(n) + h(n) simplifies to just the path cost g(n). The primary differences lie in their node expansion strategies and use of domain knowledge. Dijkstra's algorithm selects the node with the lowest g(n) (cumulative cost from the start) for expansion, performing an uninformed, breadth-first-like exploration that treats all directions equally until the goal is reached. In contrast, A* prioritizes nodes based on f(n) = g(n) + h(n), where h(n) estimates the cost to the goal, allowing informed guidance toward promising paths. This heuristic integration enables A* to expand fewer nodes than Dijkstra's algorithm when h(n) is informative and admissible (never overestimating the true cost), reducing computational overhead while preserving optimality under the same non-negative cost conditions. For instance, when navigating from a starting point to a distant goal in a grid map with obstacles, Dijkstra's algorithm explores all accessible regions outward from the start, scanning branches that lead away from the goal and potentially expanding a large portion of the grid. A*, however, directs expansion toward the goal using a heuristic such as Euclidean distance, focusing on relevant paths and avoiding unnecessary branches, thus achieving the same optimal path with significantly less exploration. 
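The reduction of A* to Dijkstra's algorithm when h(n) = 0 can be checked directly: the sketch below runs the same best-first loop on an open grid (sizes and tie-breaking chosen for the illustration) with a zero heuristic and with Manhattan distance, confirming equal costs but far fewer expansions for A*.

```python
import heapq

def expansions(width, height, start, goal, heuristic):
    """Best-first search on an open unit-cost grid with priority
    g + heuristic; a zero heuristic gives Dijkstra's algorithm.
    Returns the path cost and the number of nodes expanded."""
    open_heap = [(heuristic(start), 0, start)]   # entries are (f, -g, node)
    best_g = {start: 0}
    closed = set()
    while open_heap:
        f, neg_g, node = heapq.heappop(open_heap)
        g = -neg_g
        if node == goal:
            return g, len(closed)
        if node in closed:
            continue
        closed.add(node)
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height:
                g2 = g + 1
                if g2 < best_g.get(nxt, float('inf')):
                    best_g[nxt] = g2
                    heapq.heappush(open_heap, (g2 + heuristic(nxt), -g2, nxt))

goal = (19, 19)
manhattan = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
d_cost, d_exp = expansions(20, 20, (0, 0), goal, lambda p: 0)  # Dijkstra
a_cost, a_exp = expansions(20, 20, (0, 0), goal, manhattan)    # A*
print(d_cost, a_cost)  # 38 38: the heuristic does not change optimality
print(d_exp > a_exp)   # True: A* expands far fewer nodes
```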
Dijkstra's algorithm is preferable in scenarios where the goal is unknown, no reliable heuristic is available, or exact shortest paths from the source to all nodes are needed without goal-specific guidance, as it provides comprehensive distance information across the graph without relying on potentially imperfect estimates. Greedy best-first search is an informed strategy that expands the node with the lowest heuristic estimate h(n) of the distance to the goal, prioritizing apparent proximity to the target without considering the path cost from the start. This approach makes it faster than uninformed methods in many cases but renders it incomplete in infinite search spaces and non-optimal, as it can enter loops or overlook shorter paths by myopically chasing promising leads. It is neither complete nor optimal in general, even when the heuristic is admissible or consistent. In contrast, A* search addresses these limitations by evaluating nodes with the combined function f(n) = g(n) + h(n), where g(n) accounts for the actual cost from the start node, ensuring completeness and optimality when h(n) is admissible (never overestimating true costs). This integration prevents the pitfalls of pure heuristic chasing, as A* balances the cost of the path traversed so far against heuristic guidance, avoiding the local traps that greedy search might pursue. An illustrative example involves pathfinding in a grid-based map with a concave obstacle blocking the direct route, where costs vary (e.g., open spaces cost 1, walls are impassable). Greedy best-first search, guided solely by estimated distance to the goal, might veer into a deceptive cul-de-sac that appears closer but requires a costly detour to escape, leading to a longer overall path. A*, however, weighs the accumulating g(n) costs and selects the globally cheaper route around the obstacle, yielding the true shortest path. While greedy best-first search can outperform A* in speed, expanding fewer nodes when the heuristic is highly accurate, it remains unreliable for critical applications due to its potential for suboptimal solutions, whereas A* provides guaranteed performance at the expense of higher computational demands.
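The contrast can be reproduced on a small invented graph in which one node looks deceptively close to the goal (the costs and heuristic values below are made up for the example): greedy search, ordering only by h(n), commits to the expensive route, while A*'s g + h ordering recovers the optimal one.

```python
import heapq

def best_first(start, goal, neighbors, priority):
    """Generic best-first search; priority(g, n) orders the frontier.
    Pass h(n) alone for greedy search, or g + h(n) for A*."""
    open_heap = [(priority(0, start), 0, start)]
    best_g = {start: 0}
    parent = {start: None}
    closed = set()
    while open_heap:
        _, g, n = heapq.heappop(open_heap)
        if n == goal:
            path = []
            while n is not None:
                path.append(n)
                n = parent[n]
            return g, path[::-1]
        if n in closed:
            continue
        closed.add(n)
        for m, cost in neighbors(n):
            g2 = g + cost
            if g2 < best_g.get(m, float('inf')):
                best_g[m] = g2
                parent[m] = n
                heapq.heappush(open_heap, (priority(g2, m), g2, m))
    return None

# A looks close to the goal (small h) but its only onward edge is
# expensive; B looks farther but is cheap overall.
graph = {'S': [('A', 1), ('B', 3)], 'A': [('G', 10)], 'B': [('G', 2)], 'G': []}
h = {'S': 4, 'A': 1, 'B': 2, 'G': 0}
nbrs = lambda n: graph[n]
print(best_first('S', 'G', nbrs, lambda g, n: h[n]))      # greedy: cost 11
print(best_first('S', 'G', nbrs, lambda g, n: g + h[n]))  # A*:     cost 5
```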

References

  1. [1]
    [PDF] astar.pdf - Stanford AI Lab
    AN ADMISSIBLE SEARCHING ALGORITHM. A. Description of the Algorithm. In order to expand the fewest possible nodes in searching for an optimal path, a search ...
  2. [2]
    Introduction to A* - Stanford CS Theory
    Sep 23, 2025 · A* is like Dijkstra's Algorithm in that it can be used to find a shortest path. A* is like Greedy Best-First-Search in that it can use a ...
  3. [3]
    Lecture 5: Search: Optimal, Branch and Bound, A*
    This lecture covers strategies for finding the shortest path. We discuss branch and bound, which can be refined by using an extended list or an admissible ...
  4. [4]
    1.3 Uninformed Search | Introduction to Artificial Intelligence
    When we have no knowledge of the location of goal states in our search tree, we are forced to select our strategy for tree search from one of the techniques ...
  5. [5]
    Uninformed Search - CS440 Lectures
    Uniform-cost search (UCS) is a similar algorithm that finds optimal paths even when edge costs vary. Recall that the frontier in a search algorithm contains ...
  6. [6]
    [PDF] Uninformed Search - CS:4420 Artificial Intelligence
    A graph is directed if an edge can be traversed only in a specified direction. When an edge is directed from ni to nj. • it is univocally identified by the ...
  7. [7]
    [PDF] CS 540 Intro to Artificial Intelligence
    Uninformed Search: strategies that order nodes without using ... • Depth First Search. • Breadth First Search. • Uniform Cost Search. • Iterative Deepening.
  8. [8]
    [PDF] Lecture 5: Search I
    BFS will search all the paths consisting of one edge, two edges, three edges ... Algorithm: uniform cost search [Dijkstra, 1956]. Add sstart to frontier ...
  9. [9]
    [PDF] Solving Problems by Searching - Santa Clara University
    Graph-Search Scheme. • Idea: never visit the same state twice. • Example: in the following breadth-first graph search, we do not expand the circled nodes.
  10. [10]
    [PDF] Search Problems - CS 188: Artificial Intelligence
    What nodes does BFS expand? ▫ Processes all nodes above shallowest solution. ▫ Let depth of shallowest solution be s. ▫ Search takes time O(bs).
  11. [11]
    Artificial Intelligence: A Modern Approach, 4th US ed.
    by Stuart Russell and Peter Norvig. The authoritative, most-used AI textbook, adopted by over 1500 schools. ... Code (website); Pseudocode (pdf) Covers: US ...AI Instructor's Resource Page · Full Table of Contents for AI · Global Edition · 1500
  12. [12]
    Shakey the Robot - SRI International
    Shakey was the first mobile robot with the ability to perceive and reason about its surroundings. The subject of SRI's Artificial Intelligence Center research ...
  13. [13]
    Shakey - CHM Revolution - Computer History Museum
    Shakey, developed at the Stanford Research Institute (SRI) from 1966 to 1972, was the first mobile robot to reason about its actions. Shakey's playground was a ...
  14. [14]
    SRI's Pioneering Mobile Robot Shakey Honored as IEEE Milestone
    Feb 17, 2017 · Shakey, developed at SRI International between 1966 and 1972, was honored as the world's first mobile, intelligent robot.
  15. [15]
    [PDF] A Formal Basis for the Heuristic Determination of Minimum Cost Paths
    1967. A Formal Basis for the Heuristic Determination of Minimum Cost Paths. PETER E. HART, MEMBER, IEEE, NILS J. NILSSON, MEMBER, IEEE, AND BERTRAM RAPHAEL.
  16. [16]
    A Formal Basis for the Heuristic Determination of Minimum Cost Paths
    A Formal Basis for the Heuristic Determination of Minimum Cost Paths · P. Hart, N. Nilsson, B. Raphael · Published in IEEE Transactions on Systems… 1 July 1968 ...Missing: original | Show results with:original
  17. [17]
    Depth-first iterative-deepening: An optimal admissible tree search
    A depth-first iterative-deepening algorithm is shown to be asymptotically optimal along all three dimensions for exponential tree searches.
  18. [18]
    None
    Nothing is retrieved...<|control11|><|separator|>
  19. [19]
    [PDF] The A* Search Algorithm
    The A* Search Algorithm. Siyang Chen. Page 2. Introduction. A* (pronounced 'A-star') is a search algorithm that finds the shortest path between some nodes S ...
  20. [20]
    [PDF] Informed search algorithms
    Greedy best-first search expands nodes with minimal h(n). It is not optimal, but is often efficient. A* search expands nodes with minimal f(n)=g(n)+h(n). A* s ...
  21. [21]
    Heuristics - Stanford CS Theory
    Sep 23, 2025 · One way to break ties is to nudge the scale of h slightly. If we scale it downwards, then f will increase as we move towards the goal.Missing: strategy | Show results with:strategy
  22. [22]
    [PDF] A Generalized Framework for Lifelong Planning A* Search
    Our baseline algorithm is. A* that uses consistent heuristics and breaks ties among states with the same f-values in favor of states with smaller g-values, like ...
  23. [23]
    [PDF] Robotic Motion Planning: A* and D* Search
    Insert the newly expanded nodes into the priority queue and continue until the goal is found, or the priority queue is empty (in which case no path exists).
  24. [24]
    [PDF] Here's a more detailed look at how A-Star operates - Robotics
    In pseudo-code, A-Star looks like this: Algorithm A-Star. Initialize OPEN list (to the empty list). Initialize CLOSED list (to the empty list). Create goal ...
  25. [25]
    None
    Summary of each segment:
  26. [26]
    Implementation notes - Stanford CS Theory
    Sep 23, 2025 · The A* algorithm, stripped of all the code, is fairly simple. There are two sets, OPEN and CLOSED. The OPEN set contains those nodes that are candidates for ...Set Representation# · Binary Heaps# · Splay Trees#
  27. [27]
    Implementation of A* - Red Blob Games
    Sep 26, 2025 · On this page I show how to implement Breadth-First Search, Dijkstra's Algorithm, Greedy Best-First Search, and A*. I try to keep the code here simple.1 Python Implementation# · 1.1 Breadth First Search# · 2 C++ Implementation#
  28. [28]
    [PDF] 3 solving problems by - Artificial Intelligence: A Modern Approach
    Nodes are the data structures from which the search tree is constructed. Each has a parent, a state, and various bookkeeping fields. Arrows point from child ...
  29. [29]
    [PDF] The D*Algorithm for Real-Time Planning of Optimal Traverses
    2.0 The Basic D* Algorithm​​ The aigorithnl is named D* because it resembles A* [8], except that it is dynamic in the sense that arc costs can change during the ...
  30. [30]
    [PDF] Risk Aware Graph Search With Uncertain Edge Costs
    Search algorithms like A* use total ordering of vertices in a graph based on cost to find a single optimal path. This is necessary because a point can have an ...
  31. [31]
    [PDF] A* Heuristic Search - Auton Lab
    A* only ever considers acyclic paths. On each iteration of A* a new acyclic path is generated because: – When a node is added the first time, a new path exists.
  32. [32]
    Uniform-Cost Search vs. Best-First Search - Baeldung
    Oct 15, 2021 · The condition that edge costs be strictly positive guarantees that there won't be an infinite path with zero-cost edges whose nodes UCS would ...
  33. [33]
    [PDF] 16.410/16.413 Recitation 9 Solutions - MIT OpenCourseWare
    Thus, this heuristic is admissible. Finally, a third heuristic is called the Manhattan distance (also known as the taxicab distance or L1 distance) is computed ...
  34. [34]
    [PDF] 3 SOLVING PROBLEMS BY SEARCHING
    Search algorithms require a data structure to keep track of the search tree that is being con- structed. For each node n of the tree, we will have a structure ...
  35. [35]
    [PDF] Heuristic (Informed) Search - Stanford AI Lab
    8-Puzzle Heuristics. 1. 4. 7. 5. 2. 6. 3. 8. STATE(N). 6. 4. 7. 1. 5. 2. 8. 3. Goal ... ▫ h1(N) = number of misplaced tiles = 6 is admissible. ▫ h2(N) = sum of ...<|control11|><|separator|>
  36. [36]
    [PDF] CS 188 Introduction to Artificial Intelligence Fall 2022 Note 2
    Sep 2, 2022 · In order to prove the above theorem, we first prove that when running A* graph search with a consistent heuristic, whenever we remove a node for ...
  37. [37]
    [PDF] Artificial Intelligence - A Modern Approach Third Edition
    Our coverage of planning goes into more depth on contingent planning in partially observable environments and includes a new approach to hierarchical planning.
  38. [38]
  39. [39]
    On the complexity of admissible search algorithms - ScienceDirect
    In particular, algorithm A∗, due to Hart, Nilsson and Raphael, is shown to require O(2N) steps, in the worst case, for searching a graph with N nodes, if the ...
  40. [40]
    [PDF] Notes on the Complexity of Search
    Sep 23, 2025 · This brief handout is meant to explain to you how we can derive the time and space complexity for various types of search, as outlined in the ...
  41. [41]
    [PDF] Artificial Intelligence - UCSD CSE
    Apr 27, 2023 · *Note that this is not saying it's space/time complexity is optimal. Page 22. 22. How do we evaluate a search algorithm?
  42. [42]
    [PPT] Beam Search - CSE IITB
    Beam search is an optimization of best-first search that reduces its memory requirements. Beam Search Algorithm. OPEN = {initial state}. while OPEN is not empty ...
  43. [43]
    [PDF] Depth-First Iterative-Deepening: An Optimal Admissible Tree Search*
    For example, IDA* is the only known algorithm that can find optimal paths for randomly generated instances of the Fifteen Puzzle within practical time and space ...
  44. [44]
    Is Breadth First Search Space Complexity on a Grid different?
    Mar 29, 2019 · The asymptotic time complexity of both is O(number_rows * number_cols), and the asymptotic space complexity is O(number_rows + number_cols) for BFS or O(number ...
  45. [45]
    [PDF] Artificial Intelligence: A Modern Approach - Engineering People Site
    We have tried to avoid excessive formal- ity in the presentation of these ideas while retaining precision. We have included pseudocode algorithms to make the ...
  46. [46]
    [PDF] Bi - Directional And Heuristic Search In Path Problems
    The first part of this dissertation presents a class of algorithms, denoted VGA, for solving the two point shortest path problem in directed graphs with non- ...
  47. [47]
    [PDF] Online Graph Pruning for Pathfinding on Grid Maps - Daniel Harabor
    In this paper we present a novel search strategy, specific to grids, which is ... Searching with jump point pruning always re- turns an optimal solution.
  48. [48]
    (PDF) Near optimal hierarchical path-finding (HPA*) - ResearchGate
    This paper presents HPA* (Hierarchical Path-Finding A*), a hierarchi-cal approach for reducing problem complexity in path-finding on grid-based maps.
  49. [49]
    [2204.08938] Learning heuristics for A* - arXiv
    Apr 11, 2022 · This paper learns efficient heuristic functions for A* using Neural Algorithmic Reasoning, jointly with Dijkstra's algorithm, to speed up ...
  50. [50]
    [PDF] Practical Search Techniques in Path Planning for Autonomous Driving
    The path planning algorithm uses a modified A* search with a modified state-update rule, followed by numeric non-linear optimization.
  51. [51]
    (PDF) A*-based Pathfinding in Modern Computer Games
    PDF | Pathfinding in ... A Formal Basis for the Heuristic Determination of Minimum Cost Paths. Article. Aug 1968. Peter E. Hart · Nils J. Nilsson · Bertram ...<|control11|><|separator|>
  52. [52]
    Where Graph Theory Meets The Road: The Algorithms ... - Hackaday
    Apr 4, 2024 · By modeling the cities as nodes, an algorithm can then be devised that creates the edges between nodes, taking into account the weight of each ...<|control11|><|separator|>
  53. [53]
    Path planning algorithms in the autonomous driving system
    The A* algorithm is used to obtain an approximate path. Then, a cardinal spline interpolation technique is applied to smooth the obtained path. After that, the ...
  54. [54]
    [PDF] PATTERN DATABASES - Department of Computing Science
    For the 15-Puzzle, iterative-deepening A* with pattern databases (N=8) reduces the total number of nodes searched on a standard problem set of 100 positions by ...
  55. [55]
    Pattern Databases - Culberson - 1998 - Wiley Online Library
    Dec 17, 2002 · For the 15-Puzzle, iterative-deepening A* with pattern databases(N ="8) reduces the total number of nodes searched on a standard problem set ...
  56. [56]
    [PDF] Finding Optimal Solutions to Rubik's Cube Using Pattern Databases
    The first appearance of Rubik's Cube in the pub- lished AI literature was (Korf 1985b).
  57. [57]
    Finding optimal solutions to Rubik's cube using pattern databases
    The algorithm used is iterative-deepening-A* (IDA*), with a lower-bound heuristic function based on large memory-based lookup tables, or "pattern databases" ( ...
  58. [58]
    [PDF] The FF Planning System: Fast Plan Generation Through Heuristic ...
    FF uses forward state space search with a heuristic, combining hill-climbing and systematic search, and a pruning technique, and is based on relaxed GRAPHPLAN.
  59. [59]
    [PDF] Maze Router: Lee Algorithm
    Step 1: Propagate wave from the source s to the closet target. Step 2: Mark ALL cells on the path as s. Step 3: Propagate wave from ALL s cells to the other ...
  60. [60]
    On Monte Carlo Tree Search and Reinforcement Learning
    Dec 20, 2017 · In this paper we re-examine in depth this close relation between the two fields; our goal is to improve the cross-awareness between the two ...
  61. [61]
    [PDF] Shortest Paths - cs.Princeton
    Stop when you reach t. Even better: exploit geometry (A* algorithm). □. For ... generalization of Dijkstra's algorithm. □ encompasses DFS, BFS, and Prim.<|separator|>
  62. [62]
    Dijkstra vs. A* – Pathfinding | Baeldung on Computer Science
    Mar 18, 2024 · Dijkstra has two assumptions: the graph is finite; the edge costs are non-negative. The first assumption is necessary because Dijkstra places ...
  63. [63]
    [PDF] ANA*: Anytime Nonparametric A* - Ken Goldberg
    The A* algorithm (Hart, Nilsson, and Raphael 1968) is a generalization of Dijkstra's algorithm that improves its run- ning time by using a heuristic to ...
  64. [64]
    [PDF] dijkstra-routing-1959.pdf
    The shortest branch of set II is removed from this set and added to set I. As a result one node is transferred from set B to set 4. Step 2. Consider the ...
  65. [65]
    Analysis of Dijkstra's Algorithm and A* Algorithm in Shortest Path ...
    This paper will compare Dijkstra's algorithm and A* algorithm to solve the shortest path problem. There are four stages in mathematical. Page 3. ICCAI 2019.
  66. [66]
    (PDF) Analysis of Dijkstra's Algorithm and A* Algorithm in Shortest ...
    A* operates as a best-first search algorithm, which generally results in faster search times compared to Dijkstra's. ...
  67. [67]
    (PDF) Comparison of A* Algorithm and Greedy Best Search in ...
    Aug 7, 2025 · In comparison, Greedy Best First Search is much faster than the A* Search Algorithm as it excludes the g(n). While Greedy Best First Search does ...