
Admissible heuristic

An admissible heuristic is a type of estimation function used in informed search algorithms in artificial intelligence, particularly in pathfinding and problem-solving, where it provides a lower bound on the true minimum cost from any state to the goal state, ensuring it never overestimates this cost. Formally, a heuristic function h(n) is admissible if h(n) \leq h^*(n) for every node n, where h^*(n) denotes the optimal cost from n to the goal. In the context of the A* algorithm, introduced in 1968, an admissible heuristic enables the search to guarantee finding an optimal solution by guiding it toward promising paths without risking suboptimal results due to overestimation. Specifically, A* uses an evaluation function f(n) = g(n) + h(n), where g(n) is the exact cost from the start to n, and the admissibility of h(n) ensures that the first goal node expanded has the minimum possible path cost. This property holds even in graphs with varying step costs, making admissible heuristics essential for applications like route planning and puzzle solving.

Admissible heuristics are closely related to consistent heuristics, which satisfy the inequality h(n) \leq c(n, n') + h(n') for every action leading from n to successor n' with cost c(n, n'); consistency implies admissibility and further optimizes A* by minimizing the number of nodes expanded. Common examples include the zero heuristic h(n) = 0, which is admissible but uninformative, reducing A* to uniform-cost search, and the perfect heuristic h(n) = h^*(n), which is admissible and maximally informative, allowing A* to expand only nodes along an optimal path. In practice, designing effective admissible heuristics often involves domain-specific relaxations, such as straight-line distance in unobstructed spaces for grid-based pathfinding.

Fundamentals

Definition

In the context of search problems in artificial intelligence, a heuristic function h(n) serves as an estimate of the minimum cost to reach the goal state from a given node n in a graph or state space. This estimate guides the search process by prioritizing nodes that appear closer to the goal, thereby improving efficiency over uninformed methods like breadth-first search. An admissible heuristic is defined as one that never overestimates the true minimal cost h^*(n) from n to the goal, satisfying h(n) \leq h^*(n) for all nodes n. This underestimation property is crucial because it guarantees that the search algorithm will not overlook the optimal path; the first path expanded to a goal costs no more than the true optimum, preventing the selection of suboptimal solutions. The concept of admissible heuristics was introduced in the seminal work on heuristic search by Peter E. Hart, Nils J. Nilsson, and Bertram Raphael in 1968, where it forms the foundation for ensuring optimality in informed search algorithms such as A*.

In contrast to uninformed search algorithms such as breadth-first search (BFS) and depth-first search (DFS), which explore the state space blindly without leveraging any knowledge about the goal, admissible heuristics enable informed search methods to prioritize exploration towards promising paths. BFS, for instance, expands nodes level by level based solely on depth, guaranteeing optimality in unweighted graphs but often requiring exhaustive examination of irrelevant branches in large spaces. Similarly, DFS delves deeply along a single path, risking inefficient traversal away from the goal. Informed search addresses these limitations by incorporating domain-specific knowledge via heuristics, directing the algorithm to focus on nodes that appear closer to the goal and thereby accelerating the discovery of solutions.

Central to this process is the evaluation function f(n) = g(n) + h(n), which estimates the total cost of a path through n to the goal. Here, g(n) represents the exact cost from the initial state to n, accumulated from the costs of actions taken so far, ensuring an accurate measure of progress made. The heuristic component h(n) provides an admissible lower-bound estimate of the minimum cost from n to the goal, guiding the search to expand nodes with the lowest f(n) values first in a best-first manner. This combination allows the algorithm to balance the cost of the path actually taken (g(n)) with an optimistic projection of the remaining effort (h(n)), effectively steering the search tree or graph towards the optimal solution without premature commitment to suboptimal branches. Admissibility of h(n), ensuring it never overestimates the true cost, preserves the potential for finding optimal solutions.

The primary benefits of admissible heuristics in informed search are significantly reduced node expansions and faster convergence to optimal solutions, particularly in expansive state spaces where uninformed methods become computationally infeasible. For example, in pathfinding problems, heuristics like straight-line distance can prune vast portions of the search space, expanding orders of magnitude fewer nodes than BFS while maintaining solution quality. This efficiency stems from the heuristic's ability to rank fringe nodes intelligently, concentrating computational resources on high-potential paths and avoiding the uniform expansion that plagues uninformed searches. In practice, such approaches scale to real-world applications with millions of states, where the time and memory savings enable practical problem-solving.
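The following minimal Python sketch shows how the evaluation function f(n) = g(n) + h(n) drives a best-first expansion. It is an illustrative implementation, not a reference one; the graph encoding (a `neighbors` callable yielding successor/cost pairs) is an assumption made for this sketch.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Best-first search ordered by f(n) = g(n) + h(n).

    neighbors(n) yields (successor, step_cost) pairs; h(n) must be an
    admissible estimate of the remaining cost for the result to be optimal.
    """
    open_list = [(h(start), 0, start, [start])]  # entries: (f, g, node, path)
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g  # first goal popped is optimal if h is admissible
        for succ, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(succ, float("inf")):
                # Re-push whenever a cheaper path appears, so even an
                # admissible-but-inconsistent h still yields an optimal result.
                best_g[succ] = g2
                heapq.heappush(open_list, (g2 + h(succ), g2, succ, path + [succ]))
    return None, float("inf")
```

With h(n) = 0 for all n, this degenerates into uniform-cost search, matching the zero-heuristic edge case discussed below.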

Mathematical Formulation

Admissibility Condition

In search problems modeled as graphs with states as nodes and non-negative edge costs defined by a cost function c, a heuristic h is admissible if it satisfies h(n) \leq h^*(n) for every node n, where h^*(n) denotes the minimum total cost from n to any goal node along an optimal path. This formal condition, introduced in the foundational work on heuristic search, ensures that h never overestimates the true optimal cost to the goal. The admissibility condition applies broadly to both uniform-cost spaces, such as unweighted graphs where each edge has cost 1 and h(n) underestimates the number of steps to the goal, and general graphs with additive but varying edge costs, where h(n) underestimates the summed cost along the shortest path. In either case, non-negative edge costs are required to prevent negative cycles that could undermine the lower-bound property. By providing such a lower bound, admissibility preserves the completeness and optimality of informed search algorithms like A*, ensuring that the first goal expanded has the minimum path cost from the start.

Edge cases illustrate the range of admissible heuristics. The zero heuristic h(n) = 0 for all nodes is always admissible, as it trivially satisfies the lower-bound condition given non-negative costs, but it offers no guidance and degenerates informed search into uniform-cost search. At the opposite end, a perfect heuristic where h(n) = h^*(n) for all n is admissible and maximally informative, directing the search to expand only nodes on optimal paths, though computing it exactly equates to solving the original problem. Admissibility relates to consistency (or monotonicity) but is weaker, as a consistent heuristic implies admissibility without the reverse holding in general.
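On a finite graph, the condition h(n) \leq h^*(n) can be verified exhaustively by first computing h^*(n) for every node, for example with Dijkstra's algorithm run backwards from the goal. The sketch below assumes an undirected graph with non-negative costs, so running Dijkstra from the goal yields goal distances directly:

```python
import heapq

def true_costs(goal, neighbors, nodes):
    """Compute h*(n) for every node via Dijkstra from the goal
    (assumes an undirected graph, so edge costs are symmetric)."""
    dist = {n: float("inf") for n in nodes}
    dist[goal] = 0
    pq = [(0, goal)]
    while pq:
        d, n = heapq.heappop(pq)
        if d > dist[n]:
            continue  # stale queue entry
        for m, c in neighbors(n):
            if d + c < dist[m]:
                dist[m] = d + c
                heapq.heappush(pq, (d + c, m))
    return dist

def is_admissible(h, h_star, nodes):
    """Check h(n) <= h*(n) on every node of a finite graph."""
    return all(h(n) <= h_star[n] for n in nodes)
```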

Consistency Relation

In search algorithms, a heuristic function h is said to be consistent if, for every node n and every successor n' generated from n, the estimated cost satisfies h(n) \leq c(n, n') + h(n'), where c(n, n') is the cost of the edge from n to n'. This condition embodies the triangle inequality in the context of estimated distances, ensuring that the heuristic estimate can never drop by more than the cost of any single step on the path to the goal.

Consistency implies admissibility, meaning any consistent heuristic is also admissible. To see this, consider the true optimal cost h^*(n) from node n to the goal. The proof proceeds by induction on the length of the shortest path from n to the goal. For the base case, if n is the goal, then h(n) = 0 = h^*(n). For the inductive step, assume the claim holds for all nodes at distance k from the goal; for a node n at distance k+1, let n' be its successor on the optimal path to the goal. By consistency, h(n) \leq c(n, n') + h(n') \leq c(n, n') + h^*(n') = h^*(n), completing the induction.

While every consistent heuristic is admissible, the converse does not hold: there exist admissible heuristics that are inconsistent. Admissible heuristics guarantee optimality in tree-search versions of algorithms like A*, but may lead to multiple expansions of the same node in graph-search variants, requiring additional measures, such as reopening nodes from the closed list when a cheaper path is found, to retain optimality. In contrast, consistent heuristics enable more efficient graph search by ensuring that once a node is expanded, its optimal path cost is finalized, avoiding repeated processing. Consistent heuristics thus reduce the number of re-expansions in algorithms like A*.

A classic example of a consistent heuristic is the straight-line (Euclidean) distance in pathfinding problems where edge costs correspond to actual Euclidean distances between points, as this directly satisfies the triangle inequality: the direct distance from n to the goal is at most the edge distance to n' plus the distance from n' to the goal. An example of an admissible but inconsistent heuristic arises in a simple graph with nodes A, B, and goal G, where edges are A to B with cost 1 and B to G with cost 1 (so h^*(A) = 2 and h^*(B) = 1). Setting h(A) = 2, h(B) = 0, and h(G) = 0 yields an admissible heuristic since h(A) \leq 2 and h(B) \leq 1, but it violates consistency because h(A) = 2 > c(A, B) + h(B) = 1.
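The toy graph above can be checked mechanically. This sketch hard-codes the example's nodes, edges, heuristic values, and true costs exactly as stated:

```python
# Admissible-but-inconsistent example: A -1-> B -1-> G (goal)
edges  = {"A": [("B", 1)], "B": [("G", 1)], "G": []}
h      = {"A": 2, "B": 0, "G": 0}
h_star = {"A": 2, "B": 1, "G": 0}

admissible = all(h[n] <= h_star[n] for n in edges)   # True: never overestimates
consistent = all(h[n] <= c + h[m]                    # False: at A, 2 > 1 + 0
                 for n, succs in edges.items()
                 for m, c in succs)
print(admissible, consistent)  # True False
```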

Construction Techniques

Problem Relaxation

Problem relaxation is a fundamental technique for constructing admissible heuristics in informed search algorithms, particularly in classical planning domains. The core idea is to derive a heuristic function h(n) by simplifying the original problem's constraints, creating a relaxed version where the optimal solution cost is easier to compute or approximate. A common relaxation involves removing the negative effects of actions, such as delete lists in STRIPS operators, which prevents achieved facts from being undone. This makes the relaxed problem less constrained than the original, ensuring that the cost of the optimal relaxed plan provides a lower bound on the true optimal cost h^*(n), thus guaranteeing admissibility since h(n) \leq h^*(n).

The process of applying problem relaxation typically follows these steps in planning domains: First, identify the constraints to relax, such as ignoring delete effects (where actions only add positive effects without removing others) or overlooking preconditions (allowing actions to apply regardless of current conditions). Second, formulate the relaxed problem, often as a delete-free task Π⁺ where all operators have empty delete lists but retain add lists and preconditions. Third, compute or approximate the optimal solution cost in this relaxed task from the current state n to the goal; for instance, this can involve finding the shortest sequence of applicable actions that achieves all goals in the relaxed setting. Admissibility arises because any valid plan in the original problem is also valid in the relaxed one, so the minimal cost in the relaxation cannot exceed the original h^*(n).

In STRIPS planning, an example technique derives an admissible heuristic by relaxing delete lists while using a relaxed planning graph (RPG) to compute fact levels. The max heuristic is h_{\max}(n) = \max_{p \in G \setminus s} \mathrm{level}(p), where level(p) is the earliest RPG layer in which fact p is achieved from state s. This is admissible because the plan must achieve all goals, requiring at least as many steps as the hardest individual goal in the relaxed task (h_{\max} \leq h^+ \leq h^*). More sophisticated approximations like the additive heuristic h_{add} sum the minimal costs of the goals but are inadmissible, as they can overestimate by ignoring actions that achieve multiple goals at once.

This approach offers several advantages: it is systematic, applicable across domains without manual tuning, and often computable in polynomial time via approximations like h_max, which avoids the NP-hardness of computing the exact delete-relaxation optimum h⁺. Admissibility directly follows from the relaxation preserving a lower bound on plan costs, as the original problem's solutions form a subset of the relaxed problem's solutions. However, relaxation-based heuristics can be limited if the relaxation is too coarse; ignoring deletes yields overly optimistic estimates that miss interferences between actions, resulting in weak guidance for search and potentially larger explored spaces.

Such techniques have been applied in puzzle-solving domains, where relaxing movement constraints yields admissible distance estimates, such as the Manhattan distance in sliding puzzles, which ignores the way tiles block one another. In non-planning domains like grid-based pathfinding, relaxation constructs admissible heuristics by ignoring obstacles: the Manhattan distance h(n) = |x_g - x_n| + |y_g - y_n| assumes axis-aligned moves and no walls, satisfying h(n) ≤ h*(n) because removing obstacles can only shorten the optimal path; analogous relaxations apply to other movement models.
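A compact sketch of h_max over a delete-free STRIPS-style task follows. The tuple encoding of actions as (preconditions, add effects) and the toy task are assumptions made for illustration, and all actions are taken to have unit cost:

```python
def h_max(state, goals, actions):
    """h_max for a unit-cost, delete-relaxed STRIPS task.

    state:   frozenset of facts currently true
    goals:   set of goal facts
    actions: list of (preconditions, adds) pairs of fact sets
    Returns the earliest relaxed-planning-graph layer at which the
    hardest goal appears, or None if some goal is unreachable.
    """
    level = {f: 0 for f in state}
    layer = 0
    while not all(g in level for g in goals):
        layer += 1
        new = {}
        for pre, adds in actions:
            if all(p in level for p in pre):
                for f in adds:
                    if f not in level:
                        new[f] = layer
        if not new:
            return None  # relaxed fixpoint reached without the goals
        level.update(new)
    return max(level[g] for g in goals)

# Toy task: pick up a key, then open a door.
state   = frozenset({"at_key"})
goals   = {"door_open"}
actions = [({"at_key"}, {"have_key"}),
           ({"have_key"}, {"door_open"})]
print(h_max(state, goals, actions))  # 2: the hardest goal needs two layers
```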

Landmark-based Methods

Landmark-based methods for constructing admissible heuristics in automated planning involve identifying pivotal subgoals, known as landmarks, within the state space. A landmark is defined as a fact (or formula) that must hold true at some point in every plan from the initial state to the goal state. These landmarks serve as unavoidable milestones, enabling the heuristic to estimate the remaining cost by tracking progress toward their achievement.

The concept of landmarks was introduced in the planning literature in the early 2000s for problem decomposition and search guidance in suboptimal planners. Seminal work by Hoffmann et al. in 2004 formalized basic landmarks and landmark orders for improving local search. Admissible landmark heuristics for optimal planning were developed later, e.g., the path-dependent landmark heuristic of Karpas and Domshlak and the LM-cut heuristic of Helmert and Domshlak, both in 2009, adapting landmarks for A* while ensuring no overestimation.

Landmarks are discovered through a forward-backward process in a relaxed planning graph, which ignores delete effects to compute reachability approximations efficiently. First, a forward pass identifies all facts reachable from the initial state, filtering out unreachable candidates. A backward pass, starting from the goal facts, computes necessary preconditions via backchaining: if every action achieving a potential landmark shares a common precondition, that precondition becomes a landmark ordered before it. Goal facts are trivial landmarks, and the process iterates to build a landmark graph capturing ordering constraints, such as reasonable orders where achieving one landmark does not undo another. This polynomial-time method, based on the delete relaxation, is sound for the landmarks it finds while remaining computationally feasible.

For admissible heuristics, the LM-cut method repeatedly extracts disjunctive action landmarks: sets of actions such that every relaxed plan must use at least one action from each set. Each landmark is found as a cut in a justification graph over precondition choices; the heuristic adds the minimum action cost in the cut to its estimate, reduces the cost of every action in the cut by that amount, and repeats until the goal is reachable at zero cost, giving h_{\text{LM-cut}}(n) = \sum_{\text{cuts } C} \min_{a \in C} \mathrm{cost}(a). This yields a lower bound because each unit of cost charged to a cut corresponds to cost that any plan must actually pay. By contrast, the basic count of unsatisfied landmarks is informative but inadmissible, since a single action may achieve several landmarks at once. In practice, the landmark graph enforces orders to avoid redundant computations, and the heuristic can be strengthened by distributing action costs across landmarks via cost partitioning. For general costs, such partitioning variants assign shares of each action's cost to the landmarks it achieves, ensuring the estimate remains path-dependent yet admissible.

Admissibility for LM-cut follows from the unavoidable nature of the extracted landmarks: every plan must include at least one action from each cut, incurring at least the minimum cost in that cut, and the cost-reduction step guarantees that no unit of action cost is counted twice, so the sum of cut costs never overestimates. This guarantees that A* using h_{LM-cut} finds optimal solutions.

Extensions enhance the basic method for stronger estimates while preserving admissibility. Weighted landmarks incorporate action costs via uniform or optimal cost partitioning, where costs are split across dependent landmarks to avoid double-counting. Pattern landmarks generalize to sets of facts achieved together, treating them as composite subgoals to capture synergies ignored in single-fact counting. These advancements have been integrated into planners such as Fast Downward for cost-optimal search.
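The admissibility argument behind cut-based landmark heuristics can be illustrated with a simplified bound: given pairwise disjoint disjunctive action landmarks, summing the cheapest action in each set never exceeds the optimal plan cost. The landmark sets and costs below are invented for illustration; real planners extract them automatically, e.g., via LM-cut's justification-graph cuts:

```python
def landmark_bound(landmarks, cost):
    """Lower bound on plan cost from disjoint disjunctive action landmarks.

    landmarks: list of sets of action names; every plan must contain at
               least one action from each set, and the sets are disjoint,
               so no action's cost is counted twice.
    cost:      dict mapping action name -> non-negative cost
    """
    return sum(min(cost[a] for a in lm) for lm in landmarks)

# Invented example: any plan must (1) fetch some key and (2) open some door.
cost = {"fetch_red": 2, "fetch_blue": 3, "open_front": 4, "open_back": 1}
landmarks = [{"fetch_red", "fetch_blue"}, {"open_front", "open_back"}]
print(landmark_bound(landmarks, cost))  # 3 = 2 + 1, never above the optimum
```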

Examples and Applications

Grid Pathfinding

Grid pathfinding involves navigating a two-dimensional grid where each cell represents a location, some cells are marked as obstacles that cannot be traversed, and the agent can move to adjacent cells (up, down, left, or right) at a uniform unit cost of 1. The goal is to find the shortest path from a starting cell to a designated goal cell, minimizing the total number of moves while avoiding obstacles. This setup models many scenarios where space is discretized into a grid structure.

A classic admissible heuristic for this problem is the Manhattan distance, defined as h(n) = |x_n - x_g| + |y_n - y_g|, where (x_n, y_n) is the coordinate of the current cell n and (x_g, y_g) is the goal coordinate. This heuristic estimates the minimum number of moves required by assuming movement only along axes, effectively computing the sum of the horizontal and vertical displacements.

The Manhattan distance is admissible because it provides a lower bound on the actual path cost: in the absence of obstacles, the shortest path length equals the Manhattan distance, as the agent must cover the exact horizontal and vertical differences through axis-aligned moves. With obstacles present, any feasible path must detour around them, resulting in a length at least as long as the Manhattan distance; thus, h(n) \leq h^*(n) for all nodes n, where h^*(n) is the true optimal cost to the goal. This can be viewed as deriving from a problem relaxation that ignores obstacles, allowing unrestricted movement.

To visualize, consider a start at (0,0) and a goal at (3,2) with no obstacles: the heuristic yields h = 3 + 2 = 5, matching the optimal path of five moves (e.g., right three times, up twice). If an obstacle blocks a direct route, the actual path extends beyond five moves, but the heuristic remains unchanged and underestimates correctly.

When used in the A* algorithm, the Manhattan distance guides the search toward the goal efficiently, expanding fewer nodes than uninformed methods like breadth-first search, especially in open spaces without dense obstacles, while guaranteeing an optimal path due to admissibility. In practice, this leads to reduced computational overhead, as the search prioritizes promising directions and avoids unnecessary expansions.

Admissible heuristics like the Manhattan distance are widely applied in real-world grid-based navigation, such as mobile robotics for warehouse or indoor settings where environments are discretized into grids, and in video games for character movement on tiled maps. They also approximate routing in urban GPS systems, treating city blocks as a grid to estimate travel times under Manhattan-like constraints.
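A runnable sketch of the (0,0) to (3,2) example follows, reusing the generic A* pattern from the Definition section. The grid layout is an assumption made for demonstration:

```python
import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def grid_a_star(grid, start, goal):
    """A* on a 4-connected grid; grid[y][x] == 1 marks an obstacle."""
    h = lambda p: manhattan(p, goal)
    open_list = [(h(start), 0, start)]  # entries: (f, g, cell)
    best_g = {start: 0}
    while open_list:
        f, g, (x, y) = heapq.heappop(open_list)
        if (x, y) == goal:
            return g
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                if g + 1 < best_g.get((nx, ny), float("inf")):
                    best_g[(nx, ny)] = g + 1
                    heapq.heappush(open_list, (g + 1 + h((nx, ny)), g + 1, (nx, ny)))
    return None  # goal unreachable

grid = [[0, 0, 0, 0],   # 4x3 open grid, no obstacles
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(grid_a_star(grid, (0, 0), (3, 2)))  # 5, matching h(start) = 3 + 2
```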

Puzzle Solving

Admissible heuristics play a crucial role in solving combinatorial puzzles, such as the 8-puzzle and 15-puzzle, where each state represents a configuration of numbered tiles on a square grid with one blank, and the goal is to rearrange them into sorted order via adjacent slides. These domains feature high branching factors of up to four moves per state, leading to enormous state spaces: the 8-puzzle has approximately 181,440 reachable states (half of 9! due to a parity constraint), and the 15-puzzle expands to about 10^{13}.

The pattern database (PDB) method constructs powerful admissible heuristics by precomputing exact solution costs for abstracted subproblems involving subsets of tiles, ignoring the identities of the others. A basic example is the Manhattan distance heuristic, which for each tile sums the grid units it must travel horizontally and vertically to its goal position; this underestimates the true cost since tiles may block each other, ensuring h(n) \leq h^*(n) for any state n. More effective are additive PDBs, which partition the tiles into disjoint groups (e.g., groups of 7 and 8 tiles in the 15-puzzle), build a database of each group's exact distances to its subgoals, and define the heuristic as h(n) = \sum_i h_i(n), where h_i(n) is the cost retrieved from the i-th database; admissibility holds because each h_i(n) counts only the moves of its own group's tiles, giving a lower bound on that group's contribution, and disjointness prevents double-counting moves across groups.

This technique scales to more complex puzzles like the Rubik's Cube, a permutation-based challenge with about 43 quintillion states, by approximating via PDBs for disjoint components such as the 8 corner cubies (88 million entries) and sets of 6 edge cubies (42 million entries), using the maximum of these values as the heuristic to maintain admissibility while ignoring interdependencies between the groups. Landmark-style methods can complement PDBs by identifying key subgoals, such as fixed tile positions, to guide construction in these permutation spaces.

Empirically, additive PDBs dramatically enhance search efficiency, solving all 100 random 15-puzzle instances optimally, some previously unsolved by other methods, in under 7 CPU-seconds on average, and enabling the first optimal solutions to 50 random 24-puzzle instances (state space of about 10^{25}) by exploring millions of states per instance. For the Rubik's Cube, these heuristics facilitated optimal solutions to 10 random hard instances at depths up to 18 moves, generating 352 billion nodes in under four weeks on 1990s hardware.
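A minimal, non-additive PDB for the 8-puzzle illustrates the abstraction idea: non-pattern tiles are replaced by indistinguishable placeholders, and a breadth-first search backwards from the abstracted goal records the exact cost of placing the pattern tiles. The pattern choice and encoding are illustrative assumptions; because this sketch counts every move, its values may only be combined with other PDBs by taking the maximum, not by summing, to stay admissible:

```python
from collections import deque

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 = blank, goal layout in row-major order
PATTERN = (1, 2, 3)                  # tiles tracked by this database

def neighbors(state):
    """Yield states reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def abstract(state):
    """Project a state onto the pattern: keep pattern tiles and blank only."""
    return tuple(t if t in PATTERN or t == 0 else -1 for t in state)

def build_pdb():
    """Retrograde BFS from the abstracted goal; dist[s] is the exact number
    of moves needed to solve the abstract state s, a lower bound on the
    cost of any concrete state that projects onto s."""
    start = abstract(GOAL)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for t in neighbors(s):
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return dist

PDB = build_pdb()

def h_pdb(state):
    """Admissible heuristic: exact solution cost of the abstracted state."""
    return PDB[abstract(state)]

print(h_pdb((1, 2, 3, 4, 5, 6, 7, 8, 0)))  # 0 at the goal itself
```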

Optimality and Proofs

Optimality in A* Algorithm

The A* algorithm is a best-first search method that employs an evaluation function f(n) = g(n) + h(n), where g(n) represents the exact cost from the initial state to node n, and h(n) is the heuristic estimate of the cost from n to a goal state. It operates by repeatedly expanding the node with the lowest f(n) value from an open list, generating successors and updating their f values until a goal is expanded.

A key optimality theorem states that if h(n) is admissible, meaning h(n) \leq h^*(n) for all nodes n, where h^*(n) is the true optimal cost from n to the goal, and h(goal) = 0, then A* is guaranteed to find an optimal path to the goal. Specifically, the first goal node expanded by A* will have the minimal possible path cost from the start. This holds because admissibility ensures that the f(n) values along any optimal path never exceed the true optimal cost, preventing the algorithm from overlooking cheaper solutions. The mechanism relies on the fact that for nodes on the optimal path, f(n) \leq C^*, where C^* is the optimal solution cost, since g(n) + h(n) \leq g(n) + h^*(n) \leq C^*. Thus, A* will expand all nodes with f(n) < C^* before reaching a suboptimal goal, ensuring the first goal expanded is optimal. In contrast to greedy best-first search, which uses only f(n) = h(n) and can be misled by optimistic but inaccurate estimates leading to suboptimal paths, A*'s inclusion of g(n) combined with an admissible h(n) balances exploration and guidance to maintain optimality.

A* can be implemented in either tree-search or graph-search variants. In tree search, which does not track visited states, admissibility alone suffices for optimality, though it may revisit equivalent states inefficiently. Graph search, which maintains a closed list of expanded states to avoid re-expansion and handles duplicates by checking whether a newly discovered path to a state offers a lower g(n), requires the stronger condition of consistency (where h(n) \leq c(n, n') + h(n') for every successor n') to guarantee optimality without reopening nodes; otherwise, admissibility still ensures optimality if re-expansions are allowed. This distinction is crucial in state spaces with cycles, as graph search improves efficiency by pruning duplicate states while preserving the optimality guarantee under admissible heuristics.

Proof of Optimality

The proof of optimality for the A* algorithm relies on specific assumptions about the search problem and the heuristic function. These include non-negative edge costs (often strengthened to costs bounded below by a positive constant to guarantee termination), an admissible heuristic h(n) that never overestimates the true cost to the goal (i.e., h(n) \leq h^*(n) for all nodes n, where h^*(n) is the optimal cost from n to the goal), and a finite branching factor.

A central result in this proof is the following lemma: for any node n on an optimal path from the start to the goal, the evaluation function satisfies f(n) = g(n) + h(n) \leq C^*, where g(n) is the cost from the start to n and C^* is the optimal path cost to the goal. This holds because g(n) + h^*(n) = C^* along the optimal path (by definition of optimality), and admissibility ensures h(n) \leq h^*(n), so f(n) \leq C^*.

The full proof proceeds by contradiction. Assume there exists an optimal solution path p^* with cost C^*, but A* first expands a suboptimal goal node via a path p with cost c(p) > C^*. At the moment that goal is expanded, at least one node n on p^* must still be in the open list (unexpanded). By the lemma, that node satisfies f(n) \leq C^* < c(p) = f(p), where the equality f(p) = c(p) holds because h is zero at the goal. A* selects the node with the minimum f value for expansion, so it would have expanded n before the suboptimal goal, a contradiction. Thus, A* must expand an optimal goal node first among all goal nodes, ensuring the returned path is optimal.

Under the stated assumptions, A* is also complete, meaning it will find a solution if one exists, even in infinite state spaces, because the finite branching factor and edge costs bounded away from zero ensure that f values along any path grow without bound, so the search cannot explore forever without reaching a goal of cost C^*. Ties in f values may lead to multiple expansion orders, but admissibility preserves optimality regardless of tie-breaking rules.

If the heuristic is inadmissible (overestimating in some cases), A* may expand a suboptimal path first; for instance, if an overestimate makes the optimal route look expensive, the search is diverted toward a longer route and can terminate with a non-optimal solution. A common extension, weighted A*, modifies the evaluation to f(n) = g(n) + w \cdot h(n) with w > 1 to reach promising paths faster, trading guaranteed optimality for bounded suboptimality (solutions within a factor w of optimal) and faster search in practice.
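The effect of inadmissibility can be reproduced on a three-node graph. The graph and heuristic values below are invented for illustration, and the `a_star` sketch from the Definition section is reused:

```python
# Graph: S -1-> A -1-> G (optimal cost 2), plus a direct edge S -10-> G.
graph = {"S": [("A", 1), ("G", 10)], "A": [("G", 1)], "G": []}
neighbors = lambda n: graph[n]

h_admissible   = lambda n: {"S": 0, "A": 1, "G": 0}[n]    # h <= h* everywhere
h_inadmissible = lambda n: {"S": 0, "A": 100, "G": 0}[n]  # h(A) > h*(A) = 1

print(a_star("S", "G", neighbors, h_admissible))    # (['S', 'A', 'G'], 2)
print(a_star("S", "G", neighbors, h_inadmissible))  # (['S', 'G'], 10): suboptimal
```

The overestimate at A pushes its f value to 101, so the direct goal entry with f = 10 is popped first and the cheaper route through A is never completed, exactly the failure mode the theorem rules out for admissible heuristics.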
