
Backward chaining

Backward chaining is an inference method in artificial intelligence and logic that begins with a desired goal or query and works backward through applicable rules to identify the supporting facts or premises needed to achieve it, often using a depth-first search strategy with unification for variables in first-order logic. This goal-driven approach contrasts with forward chaining, which starts from known facts and derives conclusions by applying rules forward; backward chaining is particularly efficient for verifying specific hypotheses, as it avoids exploring irrelevant inferences, though it can risk incomplete searches or loops without proper safeguards like loop detection. The technique gained prominence in the 1970s through its implementation in languages such as Prolog, where it serves as the core mechanism for query resolution, recursively decomposing goals into subgoals until base facts are matched or failure is confirmed. Prolog, developed by Alain Colmerauer and colleagues at the University of Marseille starting in 1972, adopted backward chaining, already present in earlier systems like Planner, enabling logic programming in which programs are expressed as logical rules and facts, with the system handling the inference automatically. In expert systems, backward chaining proved effective for diagnostic tasks requiring focused evidence gathering, as exemplified by MYCIN, an early expert system developed at Stanford University from 1972 to 1976 to identify bacteria causing severe infections and recommend antibiotics. MYCIN's use of backward chaining allowed it to start from potential pathogens as goals and trace back to patient symptoms and lab data via approximately 450 production rules, achieving performance comparable to human experts in controlled evaluations and influencing subsequent systems like EMYCIN for broader domains. Despite its successes, backward chaining in such systems can limit flexibility, as it typically does not incorporate unsolicited user input without extensions like meta-rules or hybrid forward-backward strategies. Today, backward chaining remains foundational in knowledge representation and rule-based reasoning, where it supports applications from natural language processing to automated planning, often optimized with memoization (tabling) techniques to mitigate redundancy and improve scalability in large knowledge bases. It also plays a role in neuro-symbolic AI hybrids.

Fundamentals

Definition

Backward chaining is a top-down, goal-driven inference method employed in artificial intelligence and logic programming, wherein reasoning commences from a desired conclusion or hypothesis and proceeds backward to ascertain whether the available knowledge base provides supporting evidence through the identification and verification of subgoals. This approach systematically decomposes the initial goal into constituent subgoals, recursively applying rules in reverse until base facts—known assertions in the knowledge base—are reached or the goal is proven unsupportable. Unlike data-driven methods that generate conclusions from initial facts, backward chaining prioritizes efficiency by exploring only paths relevant to the specified goal, thereby avoiding the derivation of extraneous information. A key mechanism in backward chaining is unification, a process that matches the current goal against the consequents (conclusions) of applicable rules by finding substitutions for variables that make the goal and the rule head logically equivalent. This matching enables the transformation of a goal into one or more subgoals derived from the rule's body, facilitating the backward propagation of the proof. Unification ensures precise alignment between abstract goals and concrete knowledge representations, handling variables to instantiate specific instances during inference. To illustrate, consider a simple rule base for disease diagnosis where the goal is to confirm whether a patient has a specific illness, such as influenza. Backward chaining would begin with the goal "patient has influenza" and apply a rule like "If patient has cough and patient has fever, then patient has influenza," generating subgoals to verify the presence of cough and fever against observed data. If these subgoals are satisfied by known facts (e.g., documented symptoms), the original goal succeeds; otherwise, alternative rules or further subgoals are explored until resolution or failure. This method, as exemplified in early systems like MYCIN for antimicrobial therapy selection, traces backward from potential diagnoses to required symptoms or tests, enabling targeted verification in diagnostic contexts.
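
The goal decomposition just described fits in a few lines of code. The following Python sketch is a minimal propositional backward chainer; the fact set, rule set, and function names are illustrative inventions for the influenza example above, not drawn from any cited system. It proves the goal "flu" by reducing it to the subgoals "cough" and "fever" and checking them against observed facts:

# A minimal propositional backward chainer for the diagnosis example.
facts = {"cough", "fever"}                 # observed patient data
rules = {"flu": [["cough", "fever"]]}      # goal -> list of premise lists

def prove(goal, seen=frozenset()):
    # A goal succeeds if it is a known fact, or if every premise of some
    # rule concluding that goal can itself be proven (recursing backward).
    if goal in facts:
        return True
    if goal in seen:                       # loop guard for cyclic rule sets
        return False
    return any(all(prove(p, seen | {goal}) for p in premises)
               for premises in rules.get(goal, []))

print(prove("flu"))  # True: both subgoals match observed facts

Because the chainer only ever examines rules whose conclusion matches the current goal, facts and rules irrelevant to "flu" are never touched, reflecting the efficiency property noted above.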

Key Principles

Backward chaining relies on the principle of resolution, a foundational inference rule that enables the matching of goals against rule heads and the generation of subgoals from rule bodies. In this process, resolution operates by selecting a goal and a clause with complementary literals, unifying them to derive a resolvent that replaces the goal with new subgoals derived from the rule's body. This mechanism ensures soundness and completeness in first-order logic, as it systematically reduces the problem to proving simpler subgoals until base facts are reached or contradictions arise. Subgoal expansion forms the core recursive mechanism of backward chaining, where a complex goal is iteratively decomposed into a conjunction of simpler subgoals based on applicable rules. Starting from an initial goal, the system identifies rules whose heads unify with the goal, then expands the goal into the conjunction of literals in the rule's body, treating each as a new subgoal to prove. This continues depth-first, expanding subgoals until they match known facts in the knowledge base, confirming the original goal, or until no further expansion is possible, refuting it. The process prioritizes efficiency by focusing only on goal-relevant inferences, avoiding the exhaustive generation of irrelevant facts. Handling variables through unification is essential for flexible matching in backward chaining, allowing goals and rules with variables to bind consistently during resolution. Unification finds a substitution that makes two expressions identical, with the most general unifier (MGU) providing the least restrictive substitution to avoid premature commitments. For instance, a goal like Knows(John, x) unifies with a fact Knows(John, Jim) via the substitution {x/Jim}, propagating bindings to subgoals. This occurs recursively, ensuring all variables across goals and rules are consistently bound. Formally, a goal G unifies with a rule head H if there exists a substitution θ such that Gθ = Hθ. The backtracking mechanism addresses failures in subgoal proof by systematically exploring alternative resolution paths. When a subgoal cannot be satisfied—due to no unifying facts or failed sub-subgoals—the system retracts the most recent bindings and rule choice, returning to the previous choice point to try the next applicable rule or subgoal ordering. This depth-first search with chronological backtracking exhaustively covers the finite portions of the inference space, though it may recompute shared subgoals and can loop on deeply nested or cyclic goals without safeguards.
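
To make the unification step concrete, the following Python sketch implements a textbook-style unification routine that returns an MGU as a substitution dictionary. The tuple-based term representation and the capitalized-variable convention are our own choices, and the occurs check is omitted for brevity:

# Sketch of unification returning a most general unifier (MGU).
def is_var(t):
    # Convention: capitalized strings are variables, as in Prolog.
    return isinstance(t, str) and t[:1].isupper()

def resolve(t, s):
    # Apply substitution s to term t.
    if is_var(t) and t in s:
        return resolve(s[t], s)
    if isinstance(t, tuple):
        return tuple(resolve(x, s) for x in t)
    return t

def unify(x, y, s=None):
    # Return the MGU extending s, or None if x and y do not unify.
    s = {} if s is None else s
    x, y = resolve(x, s), resolve(y, s)
    if x == y:
        return s
    if is_var(x):
        return {**s, x: y}
    if is_var(y):
        return {**s, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None

# The text's example: Knows(John, x) unifies with Knows(John, Jim).
print(unify(("knows", "john", "X"), ("knows", "john", "jim")))  # {'X': 'jim'}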

Comparison to Forward Chaining

Similarities

Both forward chaining and backward chaining are inference mechanisms employed in rule-based systems, utilizing if-then rules to derive conclusions from a set of known facts or hypotheses. These rules typically take the form of logical implications, where the antecedent conditions, if satisfied, trigger the consequent outcomes, enabling systematic reasoning within a knowledge base. A fundamental shared principle is their reliance on modus ponens as the core inference rule, which allows the system to affirm the consequent of a rule when the antecedent is established as true. This deductive process underpins reasoning in both strategies, facilitating knowledge representation through structured facts and rules stored in a central knowledge base that both methods access and update during inference. Both approaches are integral to applications requiring logical deduction, such as expert systems for diagnosis or planning. Forward and backward chaining encounter common challenges, including managing incomplete knowledge—where the knowledge base lacks sufficient facts or rules to fully resolve queries—and resolving conflicts arising from multiple applicable rules, often addressed via prioritization strategies like rule ordering or specificity matching. For instance, in the blocks world planning domain, both methods can generate valid sequences to achieve a target configuration, such as stacking specific blocks, though they initiate from different points: forward chaining from initial facts and backward chaining from the goal state.

Differences

Backward chaining and forward chaining represent two fundamental inference strategies in rule-based systems, differing primarily in their starting points and search directions. Backward chaining initiates the reasoning process from a specific goal or hypothesis, working backwards to identify supporting rules or facts that substantiate it, whereas forward chaining begins with an established set of initial facts and applies rules forward to derive potential conclusions or new facts. In terms of efficiency, backward chaining proves more effective for goal-specific queries within large knowledge bases, as it focuses the search on relevant subgoals and avoids deriving extraneous information, making it ideal for scenarios where the objective is narrowly defined. Conversely, forward chaining is better suited for data-driven discovery, where the aim is to exhaustively infer all possible outcomes from available facts, though this can become computationally intensive in expansive rule sets. The direction of search further distinguishes the two: backward chaining employs a top-down, goal-directed approach akin to depth-first search, recursively decomposing goals into prerequisites until base facts are reached or refuted. Forward chaining, by contrast, follows a bottom-up, data-driven strategy resembling breadth-first exploration, iteratively applying all applicable rules to propagate inferences across the knowledge base. Regarding resource usage, backward chaining optimizes computational resources by pruning irrelevant rules early, thus preventing the exploration of unnecessary paths and reducing overall memory demands in targeted reasoning tasks. Forward chaining, however, may incur higher resource costs due to the potential generation of numerous irrelevant or intermediate facts that do not contribute to the ultimate query, particularly in knowledge bases with many rules. An illustrative comparison arises in medical diagnosis, as exemplified by the MYCIN expert system: backward chaining queries for symptoms and evidence to confirm a hypothesized diagnosis, enabling efficient, focused consultations, while forward chaining would propagate from observed symptoms through all possible rules, potentially yielding an exhaustive but less directed set of diagnostic possibilities. Both methods, however, rely on shared mechanisms such as production rules or definite clauses to represent knowledge and perform inferences.
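
The efficiency trade-off described above can be seen directly in code. The following Python sketch uses a toy rule set of our own invention (not from MYCIN) and runs both strategies over the same rules: forward chaining derives every reachable fact, including the irrelevant f, while backward chaining proves the query d without ever examining the rule for f:

# Contrasting forward and backward chaining on the same rule set.
rules = [(["a", "b"], "c"), (["c"], "d"), (["e"], "f")]
facts = {"a", "b", "e"}

def forward(facts, rules):
    # Data-driven: apply every applicable rule until no new facts appear.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

def backward(goal, facts, rules, seen=frozenset()):
    # Goal-driven: only rules concluding the current goal are examined.
    if goal in facts:
        return True
    if goal in seen:
        return False
    return any(all(backward(p, facts, rules, seen | {goal}) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

print(forward(facts, rules))        # {'a','b','c','d','e','f'}: f is extraneous
print(backward("d", facts, rules))  # True, without ever deriving f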

Algorithm and Implementation

Steps in Backward Chaining

Backward chaining operates as a goal-directed mechanism in rule-based systems, beginning with a target goal or hypothesis and working backward to verify it against known facts through a series of rule applications. The process relies on a knowledge base consisting of facts and production rules in the form of "if antecedents then conclusion," where antecedents may themselves be subgoals. To execute backward chaining, the system maintains an agenda of goals to prove, typically implemented as a stack or queue, and employs unification to match goals with rule components, ensuring logical consistency in substitutions for variables. The first step involves selecting the initial goal or hypothesis that the system aims to prove, such as determining whether a specific condition holds based on the query. This goal is placed on the agenda as the starting point for inference. In the second step, the system searches the knowledge base for rules whose conclusion unifies with the current goal, identifying potential rules that could support the goal if their antecedents are satisfied. Unification binds variables in the goal and rule conclusion to achieve a match, allowing for flexible matching in first-order representations. For each matching rule, the third step adds the rule's antecedents as new subgoals to the agenda, expanding the proof tree backward from the original goal. These subgoals represent conditions that must be proven true to establish the parent goal. The fourth step applies the previous steps recursively to each subgoal on the agenda: the system checks if the subgoal matches a known fact in the knowledge base, succeeding immediately if so, or fails and backtracks if no supporting rules exist; otherwise, it generates further subgoals from applicable rules. This recursion continues until base facts are reached or all paths are exhausted. The fifth step handles failure by backtracking to previous choice points, such as alternative rules for an earlier goal, and attempting the next option; the overall process succeeds only if all subgoals for the initial goal resolve to true through this depth-first exploration. To prevent infinite recursion in cyclic knowledge bases, the algorithm incorporates loop detection by tracking previously encountered subgoals and avoiding reprocessing them.

Pseudocode Example

A standard implementation of backward chaining uses a recursive function to determine if a goal can be proven from a knowledge base of definite clauses. The following pseudocode, adapted from the first-order logic backward chaining algorithm in Russell and Norvig (2020), illustrates the core process.
function FOL-BC-ASK(KB, goals, θ) returns a set of substitutions that satisfy the query
    inputs: KB, a knowledge base of first-order definite clauses
            goals, a list of conjuncts forming a query (with θ already applied)
            θ, the current substitution, initially empty {}
    if goals is empty then
        return {θ}
    let q′ = SUBST(θ, FIRST(goals))
    answers ← {}
    for each sentence r in KB do
        let (p₁ ∧ … ∧ pₙ ⇒ q) = STANDARDIZE-APART(r)
        let θ′ = UNIFY(q, q′)
        if θ′ is not null then
            let new_goals = [p₁, …, pₙ | REST(goals)]
            let answers′ = FOL-BC-ASK(KB, new_goals, COMPOSE(θ′, θ))
            answers ← answers ∪ answers′
    return answers
This function returns a set of substitutions if the goals can be satisfied; an empty set indicates failure. Unification handles variable binding (e.g., matching literals with substitutions), and standardizing apart avoids variable clashes between rules. Error handling occurs implicitly: unification failure skips the rule, and recursion terminates on empty goals or exhaustive rule checks, enabling backtracking through alternative rules. For illustration, consider a simple knowledge base in Prolog-style definite clauses, focusing on negation as failure (note that not/1 is an extension beyond pure definite clauses):
  • Facts: bird(tweety). penguin(tweety).
  • Rules: flies(X) :- bird(X), not(penguin(X)).
    wings(X) :- bird(X).
This knowledge base supports queries about flight properties. A trace of execution for the query flies(tweety) proceeds as follows, assuming top-to-bottom clause order:
  1. Unify goal flies(tweety) with rule head flies(X); bind X = tweety, yielding subgoals bird(tweety) and not(penguin(tweety)).
  2. Check bird(tweety): Matches fact bird(tweety), succeeds (no further recursion).
  3. Check not(penguin(tweety)): Unify penguin(tweety) with fact penguin(tweety), which succeeds, so negation fails—backtrack to alternatives (none here, so overall failure).
  4. No other rules match flies(tweety); return empty substitutions (query fails).
If the fact were bird(tweety). without penguin(tweety)., the trace would succeed after verifying both subgoals, returning {X/tweety}. This demonstrates unification success, subgoal recursion, negation handling, and backtracking on failure.
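
For readers who want an executable counterpart to the pseudocode and trace above, the following Python sketch implements a depth-first backward chainer with unification, standardize-apart renaming, and a simple negation-as-failure extension. The representation (tuples for atoms, capitalized strings for variables) and all function names are our own, not Russell and Norvig's:

# A runnable sketch of backward chaining with negation as failure.
from itertools import count

_fresh = count()

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    # Chase variable bindings in substitution s.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # Extend s to unify a and b, or return None (occurs check omitted).
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def rename(t, suffix):
    # Standardize apart: fresh variable names for each rule use.
    if is_var(t):
        return t + suffix
    if isinstance(t, tuple):
        return tuple(rename(x, suffix) for x in t)
    return t

def bc_ask(goals, kb, s):
    # Depth-first backward chaining; yields one substitution per proof.
    if not goals:
        yield s
        return
    goal, rest = goals[0], goals[1:]
    if goal[0] == "not":
        # Negation as failure: succeeds iff the subgoal has no proof
        # (sound only when the subgoal is ground at this point).
        if next(bc_ask([goal[1]], kb, s), None) is None:
            yield from bc_ask(rest, kb, s)
        return
    for head, body in kb:
        suffix = "_%d" % next(_fresh)
        h, b = rename(head, suffix), [rename(g, suffix) for g in body]
        s2 = unify(goal, h, s)
        if s2 is not None:                 # on failure, try the next clause
            yield from bc_ask(b + rest, kb, s2)

kb = [
    (("bird", "tweety"), []),
    (("penguin", "tweety"), []),
    (("flies", "X"), [("bird", "X"), ("not", ("penguin", "X"))]),
    (("wings", "X"), [("bird", "X")]),
]

print(next(bc_ask([("flies", "tweety")], kb, {}), "fail"))  # fail
print(next(bc_ask([("wings", "tweety")], kb, {}), "fail"))  # a substitution

Running the sketch reproduces the trace: flies(tweety) fails because penguin(tweety) is provable, while wings(tweety) succeeds with its rule variable bound to tweety.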

Applications

In Logic Programming

In logic programming, backward chaining forms the core of the execution model in languages like Prolog, where it is implemented through Selective Linear Definite clause resolution (SLD resolution). SLD resolution, introduced by Robert Kowalski in 1974, operates top-down by starting from a goal and recursively reducing it to subgoals until facts are matched or failure occurs, enabling efficient query answering in definite clause logic programs. This mechanism aligns with backward chaining's goal-directed nature, as Prolog's interpreter selects clauses to resolve against the current goal, unifying variables and backtracking on mismatches to explore alternative derivations. A prominent application of backward chaining in Prolog is definite clause grammars (DCGs), which extend the language's clause syntax to define context-free grammars for parsing and generation tasks. DCGs leverage backward chaining by treating non-terminals as goals that expand into sequences of terminals and subgoals, with the resolution process consuming input tokens incrementally through difference lists. This allows natural encoding of parsing rules, where the backward search prunes invalid paths early, making it suitable for natural language processing and syntax analysis. Query evaluation in Prolog exemplifies backward chaining's practical use: consider a program defining family relations with facts like parent(tom, bob). parent(tom, liz). parent(bob, ann). parent(liz, pat). and a rule grandparent(X, Z) :- parent(X, Y), parent(Y, Z).. Querying ?- grandparent(tom, ann). initiates backward chaining by treating grandparent(tom, ann) as the initial goal, which resolves to the body parent(tom, Y), parent(Y, ann) via the rule. The first subgoal parent(tom, Y) succeeds with Y = bob, then parent(bob, ann) succeeds, yielding a solution; backtracking explores Y = liz but fails the second subgoal, confirming only one answer through systematic search. Extensions to basic SLD resolution address limitations in expressiveness and efficiency. Negation as failure, formalized by Keith Clark in 1977, allows treating not(Goal) as true if all attempts to prove Goal via backward chaining fail, enabling closed-world assumptions in programs with incomplete information. The cut operator (!), an extra-logical primitive, prunes the search space during backward chaining by committing to the current clause and preventing backtracking to alternatives, thus improving efficiency in branching rules without altering declarative semantics when used judiciously. Backward chaining's advantages in logic programming stem from the "ask-tell" model, where facts and rules are asserted (told) to the knowledge base, and queries (asks) drive inference via backward chaining, separating logic from control and promoting reusable, logic-based specifications. This query-centric approach ensures that program behavior emerges from logical deduction rather than imperative steps, facilitating modular development.
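
The DCG mechanism can be sketched outside Prolog as well. The following Python illustration (our own construction, not Prolog's actual implementation) mimics how a nonterminal acts as a backward-chaining goal that consumes input through a threaded position, with generators supplying the backtracking; the grammar sentence --> noun, verb and its vocabulary are invented for the example:

# Grammar goals over a token list; positions play the role of difference lists.
def word(expected):
    def goal(tokens, i):
        # Terminal: consume one token if it matches.
        if i < len(tokens) and tokens[i] == expected:
            yield i + 1
    return goal

def seq(*goals):
    def combined(tokens, i):
        # Conjunction: thread the remaining input through each subgoal.
        if not goals:
            yield i
            return
        first, rest = goals[0], goals[1:]
        for j in first(tokens, i):
            yield from seq(*rest)(tokens, j)
    return combined

def alt(*goals):
    def combined(tokens, i):
        # Alternatives: trying each in turn gives backtracking for free.
        for g in goals:
            yield from g(tokens, i)
    return combined

noun = alt(word("dog"), word("cat"))
verb = word("runs")
sentence = seq(noun, verb)        # sentence --> noun, verb.

tokens = ["cat", "runs"]
print(any(j == len(tokens) for j in sentence(tokens, 0)))  # True: parse found

Failed alternatives (here, matching "dog" against "cat") are abandoned early, mirroring how the backward search prunes invalid parse paths.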

In Expert Systems

In expert systems, backward chaining serves as a core mechanism for goal-oriented reasoning, enabling the system to start from a target conclusion or hypothesis and systematically verify it by identifying and gathering required supporting evidence. This approach is particularly effective in domains requiring hypothesis testing, such as medical diagnosis, where the system avoids irrelevant data collection by focusing solely on information pertinent to the goal. A seminal example is the MYCIN system, developed at Stanford University in the 1970s, which employed backward chaining to identify causative bacteria in severe infections like bacteremia and meningitis, beginning with hypotheses about possible organisms (e.g., E. coli) and querying evidence such as symptoms, culture results, and patient history to confirm or eliminate them. The rule structure in backward chaining expert systems like MYCIN incorporates meta-rules for control and confidence factors to manage uncertain reasoning, allowing nuanced evaluation of evidence. Rules are typically framed as IF premise THEN conclusion statements, with meta-rules referencing rule content to prioritize or sequence applications during goal decomposition, thus optimizing the backward search through the knowledge base. Confidence factors (CFs), scaled from -1 (complete disbelief) to +1 (complete belief), quantify the evidential strength of premises and are propagated and combined via formulas during chaining; for instance, MYCIN used a threshold of 0.2 to avoid pursuing low-confidence lines of reasoning, enabling handling of probabilistic data where absolute certainty is rare. User interaction in these systems is driven by the need to resolve missing facts during subgoal pursuit, with backward chaining prompting targeted queries to elicit essential information. In MYCIN, this resulted in 50-60 focused questions per session, posed to physicians for details like fever presence or test results, ensuring efficient evidence gathering but limiting flexibility by rejecting volunteered data to maintain goal-directed focus. A practical illustration of backward chaining in configuration tasks is its application to component selection, where the primary goal of a valid, compatible configuration guides rule selection to determine component needs. In such systems, the process starts from the end-goal (e.g., a system meeting performance specs) and chains backward through rules assessing constraints, such as processor-memory pairings, to specify required parts without exploring unrelated options. Despite its strengths, backward chaining in expert systems faces limitations, notably in deep goal trees, where high branching factors generate exponentially many subgoals, straining computational resources in complex domains. In diagnostics, this contrasts with forward chaining's fact-driven accumulation, which suits data-rich monitoring but may generate extraneous inferences.
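
The interplay of backward chaining and confidence factors can be sketched as follows, using the standard textbook CF-combination formulas (minimum over conjunctive premises, attenuation by the rule CF, and incremental combination of co-concluding rules). The facts, rules, and exact use of the 0.2 threshold here are illustrative, not MYCIN's actual knowledge base:

# Sketch of CF propagation during backward chaining (positive CFs only).
THRESHOLD = 0.2   # lines of reasoning below this CF are not pursued

facts = {"fever": 0.9, "cough": 0.6, "headache": 0.3}

# conclusion -> list of (premises, rule CF)
rules = {"flu": [(["fever", "cough"], 0.8),
                 (["headache"], 0.4)]}

def combine(cf1, cf2):
    # Combine evidence from two rules supporting the same conclusion.
    return cf1 + cf2 * (1 - cf1)

def cf(goal):
    # Backward chain: facts ground the recursion; each rule's premise CF
    # (the minimum of its conjuncts) is attenuated by the rule CF, and
    # the contributions of all rules for the goal are combined.
    if goal in facts:
        return facts[goal]
    total = 0.0
    for premises, rule_cf in rules.get(goal, []):
        premise_cf = min(cf(p) for p in premises)
        if premise_cf >= THRESHOLD:        # skip low-confidence lines
            total = combine(total, premise_cf * rule_cf)
    return total

print(round(cf("flu"), 3))  # 0.542: 0.48 from the first rule combined with 0.12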

History and Developments

Origins

Backward chaining, as a goal-driven inference mechanism, traces its conceptual roots to early methods in automated theorem proving developed in the mid-20th century. Pre-1970s precursors include analytic tableaux methods, introduced by Evert W. Beth in the 1950s as a semantic approach to proof search that systematically explores contradictions by branching from assumptions backward to atomic formulas. These tableaux, refined by Jaakko Hintikka in 1955 through model sets and by Raymond Smullyan in the 1960s with unified rules for first-order logic, emphasized backward reasoning to derive models or proofs, laying groundwork for automated deduction without forward expansion of all possibilities. The technique gained formal traction through J. Alan Robinson's resolution principle, published in 1965, which provided a refutation-complete inference rule for first-order logic by generating resolvents from clauses in a backward manner—starting from the negated goal and searching for the empty clause via unification and resolution. This backward search avoided exhaustive enumeration by focusing on goal-relevant clauses, marking a shift toward efficient machine-oriented logics. Influences from proof theory further underpin backward chaining, deriving from Jacques Herbrand's theorem of 1930, which reduces first-order validity to propositional reasoning over Herbrand universes, enabling backward proof search by instantiating existentials and expanding disjunctions from the goal formula. This theorem supports refutational approaches in automated deduction by allowing systematic backward reduction of proofs via cut-elimination and witnessing substitutions. Early adoption in AI planning appeared in the STRIPS planner, developed by Richard E. Fikes and Nils J. Nilsson in 1971, which employed backward chaining through goal regression to decompose objectives into subgoals by applying operator preconditions in reverse. This method used a resolution theorem prover to regress goals against the world model, searching backward from desired states toward the initial state to identify efficient operator sequences. A pivotal formalization came in Robert A. Kowalski's 1974 work on predicate logic as a programming language, which integrated backward chaining into procedural semantics for logic programs, interpreting implications as recursive subgoal reductions executed top-down from goals to facts. Kowalski's approach separated declarative logic from control, enabling non-deterministic search in systems like early Prolog prototypes while ensuring soundness through unification.

Key Milestones

In 1972, the development of Prolog by Alain Colmerauer and his team at the University of Marseille marked a pivotal advancement in embedding backward chaining within a practical programming language, enabling efficient goal-directed inference through depth-first search on Horn clauses. This implementation transformed theoretical resolution-based reasoning into a tool for natural language processing and automated deduction, with Prolog's backward chaining mechanism systematically reducing goals to subgoals until facts or failures were reached. During the 1970s and 1980s, backward chaining became integral to expert systems, exemplified by MYCIN (completed in 1976), which applied it for diagnostic consultations in infectious diseases by starting from hypotheses and verifying preconditions through rule invocation. EMYCIN, a generalized shell derived from MYCIN in the early 1980s, further propagated backward chaining for domain-independent rule-based reasoning in applications such as pulmonary function analysis and structural engineering. These systems demonstrated backward chaining's efficacy in handling uncertainty and providing explanations, influencing commercial deployments with thousands of rules. In the 1990s, enhancements to backward chaining emerged in abductive reasoning frameworks, extending it beyond deduction to generate explanatory hypotheses by treating observations as goals and seeking minimal rule sets that entail them. Pioneering work, such as Eugene Santos Jr.'s 1992 formulation, modeled abduction as a constrained backward-chaining process over causal rules, optimizing for consistency and minimality in diagnostic and planning tasks. This integration addressed limitations in pure deductive systems, enabling applications in knowledge discovery where multiple potential explanations were ranked. The 2000s saw backward chaining incorporated into semantic web technologies, particularly OWL reasoners that employed it for efficient query answering and consistency checking over ontologies. Systems based on F-logic and backward-chaining hybrids for RDF/OWL, as explored in work by Harold Boley and others, allowed scalable reasoning by deferring rule application until query goals were specified, reducing materialization overhead in large-scale knowledge bases. This approach supported web-scale applications, such as ontology alignment and semantic querying. In the 2010s and up to 2025, hybrid systems combining backward chaining with machine learning have advanced explainable AI, particularly in neuro-symbolic frameworks that leverage symbolic inference for interpretable decision traces alongside neural pattern recognition. For instance, best-first backward-chaining strategies integrated with neural heuristics, as proposed in 2021, guide subgoal selection in logic programs to enhance efficiency and transparency in tasks like ethical reasoning and fault diagnosis. Recent developments include Logical Neural Networks (introduced in 2020), which integrate backward chaining with differentiable logic to enable verifiable explanations in neuro-symbolic AI, including applications in high-stakes domains such as healthcare that demonstrate improved accuracy over pure neural baselines while maintaining logical soundness. As of 2025, surveys highlight continued advancements in logic-oriented fuzzy neural networks combining backward chaining with fuzzy logic for enhanced interpretability in AI systems.
