
Local consistency

Local consistency refers to a class of properties in constraint satisfaction problems (CSPs) that ensure partial assignments to small subsets of variables—typically of size 1 to 3—can be extended to consistent assignments involving additional variables, allowing for the systematic elimination of inconsistent values from variable domains to simplify the problem. These properties, enforced via polynomial-time algorithms, serve as preprocessing steps or integrated mechanisms in search-based solvers to reduce the cost of finding solutions in CSPs, which model combinatorial search and decision problems across fields like planning, scheduling, and configuration.

Key levels of local consistency form a hierarchy of increasing strength. Node consistency, the simplest form, requires that every value in a variable's domain satisfies all unary constraints involving that variable alone. Arc consistency extends this to binary constraints, mandating that for every constraint between two variables x and y, each value in x's domain has at least one supporting value in y's domain that satisfies the constraint, and vice versa; this can be achieved in O(ed^2) time for a CSP with e constraints and maximum domain size d. Path consistency further generalizes to triples of variables, ensuring that constraints between any two variables are compatible through the third via relational composition. Higher-order k-consistency requires that every consistent assignment to any k-1 variables can be extended to a consistent assignment including one more variable, providing stronger guarantees but at higher computational cost, often exponential in k.

The application of local consistency plays a central role in CSP resolution by preserving all solutions while pruning the search space, often dramatically improving efficiency in algorithms like AC-3 for arc consistency. Extensions, such as generalized arc consistency for constraints of arity greater than two, address non-binary relations with complexities like O(ekd^k), where k is the maximum constraint arity.
In advanced frameworks like junction graphs for constraint-based reasoning, related local notions ensure local tightness in graphical models, enabling scalable inference via message-passing protocols. While full global consistency (solvability) may require higher levels, local methods provide practical approximations that detect early failures and unify techniques across related domains, including relational databases and theorem proving.

Introduction

Definition and Assumptions

In constraint satisfaction problems (CSPs), the standard formulation involves a set of variables X = \{x_1, \dots, x_n\}, where each x_i is associated with a domain D(x_i) of possible values, and a set of constraints C, each defined as a relation over a subset of variables (its scope) specifying the allowable value combinations for those variables. Local consistency refers to a property of partial assignments in a CSP, where an assignment to a subset of variables Y \subseteq X is locally consistent if it assigns values from the respective domains and satisfies all constraints whose scopes are contained within Y. More precisely, it ensures that local subproblems—defined over small subsets of variables and the constraints among them—are solvable and, in certain forms, extensible to larger subsets without violating additional constraints. This concept encompasses various levels, such as node consistency, the simplest form, in which each variable individually satisfies its unary constraints.

Key assumptions underlying local consistency include finite domains for all variables, which ensures tractability in checking and enforcement; constraints of binary or higher arity, allowing representation as relations over two or more variables; and, unless otherwise specified, undirected constraint graphs where edges represent symmetric constraints between variables. These assumptions facilitate the modeling of CSPs as constraint networks where local checks can be efficiently propagated. Local consistency emphasizes subproblems over complete global solutions because directly solving the full CSP is often NP-complete, whereas verifying and enforcing consistency on small subsets prunes inconsistent values early, significantly reducing the search space for search algorithms without guaranteeing a global solution.

Importance in Constraint Satisfaction

Local consistency plays a pivotal role in solving constraint satisfaction problems (CSPs) by enabling the early detection and elimination of inconsistent variable assignments, which significantly reduces the search space explored during backtracking search. In CSP solvers, enforcing local consistency prunes the domains of variables by removing values that cannot participate in any complete solution, thereby minimizing the frequency and depth of backtracking failures that occur when incompatible assignments are discovered late in the search process. This pruning effect is particularly valuable in combinatorial problems where exhaustive search would otherwise be computationally prohibitive, as it transforms a potentially exponential search tree into a more manageable one.

One key benefit of local consistency is its ability to identify unsatisfiable CSPs without resorting to a full search; for instance, if enforcing a certain level of consistency results in an empty domain for any variable, the problem is provably inconsistent, avoiding unnecessary exploration. Additionally, local consistency provides an approximation to global consistency, where lower levels like arc consistency offer a tractable way to ensure that partial assignments are extendable to larger subsets of variables, while higher levels strengthen this guarantee at the cost of increased computation. For certain levels, such as arc and path consistency, enforcement can be achieved in polynomial time relative to the problem size, making it feasible as a preprocessing step in solvers.

The origins of local consistency trace back to the 1970s, emerging from early CSP applications in computer vision, notably Waltz's 1975 line-labeling algorithm, which propagated constraints across line junctions in drawings to resolve labeling ambiguities efficiently without exhaustive search.
This approach was formalized and generalized in Montanari's work on networks of constraints for picture processing, which introduced relational representations and path-consistency properties, and further developed by Mackworth in 1977, who defined a hierarchy of consistency levels to systematically reduce local inconsistencies in relational networks. However, trade-offs exist: while higher-order consistencies (e.g., path or k-consistency) provide stronger pruning and better approximation of the set of solutions, they incur substantially higher computational costs, often scaling cubically or worse with the number of variables and domain sizes, necessitating careful selection based on problem characteristics.

Basic Local Consistencies

Node Consistency

Node consistency represents the simplest form of local consistency in constraint satisfaction problems (CSPs), addressing only unary constraints on individual variables. In a CSP, a variable x_i is node-consistent if every value in its domain D(x_i) satisfies all unary constraints applicable to x_i. This ensures that no value is inherently infeasible due to constraints involving only that single variable, thereby shrinking the search space at the outset without considering interactions between variables.

Formally, for every unary constraint C(x_i), the domain D(x_i) is node-consistent if \forall a \in D(x_i), a \in C(x_i), where C(x_i) is viewed as the set of allowed values. This condition guarantees that each potential assignment to x_i alone is valid under its unary restrictions. Enforcing node consistency across all variables in the CSP results in a network that is fully node-consistent, eliminating trivial inconsistencies before more complex checks.

The algorithm for achieving node consistency is straightforward and efficient: for each variable x_i, iterate through its domain and remove any value a that violates a unary constraint C(x_i). This process operates in O(|D| \cdot k) time per variable, where |D| is the domain size and k is the number of unary constraints on x_i, making it computationally inexpensive even for large domains. As a prerequisite for higher consistencies like arc consistency, node consistency establishes a clean foundation by resolving unary issues independently.

A practical example arises in scheduling CSPs, where a unary constraint might require a task to occupy a time slot of at least a certain duration. Node consistency ensures that only valid time slots—those accommodating the task's duration—are retained in the domain for that variable, filtering out incompatible options like overly short intervals. This step simplifies subsequent solving without altering the problem's core structure.
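Under an illustrative encoding (a `domains` dict of value sets and per-variable unary predicates—hypothetical names, not a specific solver's API), the enforcement loop above reduces to filtering each domain:

```python
def enforce_node_consistency(domains, unary_constraints):
    """Remove domain values that violate a unary constraint.

    domains: dict mapping variable -> set of values.
    unary_constraints: dict mapping variable -> predicate(value) -> bool.
    """
    for var, check in unary_constraints.items():
        # Keep only values satisfying the unary constraint on var.
        domains[var] = {v for v in domains[var] if check(v)}
    return domains

# Scheduling example: keep only slots long enough for a 2-hour task.
slots = {"task": {(9, 10), (9, 11), (13, 16)}}   # (start, end) pairs
checks = {"task": lambda s: s[1] - s[0] >= 2}
enforce_node_consistency(slots, checks)          # (9, 10) is pruned
```

The filtering is purely local per variable, which is why its cost is linear in the total size of the domains.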

Arc Consistency

Arc consistency is a local consistency property in constraint satisfaction problems (CSPs) that ensures, for every pair of variables connected by a binary constraint, that each value in the domain of one variable has at least one compatible value in the domain of the other variable. This property focuses on pairwise interactions, removing inconsistent values from domains to reduce the search space without losing solutions.

Formally, in a CSP defined by variables X = \{x_1, \dots, x_n\}, domains D(x_i), and binary constraints C(x_i, x_j), an arc (x_i, x_j) is arc-consistent if for every value a \in D(x_i), there exists a value b \in D(x_j) such that (a, b) \in C(x_i, x_j). The reverse arc (x_j, x_i) must satisfy the symmetric condition. A CSP is arc-consistent if all its arcs are arc-consistent. Enforcing arc consistency involves revising domains through constraint propagation, which removes values lacking support from neighboring variables. This builds on node consistency by extending filtering from unary constraints to binary ones.

Key algorithms for achieving arc consistency include AC-3 and AC-4. AC-3, introduced by Mackworth, uses a queue to process arcs whose domains may have changed, repeatedly revising domains until no further reductions occur or a domain empties (indicating inconsistency). Its time complexity is O(e d^3), where e is the number of arcs and d is the maximum domain size, as each arc may be revised up to O(d) times with O(d^2) work per revision. AC-4 improves on AC-3 by using support counting to avoid redundant revisions, achieving the optimal worst-case complexity of O(e d^2). In AC-4, supports for each value are precomputed and updated incrementally, counting compatible pairs to identify unsupported values efficiently. This makes AC-4 particularly effective for dense constraint graphs, though AC-3 often performs better in practice for sparse networks due to lower overhead.

A classic example is the map-coloring problem, where regions are variables, colors are domain values, and adjacent regions share "not equal" constraints. Arc consistency ensures that for each color assigned to a region, at least one different color remains available for each adjacent region, pruning impossible colorings early.
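A minimal sketch of AC-3 under assumed conventions (binary constraints stored as a dict mapping each directed arc `(x, y)` to a predicate, with both directions present):

```python
from collections import deque

def revise(domains, constraints, x, y):
    """Remove values of x with no support in y; return True if changed."""
    pred = constraints[(x, y)]
    removed = {a for a in domains[x]
               if not any(pred(a, b) for b in domains[y])}
    domains[x] -= removed
    return bool(removed)

def ac3(domains, constraints):
    """Enforce arc consistency; return False on a domain wipe-out."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False            # empty domain: inconsistent
            # Re-examine arcs pointing into x (except the one from y).
            for (u, v) in constraints:
                if v == x and u != y:
                    queue.append((u, v))
    return True
```

In the map-coloring setting, if region A is fixed to red, running `ac3` removes red from every adjacent region's domain.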

Higher-Order Local Consistencies

Path Consistency

Path consistency extends arc consistency by ensuring that consistent assignments to pairs of variables can be extended to a third variable connected via a path of length two in the constraint graph. Introduced by Montanari in 1974, it addresses potential inconsistencies that arc consistency alone might overlook in chains of constraints.

Formally, in a constraint satisfaction problem (CSP) defined by variables X = \{x_i\}, domains D(x_i), and constraints C(x_i, x_j), a pair of variables (x_i, x_j) is path-consistent relative to a third variable x_k if for every pair of values a \in D(x_i) and b \in D(x_j) such that (a, b) \in C(x_i, x_j), there exists a value c \in D(x_k) satisfying (a, c) \in C(x_i, x_k) and (c, b) \in C(x_k, x_j). A CSP is path-consistent if every pair of variables is path-consistent relative to every third variable; Montanari proved that checking all paths of length two suffices for paths of arbitrary length, due to composition properties of binary relations.

Enforcing path consistency typically involves algorithms like PC-1 and PC-2. PC-1, proposed by Montanari, iterates over all triples of variables, revising the binary constraints by removing unsupported value pairs until no changes occur; this brute-force approach is costly, with a commonly cited worst-case time complexity of O(n^5 d^5), where n is the number of variables and d is the maximum domain size. PC-2, developed by Mackworth in 1977, improves efficiency by using a queue to process only potentially inconsistent paths, analogous to AC-3 for arc consistency, and often performs markedly better in practice.

Despite its strengths, path consistency does not guarantee global consistency or solvability of the CSP, particularly in dense constraint graphs where higher-order interactions may introduce inconsistencies undetectable by pairwise checks. For instance, certain four-variable networks can be path-consistent yet lack a complete solution, highlighting the need for stronger consistencies in such problems.
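The revision step at the heart of PC-style algorithms can be sketched as follows, with binary relations represented extensionally as sets of allowed pairs (an assumed encoding, not a specific solver's API); a missing relation is treated as universal:

```python
from itertools import product

def pc_revise(rel, i, j, k, domains):
    """Tighten R_ij: keep (a, b) only if some c in D(k) satisfies
    R_ik(a, c) and R_kj(c, b). Returns True if R_ij changed."""
    def allowed(x, y, a, b):
        r = rel.get((x, y))
        return r is None or (a, b) in r   # None = universal relation
    old = rel.get((i, j))
    if old is None:
        old = set(product(domains[i], domains[j]))
    new = {(a, b) for (a, b) in old
           if any(allowed(i, k, a, c) and allowed(k, j, c, b)
                  for c in domains[k])}
    changed = new != old
    rel[(i, j)] = new
    return changed
```

Iterating `pc_revise` over all variable triples until no relation changes yields PC-1's brute-force fixpoint. On a triangle of "not equal" constraints over two-value domains, a single revision already empties a relation, exposing the inconsistency that arc consistency misses.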

k-Consistency

k-Consistency generalizes local consistency in constraint satisfaction problems (CSPs) by ensuring extendability of partial assignments to larger tuples of variables. A CSP is k-consistent if every partial assignment to any k-1 variables that satisfies all constraints among them can be extended to an assignment of k variables that satisfies all relevant constraints. Formally, for any set of k variables and any consistent assignment to a subset of k-1 variables from that set, there exists a value in the domain of the remaining variable such that the full assignment to the k variables is consistent with all constraints involving them. This property builds on lower-order consistencies, with path consistency corresponding to 3-consistency as a special case.

Achieving strong n-consistency—j-consistency for every j \le n, where n is the total number of variables—equates to global consistency and guarantees that a solution can be constructed backtrack-free if one exists. The computational cost of establishing k-consistency grows rapidly with k, with a time complexity of O(n^k d^k), where n is the number of variables and d is the maximum domain size; this renders it feasible only for small values of k in practice, typically up to 3 or 4.

An illustrative example arises in puzzle solving, such as Sudoku, where enforcing 3-consistency on variables within units (rows, columns, or 3x3 blocks) prunes inconsistent partial assignments early, enhancing propagation without exhaustive search.
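A brute-force checker that transcribes the definition directly (purely illustrative—its cost is exponential, which is precisely why enforced k-consistency is limited to small k; the dict-of-predicates encoding is an assumption):

```python
from itertools import combinations, product

def is_k_consistent(variables, domains, constraints, k):
    """Check k-consistency: every consistent assignment to k-1
    variables extends to any k-th variable.
    constraints: dict mapping a scope tuple -> predicate over values."""
    def consistent(assign):
        # Check all constraints whose scope is fully assigned.
        return all(pred(*[assign[v] for v in scope])
                   for scope, pred in constraints.items()
                   if all(v in assign for v in scope))
    for subset in combinations(variables, k - 1):
        for values in product(*(domains[v] for v in subset)):
            assign = dict(zip(subset, values))
            if not consistent(assign):
                continue                      # only consistent partials matter
            for extra in variables:
                if extra in subset:
                    continue
                if not any(consistent({**assign, extra: val})
                           for val in domains[extra]):
                    return False              # no extension to `extra`
    return True
```

A triangle of "not equal" constraints over two-value domains is 2-consistent (arc-consistent) but not 3-consistent, matching the hierarchy described above.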

Advanced Consistency Notions

Directional Consistency

Directional consistency is a local consistency notion in constraint satisfaction problems (CSPs) that leverages a specific ordering of variables to ensure efficient constraint propagation in a forward direction, from earlier to later variables in the ordering. Given a total ordering d = (x_1, x_2, \dots, x_n) of the variables, a CSP is said to be i-directionally consistent if every consistent partial assignment to the first i variables in the ordering can be extended to a value for the (i+1)-th variable that satisfies all relevant constraints involving those variables. This property guarantees that backtracking is unnecessary when instantiating variables sequentially along the ordering up to i+1, provided the induced width of the constraint graph aligns with the consistency level.

Key variants include directional arc consistency (DAC), which corresponds to 2-directional consistency, and directional path consistency (DPC), which corresponds to 3-directional consistency. DAC enforces that for every ordered arc (x_i \to x_j) with i < j, each value in the domain D(x_i) has at least one supporting value in D(x_j) that satisfies the binary constraint between x_i and x_j. Formally, a CSP is directionally arc-consistent under ordering d if for all variables x, y with x < y in d, the directed arc from x to y is arc-consistent, meaning \forall a \in D(x), \exists b \in D(y) such that (a, b) satisfies the constraint C(x, y). DPC extends this by ensuring consistency over paths of length two, requiring that every consistent pair of values for earlier variables has a supporting value in a later variable connected to both. These variants assume a fixed variable ordering, which determines the induced structure of the constraint graph.

The primary advantage of directional consistency lies in its reduced computational overhead compared to undirected counterparts like full arc or path consistency, as it avoids symmetric enforcement and focuses propagation in one direction, eliminating redundant checks.
For instance, achieving DAC requires O(ed^2) time, where e is the number of arcs and d the domain size, versus O(ed^3) for AC-3-style full arc consistency. This efficiency enables backtrack-free search on tree-structured CSPs (induced width 1) via DAC and on induced-width-2 graphs via DPC, making directional consistency foundational for algorithms like bucket elimination that exploit orderings to solve CSPs in time exponential only in the graph's induced width.
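Because propagation only flows from later variables to earlier ones, a single reverse sweep suffices to establish DAC—a sketch under the same assumed predicate-dict encoding of binary constraints, with each constraint keyed `(x, y)` where x precedes y in the ordering:

```python
def enforce_dac(order, domains, constraints):
    """Directional arc consistency along `order` in one reverse sweep.
    constraints: dict (x, y) -> predicate(a, b), x before y in order.
    Returns False if some domain empties."""
    pos = {v: i for i, v in enumerate(order)}
    for y in reversed(order):                 # later variables first
        for (u, v), pred in constraints.items():
            if v == y and pos[u] < pos[v]:
                # Keep values of the earlier variable with support in y.
                domains[u] = {a for a in domains[u]
                              if any(pred(a, b) for b in domains[v])}
                if not domains[u]:
                    return False
    return True
```

On a tree-structured CSP, assigning variables in the chosen order after this sweep proceeds without backtracking, illustrating the width-1 guarantee above.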

Relational Consistency

Relational consistency is a relation-based form of local consistency in constraint satisfaction problems (CSPs), where the primitive entities are the constraint relations rather than individual variables. Introduced by Dechter and van Beek (1997), it generalizes traditional variable-based consistencies (like k-consistency) to networks with constraints of arbitrary arity. Relational m-consistency requires that for any variable x and any set of m-1 relations R_{S_1}, \dots, R_{S_{m-1}} whose scopes S_i all include x, every consistent partial assignment to the variables A = \bigcup S_i \setminus \{x\} can be extended to a value for x such that the full assignment satisfies all m-1 relations simultaneously. Formally, \varsigma(A) \subseteq \pi_A (\bowtie_{i=1}^{m-1} R_{S_i}), where \varsigma(A) denotes the set of consistent instantiations of A, and \pi_A is the projection onto A. This property unifies constraint processing operations such as projection, join, and variable elimination within a relational algebra framework, making it suitable for non-binary constraints. Enforcement algorithms include relational-consistency (RC_m), a brute-force method using extended m-composition, and adaptive-relational-consistency (ARC_m), which achieves global consistency with time complexity O(n \cdot \exp((n+e) w^*(d))), where n is the number of variables, e the number of constraints, and w^*(d) the induced width under ordering d. A directional variant, DRC_m, propagates consistency forward along an ordering with complexity O(\exp(m w^*(d))). In contrast to variable-centric approaches, which focus on extending assignments to individual variables, relational consistency treats entire relations as units, providing a more natural handling of high-arity constraints and linking local properties to global solvability via the induced width. It complements directional methods by offering an order-independent perspective, though directional extensions such as DRC_m combine both paradigms for efficient solving.

Algorithms for Enforcing Consistency

Constraint Propagation Techniques

Constraint propagation techniques form a cornerstone of solving constraint satisfaction problems (CSPs) by iteratively pruning variable domains to enforce local consistency, thereby reducing the effective search space and detecting inconsistencies early. These methods operate by maintaining a queue of constraints or arcs (variable pairs connected by binary constraints) that require revision; when a domain is reduced, affected arcs are enqueued for processing. Revision involves checking each value in a domain for support in neighboring domains and removing unsupported values, repeating until a fixed point is achieved where no further changes occur. Propagation is typically triggered by events such as variable assignments or domain reductions that violate constraints, enabling dynamic adaptation during search. This queue-based approach, rooted in early consistency algorithms, ensures efficient handling of dependencies without exhaustive enumeration.

A fundamental technique is forward checking, which provides a lightweight form of propagation akin to basic lookahead during backtracking search. After assigning a value to a variable, forward checking immediately prunes domains of all unassigned future variables connected by constraints, removing values lacking support from the current assignment. Introduced by Haralick and Elliott in 1980, this method anticipates deeper search failures by focusing only on forward constraints, avoiding full network revision. It operates with a time complexity of O(ed^2) per assignment in the worst case, where e is the number of constraints and d the maximum domain size, but empirically reduces backtracks significantly on problems with tight constraints. Maintaining arc consistency (MAC) builds on forward checking by fully enforcing arc consistency after each assignment, using optimized revision algorithms like AC-3 to propagate changes across the entire remaining network.
In MAC, arc consistency ensures every value in a domain has a supporting value in each neighboring domain, achieved by queuing and revising all affected arcs until stability. Sabin and Freuder demonstrated that MAC outperforms forward checking by reducing the search tree size, often exploring fewer than 1% of nodes on structured benchmarks like scene labeling problems, due to more aggressive pruning. The overall complexity during search remains polynomial per node but scales with network density, yielding empirical speedups of up to two orders of magnitude in solver performance on random and structured CSPs.

For higher-order consistencies like path consistency, specialized propagators such as PC-3 extend the queue-based paradigm to triples of variables connected by paths of length two. PC-3 revises value pairs by ensuring support through a common intermediary variable, using support lists to track valid combinations and avoid redundant verifications, similar to AC-4's approach for arcs. Developed by Mohr and Henderson in 1986, PC-3 was designed to achieve path consistency with a stated time complexity of O(n^3 d^3), improving over prior algorithms by minimizing constraint checks. This enables pruning of incompatible value pairs, further contracting the search space, though at higher computational cost than arc-level methods.

Recent enhancements include singleton arc consistency (SAC), which strengthens propagation by temporarily reducing each variable to a singleton domain and enforcing arc consistency on the resulting network, pruning original values that lead to inconsistencies. This detects near-inconsistencies invisible to standard arc consistency, as each value is tested for viability in isolation. Prosser's 2000 analysis showed SAC solves all tree-structured CSPs and provides empirical benefits on sparse random problems, reducing search nodes by up to 50% in preprocessing, though it incurs O(n e d^3) time where n is the number of variables.
SAC is particularly effective in modern solvers for identifying global inconsistencies early without full search. These propagation techniques are applied to enforce arc and path consistencies dynamically during CSP solving.
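Forward checking's single-step pruning, described above, can be sketched as follows (the dict-based encoding is an illustrative assumption; `domains` holds only the still-unassigned variables):

```python
def forward_check(assigned_var, value, domains, constraints):
    """After assigning `value` to `assigned_var`, prune each unassigned
    neighbor's domain to values compatible with that assignment.
    constraints: dict (x, y) -> predicate(val_x, val_y).
    Returns False if any neighbor's domain empties (dead end)."""
    for (x, y), pred in constraints.items():
        if x == assigned_var and y in domains:
            # Keep only values of y supported by the new assignment.
            domains[y] = {b for b in domains[y] if pred(value, b)}
            if not domains[y]:
                return False
    return True
```

MAC differs only in what happens next: instead of stopping here, it would feed the reduced domains back into a full arc consistency fixpoint over the remaining network.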

Bucket Elimination

Bucket elimination is an algorithm for enforcing higher-order consistency in constraint satisfaction problems (CSPs) through variable elimination along a specified ordering. It organizes constraints into buckets associated with each variable and processes them sequentially, generating new constraints that summarize the eliminated variables' impacts on the remaining ones. The formal steps of bucket elimination are as follows:
  1. Select a variable ordering d = (X_1, X_2, \dots, X_n), where variables are eliminated from X_n to X_1.
  2. For each variable X_i, form bucket B_i by placing all original constraints (or functions) that involve X_i as their highest-indexed variable according to d.
  3. Process the buckets in reverse order, from i = n down to i = 1: Within B_i, compute the relational join (natural join) of all functions in the bucket, then project out X_i via existential quantification to produce a new function over the remaining variables in the join; add this new function to the buckets of those remaining variables (typically the lowest one).
This procedure yields an equivalent CSP that enforces directional i-consistency relative to the ordering d, ensuring that any solution to the reduced problem can be extended to a solution of the original without backtracking. The time and space complexity of bucket elimination is O(n \cdot d^{w^* + 1}), where n is the number of variables, d is the maximum domain size, and w^* is the induced width of the constraint graph along ordering d. This complexity is exponential in the treewidth of the graph, which quantifies its deviation from tree-structured sparsity. As an illustrative example, consider a chain-structured CSP with variables A_1, A_2, \dots, A_n connected by binary constraints only between consecutive pairs (e.g., A_i and A_{i+1}). Using the elimination ordering from A_n to A_1, bucket elimination processes each bucket sequentially: the bucket for A_n yields a unary function over A_{n-1}, which is then joined and projected in the next step to produce a unary function over A_{n-2}, and so on, effectively propagating consistency through the chain in linear time given the induced width of 1. Bucket elimination generalizes dynamic programming to the CSP framework, enabling exact solution computation by leveraging the induced structure of the problem graph.
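For a chain, the join-and-project step in each bucket degenerates to computing a unary support function over the previous variable, which can be sketched as follows (relations encoded as sets of allowed pairs; the function and parameter names are illustrative):

```python
def eliminate_chain(domains, pair_rels, vars_order):
    """Bucket elimination specialized to a chain A_1 - ... - A_n.
    pair_rels[i] is the set of allowed (a_i, a_{i+1}) value pairs.
    Eliminating from the end, each bucket's join-and-project yields
    a unary function over the preceding variable."""
    n = len(vars_order)
    unary = {v: set(domains[v]) for v in vars_order}
    for i in range(n - 1, 0, -1):            # eliminate A_n down to A_2
        prev, cur = vars_order[i - 1], vars_order[i]
        rel = pair_rels[i - 1]
        # Project out `cur`: keep prev-values with a supported cur-value.
        unary[prev] &= {a for a in domains[prev]
                        if any((a, b) in rel and b in unary[cur]
                               for b in domains[cur])}
    return unary
```

After the sweep, instantiating the chain from A_1 forward using the computed unary functions proceeds backtrack-free, mirroring the directional-consistency guarantee stated above.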

Theoretical Properties

Consistency and Satisfiability

In constraint satisfaction problems (CSPs), enforcing arc consistency is a sound operation that preserves the satisfiability of the original instance, meaning that if the CSP has a solution, the reduced CSP after arc consistency enforcement also has a solution, and vice versa, with the solutions corresponding exactly via projection. This property holds because arc consistency algorithms, such as AC-3, only eliminate values from variable domains that lack support under the relevant binary constraints, without introducing new inconsistencies or removing viable solution tuples.

Higher-order local consistencies exhibit hierarchical implications: a CSP that is k-consistent for k > 1 is also (k-1)-consistent, as the ability to extend partial assignments to k variables presupposes the extendability to fewer variables. However, lower-level consistencies like arc consistency or path consistency do not guarantee global consistency, as they address only local support without ensuring compatibility across the entire constraint network. A classic counterexample is a CSP modeling 2-coloring of a cycle with an odd number of vertices, where variables represent vertices with domains {red, blue} and binary constraints enforce that adjacent vertices have different colors; this instance is arc-consistent, as each value in a domain has a supporting value in neighboring domains, yet it is globally unsatisfiable due to the impossibility of 2-coloring an odd cycle.

Recent theoretical advancements frame local consistency enforcement as a form of reduction between CSP instances, establishing satisfiability-preserving mappings under specific conditions, such as when the reduction preserves the structure of partial solutions in promise-CSP variants. This perspective highlights how local consistencies can transform problems while maintaining solvability properties, enabling more efficient algorithmic mappings.
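The odd-cycle counterexample can be verified mechanically in a few lines (a self-contained check; variable and color names are arbitrary):

```python
from itertools import product

# 2-coloring a triangle (the smallest odd cycle):
# arc-consistent, yet unsatisfiable.
vars_, colors = ["v0", "v1", "v2"], ("red", "blue")
edges = [("v0", "v1"), ("v1", "v2"), ("v2", "v0")]

# Arc consistency check: every value has a differing support on each edge.
ac = all(any(a != b for b in colors)
         for (x, y) in edges for a in colors)

# Exhaustive satisfiability check over all 2^3 assignments.
sat = any(all(s[vars_.index(x)] != s[vars_.index(y)] for x, y in edges)
          for s in product(colors, repeat=3))

print(ac, sat)   # prints: True False
```

With two colors, each value on an arc always has a differing support, so arc consistency removes nothing; yet by pigeonhole, some adjacent pair in the triangle must share a color, so no solution exists.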

Complexity and Solvability

In general, the constraint satisfaction problem (CSP) is NP-complete, reflecting its computational intractability for arbitrary instances. However, subclasses defined by bounded-width constraint graphs admit polynomial-time solvability via local consistency methods, as the width parameter captures the tree-likeness of the structure and enables efficient elimination algorithms. A prominent tractable case involves CSPs whose constraint graphs form trees; in such instances, enforcing directional arc consistency along a suitable ordering guarantees backtrack-free search and thus determines solvability. This result leverages the acyclic nature of trees, where local pairwise checks propagate to resolve the entire problem.

Further advancements characterize broader classes solvable purely by local consistency. Notably, CSPs whose constraint languages cannot simulate counting are solvable by checking bounded-width consistency, as established in the algebraic analysis of bounded width, which ties the absence of counting mechanisms to tractable polymorphisms.

Specific constraint types also yield solvability through targeted consistency enforcement. For the all-different (Alldiff) constraint, which requires distinct values across a set of variables, generalized arc consistency (GAC) propagation efficiently prunes domains and, when applied iteratively, resolves satisfiability in polynomial time for many practical instances. Similarly, for monotonic constraints—where feasible tuples preserve order—arc consistency alone suffices to achieve global consistency due to the constraints' directional properties. Higher levels, such as path consistency, resolve solvability for row-convex constraints, defined by feasible regions forming convex bands in relation matrices; enforcing path consistency on connected row-convex constraints ensures global consistency.

Post-2014 developments extend these ideas to promise-CSPs, where instances are promised to satisfy partial conditions (e.g., near-satisfiability).
Bounded-width local consistency algorithms solve certain promise-CSPs in polynomial time, providing reductions that preserve solution existence under relaxed guarantees. Recent work further explores limit behaviors, showing that local consistency yields approximations of optimal solutions with quality bounds approaching the global optimum as instance size grows, particularly for sparse or structured promise instances. These results highlight local methods' role in approximation, even when exact solvability fails.

Applications

In CSP Solvers

Local consistency plays a crucial role in search algorithms for solving constraint satisfaction problems (CSPs), where it serves as both a preprocessing step and an ongoing mechanism during search. In preprocessing, arc consistency is enforced to prune inconsistent values from variable domains before initiating search, reducing the overall search space by eliminating values that cannot participate in any complete solution. This is typically achieved using algorithms like AC-3, which detect and remove such values efficiently. During search, maintaining arc consistency (MAC) integrates propagation after each variable assignment, ensuring that the partial assignment remains consistent with unassigned variables and further pruning domains to avoid exploring dead-end branches. For non-binary constraints, generalized arc consistency (GAC) extends this by enforcing support for every value in a variable's domain across the entire constraint arity, often implemented via algorithms like GAC2001.

Modern CSP solvers integrate these techniques to enhance performance, with empirical evidence showing substantial reductions in search effort. For instance, the Choco solver employs a propagation engine that maintains arc consistency for binary constraints throughout the solving process, using dedicated propagators to filter domains dynamically. Similarly, OR-Tools' CP-SAT solver leverages constraint propagation, including domain filtering akin to arc consistency, within its lazy clause generation framework to efficiently handle large-scale CSPs. Benchmarks comparing MAC to simpler forward checking demonstrate that MAC can reduce the number of nodes explored by up to several orders of magnitude on structured problems, with studies reporting search trees much smaller than those of forward checking alone, though at the cost of increased time per node.
Advanced local consistency filters, such as max-restricted path consistency (max-RPC), provide stronger pruning than arc consistency while maintaining low computational overhead, making them suitable for integration into solvers. Introduced by Debruyne and Bessière, max-RPC ensures that for every value in a domain, there exists a support that is itself path-consistent through intermediate variables, removing more inconsistent values without the full expense of path consistency. This technique prunes the search space more aggressively, with empirical results indicating better performance on binary CSPs compared to arc consistency alone, and it has been adapted for use during search with algorithms whose time complexity is polynomial in e, n, and d, where e is the number of constraints, n the number of variables, and d the maximum domain size.

In distributed CSPs, local consistency enforcement can be parallelized to exploit multi-processor architectures, enabling concurrent propagation across agents. Research from the 1990s in the Artificial Intelligence journal explored parallel algorithms for achieving arc consistency in constraint networks, demonstrating that distributed enforcement reduces solution times by partitioning the constraint graph and synchronizing local propagations, with techniques like asynchronous updates maintaining global consistency while minimizing communication overhead.

Beyond Constraint Satisfaction

Local consistency techniques extend beyond traditional constraint satisfaction problems (CSPs) into optimization domains, where they enhance bounding mechanisms within branch-and-bound algorithms. In weighted CSPs and cost function networks (CFNs), local consistency filtering, such as soft arc consistency or bounds arc consistency, prunes infeasible values and tightens cost bounds to guide the search toward optimal solutions more efficiently. For instance, cost-based domain filtering integrates local consistency to adjust variable domains based on objective function costs, enabling solvers to handle larger optimization instances by reducing the explored search space. This approach has been pivotal in seminal works on soft constraints, where maintaining arc consistency during branch-and-bound significantly improves performance on problems like resource allocation.

In machine learning, particularly within graphical models and Bayesian networks, local consistency plays a key role in structure learning and inference. For Bayesian networks, locally consistent scoring functions ensure that model selections align with data dependencies, facilitating the identification of conditional independencies in multi-relational datasets. These methods enforce consistency at the local level to propagate probabilistic constraints, improving the accuracy of network structures without exhaustive enumeration. Recent advancements incorporate neural networks into constraint reasoning. Additionally, neural projection techniques embed physical or logical constraints into neural architectures, using local consistency checks to project outputs onto feasible regions, as seen in physical-constraint learning or initialization schemes for deep networks. Applications of local consistency appear in diverse domains like scheduling and bioinformatics.
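A toy branch-and-bound over a weighted CSP illustrates the cost-based pruning idea. The lower bound used here, the accumulated cost plus each unassigned variable's minimum unary cost, is a deliberately crude stand-in for soft arc consistency, and all names and costs are illustrative:

```python
def wcsp_bb(variables, domains, unary, binary):
    """Branch-and-bound minimisation for a tiny weighted CSP.
    unary[x][v]          -- cost of assigning x = v
    binary[(x, y)](a, b) -- cost added once both x and y are assigned"""
    best = [float("inf"), None]         # incumbent cost and assignment

    def extra_cost(assign, x, v):
        # Unary cost of x=v plus binary costs against assigned variables.
        c = unary[x][v]
        for y, w in assign.items():
            if (x, y) in binary:
                c += binary[(x, y)](v, w)
            if (y, x) in binary:
                c += binary[(y, x)](w, v)
        return c

    def search(assign, acc):
        # Lower bound: cost so far + optimistic cost of the remainder.
        lb = acc + sum(min(unary[x].values())
                       for x in variables if x not in assign)
        if lb >= best[0]:
            return                      # prune: cannot beat the incumbent
        if len(assign) == len(variables):
            best[0], best[1] = acc, dict(assign)
            return
        x = next(v for v in variables if v not in assign)
        for v in domains[x]:
            c = extra_cost(assign, x, v)
            assign[x] = v
            search(assign, acc + c)
            del assign[x]

    search({}, 0)
    return best[0], best[1]

# Two variables; equal values cost 3, plus the unary costs below.
doms = {"x": [0, 1], "y": [0, 1]}
un = {"x": {0: 0, 1: 2}, "y": {0: 1, 1: 0}}
bi = {("x", "y"): lambda a, b: 3 if a == b else 0}
wcsp_bb(["x", "y"], doms, un, bi)       # optimum is x=0, y=1 at cost 0
```

Real soft-consistency filters such as existential directional arc consistency compute far tighter bounds by moving costs between constraint scopes, but the bound-then-prune structure is the same.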
In scheduling problems, such as meeting scheduling or tasks akin to airline crew rostering, distributed local consistency reinforcement enhances solver efficiency by propagating constraints across agents to resolve conflicts in time and availability. This technique reduces computational effort in multi-agent environments, making it suitable for real-time adjustments in crew pairings and duty rosters. In bioinformatics, local consistency supports computational protein design and folding subproblems by filtering incompatible amino acid assignments in CFNs, ensuring gap-free enumeration of sequences that satisfy structural constraints. For example, enforcing local consistency in weighted CSPs models the inverse folding problem, optimizing side-chain placements while respecting energy-based constraints derived from folding pathways.

Emerging research since 2023 explores local consistency in promise constraint satisfaction problems (PCSPs), where it serves as a reduction mechanism between CSPs and their promise variants, enabling efficient approximation for decision problems with partial guarantees. These reductions leverage local consistency to transform instances, preserving solvability properties while addressing ambiguities in promise settings, as demonstrated in analyses of solution existence. Such integrations highlight local consistency's role in bridging discrete constraint reasoning with continuous learning paradigms.

Despite these advances, local consistency faces challenges in very large-scale problems, where enforcing higher-order consistencies (e.g., k-consistency for k>2) incurs time and space costs that grow rapidly with domain size and constraint arity. In settings with very large domains or continuous variables, such as bioinformatics sequences or massive scheduling instances, generic filtering algorithms struggle, necessitating specialized consistencies or approximations to mitigate computational overhead.
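The cost blow-up of higher-order consistency is easy to see in a brute-force k-consistency checker, which must enumerate every (k-1)-subset of variables and every tuple of values over it. This is an illustrative sketch rather than a practical algorithm:

```python
from itertools import combinations, product

def is_k_consistent(k, variables, domains, constraints):
    """Check k-consistency by brute force: every consistent assignment
    to k-1 variables must extend consistently to each other variable.
    Work grows like C(n, k-1) * d**(k-1), i.e. exponentially in k."""
    def consistent(assign):
        return all(pred(assign[u], assign[v])
                   for (u, v), pred in constraints.items()
                   if u in assign and v in assign)

    for subset in combinations(variables, k - 1):
        for values in product(*(domains[x] for x in subset)):
            assign = dict(zip(subset, values))
            if not consistent(assign):
                continue
            for z in variables:
                if z in assign:
                    continue
                if not any(consistent({**assign, z: c})
                           for c in domains[z]):
                    return False    # a consistent (k-1)-tuple dead-ends
    return True

# A triangle that must be 2-coloured: 2-consistent (every single value
# extends to any neighbour) yet not 3-consistent (no pair extends).
ne = lambda a, b: a != b
doms = {"x": {0, 1}, "y": {0, 1}, "z": {0, 1}}
cons = {("x", "y"): ne, ("x", "z"): ne, ("y", "z"): ne}
is_k_consistent(2, ["x", "y", "z"], doms, cons)   # True
is_k_consistent(3, ["x", "y", "z"], doms, cons)   # False
```

The triangle example also shows why stronger consistency is worth paying for when it is affordable: 2-consistency leaves the unsolvable instance untouched, while 3-consistency exposes the failure without any search.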
