
Transitive closure

In mathematics, the transitive closure of a binary relation R on a set X is the smallest transitive relation on X that contains R as a subset. This means that for elements a and b in X, a is related to b in the transitive closure if there exists a finite sequence of elements connecting a to b through repeated applications of R. The concept is fundamental in relation theory, where it ensures the relation satisfies the transitivity property: if a R b and b R c, then a relates to c in the closure. For finite sets, the transitive closure can be computed as the union of all positive powers of R, i.e., R^+ = \bigcup_{i=1}^n R^i, where n is the size of the set. In graph theory, the transitive closure of a directed graph corresponds to adding an edge from u to v whenever there is a directed path from u to v, effectively capturing reachability between nodes. Transitive closure finds wide applications across mathematics and computer science, including determining connected components in undirected graphs and computing reachability in networks. In set theory, the transitive closure of a set is the smallest transitive set containing it, which plays a role in defining hereditarily finite sets. In data management, it supports entity resolution by identifying co-referent pairs through transitive relationships, as in merging duplicate records, and enables efficient querying in relational databases for path-based inferences. Algorithms like Warshall's compute it in O(n^3) time for graphs with n vertices, making it practical for many computational tasks.

Fundamentals

Definition and Basic Concepts

A binary relation R on a set X is a subset of the Cartesian product X \times X, consisting of ordered pairs (x, y) where x, y \in X. Such relations form the foundation for studying structural properties in sets, including reflexivity, symmetry, and transitivity. A relation R is reflexive if (x, x) \in R for every x \in X; symmetric if (x, y) \in R implies (y, x) \in R; and transitive if (x, y) \in R and (y, z) \in R imply (x, z) \in R. These properties characterize important classes of relations: a partial order is a reflexive, antisymmetric (where (x, y) \in R and (y, x) \in R imply x = y), and transitive relation, while an equivalence relation is reflexive, symmetric, and transitive. The transitive closure of a relation R on X, denoted R^+, is the smallest transitive relation on X that contains R as a subset. Formally, it is defined as R^+ = \bigcup_{n=1}^\infty R^n, where R^n denotes the n-fold composition of R with itself, and R^1 = R. This construction extends R by including all pairs (x, z) such that there exists a finite sequence of elements connecting x to z through repeated applications of R, thereby capturing all implied transitive connections without adding extraneous pairs. In this way, the transitive closure ensures transitivity while remaining minimal with respect to inclusion among all such relations containing R. Unlike the transitive closure, which augments a relation by adding edges to enforce transitivity, the transitive reduction removes redundant edges from a directed graph while preserving its transitive closure; further details appear in the graph theory sections. Binary relations, including their transitive closures, are often represented as directed graphs, with vertices corresponding to elements of X and edges to pairs in the relation.
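As a concrete illustration of the union-of-powers construction, the following Python sketch computes R^+ for a small finite relation by composing R with itself until the accumulated union stabilizes; the helper names and the example relation are illustrative, not part of any standard library.
def compose(r, s):
    """Relational composition: (x, z) whenever (x, y) is in r and (y, z) is in s."""
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}
def transitive_closure(r):
    """Transitive closure as the union of the positive powers of r."""
    closure = set(r)
    power = set(r)                 # current power R^k, starting with R^1 = R
    while True:
        power = compose(power, r)  # R^(k+1) = R^k composed with R
        if power <= closure:       # no new pairs: the union has stabilized
            return closure
        closure |= power
R = {(1, 2), (2, 3)}               # example relation on X = {1, 2, 3}
print(transitive_closure(R))       # {(1, 2), (2, 3), (1, 3)}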

Examples of Transitive Closure

One common example of transitive closure arises in familial relations. Consider the "is a parent of" relation on a set of individuals in a family tree. This relation connects each person directly to their immediate children. The transitive closure of this relation is the "is an ancestor of" relation, which includes connections across all generations, such that if A is a parent of B and B is a parent of C, then A is an ancestor of C, and so on for longer chains. A simple numerical illustration involves a finite set. Let the set be A = \{1, 2, 3\} and the relation R = \{(1,2), (2,3)\}. The transitive closure R^+ adds the pair (1,3) to account for the chain from 1 to 3 via 2, resulting in R^+ = \{(1,2), (2,3), (1,3)\}. This closure ensures that all indirect connections implied by repeated composition of R are explicitly included. Transitive closure also plays a role in completing partial equivalences. Suppose a relation on a set is already reflexive and symmetric but lacks full transitivity, such as pairs that connect elements in clusters without linking all members of a potential class (e.g., (a,b) and (b,c) present but (a,c) absent). Applying the transitive closure adds the missing pairs like (a,c), yielding full equivalence classes where all elements within a class are mutually related, thus forming a complete equivalence relation. To visualize the computation, consider a step-by-step process for a small relation like the numerical example above on set A = \{1, 2, 3\} with R = \{(1,2), (2,3)\}:
  • Step 1: Start with R. Pairs: (1,2), (2,3). No direct transitivity violations yet.
  • Step 2: Compute compositions. Check for paths of length 2: (1,2) followed by (2,3) implies add (1,3).
  • Step 3: Include new pairs and repeat. Updated set: (1,2), (2,3), (1,3). No further paths of length greater than 1 exist beyond these.
  • Final closure: R^+ = \{(1,2), (2,3), (1,3)\}, now fully transitive.
This iterative addition demonstrates how transitive closure systematically incorporates all reachable pairs without redundancy, as the sketch below shows in code.
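A minimal Python sketch of this saturation process, using the example relation from this section (the function name is illustrative): repeatedly add (a, c) whenever (a, b) and (b, c) are already present, and stop once a pass adds nothing new.
def transitive_closure_saturation(r):
    """Repeatedly add (a, c) whenever (a, b) and (b, c) are present,
    until a pass produces no new pair (a fixed point is reached)."""
    closure = set(r)
    while True:
        new_pairs = {(a, c)
                     for (a, b) in closure
                     for (b2, c) in closure
                     if b == b2} - closure
        if not new_pairs:            # nothing changed: relation is fully transitive
            return closure
        closure |= new_pairs         # include new pairs and repeat
R = {(1, 2), (2, 3)}
print(transitive_closure_saturation(R))   # {(1, 2), (2, 3), (1, 3)}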

Mathematical Properties

Existence and Construction

The transitive closure R^+ of a binary relation R on a set X exists and is uniquely determined as the intersection of all transitive relations on X that contain R. This family of relations is non-empty, since the universal relation X \times X is transitive and contains R. The intersection is itself transitive, as the intersection of any collection of transitive relations preserves transitivity: if (x, y) and (y, z) belong to every such relation, then (x, z) does as well. This intersection yields the smallest transitive relation containing R, in the sense that it is a subset of every transitive relation containing R. An explicit construction of R^+ is given by the formula R^+ = \bigcup_{k=1}^\infty R^k, where R^k is the k-fold relational composition of R with itself, consisting of all pairs (x, y) connected by a chain of exactly k steps under R. This union contains R (as R^1 = R) and is transitive, since if (x, y) \in R^m and (y, z) \in R^n, then (x, z) \in R^{m+n} \subseteq R^+. It is the smallest such relation because any transitive superset S of R contains all powers R^k by induction: R^1 = R \subseteq S, and if R^m \subseteq S then R^{m+1} = R^m \circ R \subseteq S \circ S \subseteq S by transitivity; hence R^+ \subseteq S. The reflexive-transitive closure R^* extends this to include reflexivity and is defined as R^* = \Delta_X \cup R^+, where \Delta_X = \{ (x, x) \mid x \in X \} is the identity relation on X. This yields the smallest reflexive and transitive relation containing R, equivalent to the set of all pairs connected by paths of length at least zero (including trivial self-paths of length zero). It is employed when reflexivity is essential, such as in graph reachability problems where nodes reach themselves or in formal language theory for the Kleene star operation modeling arbitrary repetitions including none. For finite X with |X| = n, R^+ is always computable via the finite union \bigcup_{k=1}^n R^k, as any path of length exceeding n must repeat vertices by the pigeonhole principle and thus reduces to a shorter path already captured in lower powers. On infinite sets, however, R^+ generally requires the full infinite union, which may not terminate in finitely many steps and relies on set-theoretic limits rather than algorithmic enumeration.
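For a finite set, the truncated union up to n powers suffices, and adding the identity relation then gives the reflexive-transitive closure. A small Python sketch, assuming a finite set X, makes both constructions explicit (helper names are illustrative).
def compose(r, s):
    """Relational composition of r and s."""
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}
def closure_by_bounded_powers(r, X):
    """R^+ as the finite union R ∪ R^2 ∪ ... ∪ R^n with n = |X| (pigeonhole bound)."""
    plus, power = set(r), set(r)
    for _ in range(len(X) - 1):       # powers 2 .. n
        power = compose(power, r)
        plus |= power
    return plus
def reflexive_transitive_closure(r, X):
    """R^* = Δ_X ∪ R^+: add the identity pairs to the transitive closure."""
    return closure_by_bounded_powers(r, X) | {(x, x) for x in X}
X = {1, 2, 3}
R = {(1, 2), (2, 3)}
print(closure_by_bounded_powers(R, X))      # {(1, 2), (2, 3), (1, 3)}
print(reflexive_transitive_closure(R, X))   # additionally contains (1, 1), (2, 2), (3, 3)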

Key Properties and Theorems

The transitive closure of a binary relation R, denoted R^+, exhibits several key algebraic properties that underscore its role as a closure operator on the lattice of binary relations. One fundamental property is idempotence: (R^+)^+ = R^+. This holds because R^+ is defined as the smallest transitive relation containing R, and since R^+ is itself transitive, applying the transitive closure again yields no additional pairs, preserving minimality. Another essential property is monotonicity: if R \subseteq S, then R^+ \subseteq S^+. To see this, note that S^+ is a transitive relation containing S, and hence containing R. By the minimality of R^+ as the smallest transitive relation containing R, it follows that R^+ \subseteq S^+. This monotonicity ensures that the transitive closure behaves consistently with the inclusion order on relations. In the context of order theory, the transitive closure preserves or enhances structural properties of partial orders and preorders. If R is a strict partial order (irreflexive and transitive), then R^+ = R, and irreflexivity is preserved: suppose for contradiction that x (R^+) x for some x; this implies a finite chain x = x_0 R x_1 R \cdots R x_n = x with n \geq 1, which by transitivity of R yields x R x, contradicting irreflexivity. Thus, R^+ remains a strict partial order. For preorders (reflexive and transitive relations), the transitive closure R^+ = R yields the preorder itself, but to obtain a partial order, one forms the quotient by the equivalence relation \sim where x \sim y iff x R^+ y and y R^+ x; the induced relation on equivalence classes is antisymmetric, reflexive, and transitive, hence a partial order.
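Idempotence and monotonicity are easy to spot-check experimentally on small random relations; the sketch below does so with a simple saturation-based closure helper (the helpers and the sampling scheme are illustrative, not a proof).
import itertools, random
def compose(r, s):
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}
def tc(r):
    """Transitive closure by saturation."""
    closure = set(r)
    while True:
        new = compose(closure, closure) - closure
        if not new:
            return closure
        closure |= new
X = range(4)
pairs = list(itertools.product(X, X))
random.seed(0)
for _ in range(100):
    R = set(random.sample(pairs, 5))
    S = R | set(random.sample(pairs, 3))   # S is a superset of R
    assert tc(tc(R)) == tc(R)              # idempotence: (R^+)^+ = R^+
    assert tc(R) <= tc(S)                  # monotonicity: R ⊆ S implies R^+ ⊆ S^+
print("idempotence and monotonicity hold on all sampled relations")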

Applications in Graph Theory

Transitive Closure of Directed Graphs

In graph theory, a directed graph G = (V, E) corresponds to a binary relation R \subseteq V \times V defined by R = \{ (u, v) \mid (u, v) \in E \}. The transitive closure of G, often denoted G^+, is the directed graph with the same vertex set V, but with an edge from u to v whenever there exists a directed path from u to v in G of positive length. This construction ensures that G^+ captures all pairwise reachabilities via multi-step connections, making it the smallest transitive relation containing R. The transitive closure can also be represented using the adjacency matrix A of G, where A_{ij} = 1 if there is a directed edge from vertex i to j, and 0 otherwise. The adjacency matrix of G^+ has entries equal to 1 precisely where paths exist between vertices, corresponding to A^+ in the Boolean semiring, where ^+ denotes the Kleene plus operation summing (OR-ing) powers of A to capture all positive-length paths. This matrix formulation highlights how the closure extends the original relation without introducing new vertices. Unlike directed graphs, where the transitive closure preserves directionality and may result in asymmetric reachability, the transitive closure of an undirected graph transforms each connected component into a complete subgraph by adding edges between all pairs of vertices within the component. In directed graphs, cycles play a key role: if a vertex u lies on a directed cycle in G, then G^+ includes a self-loop at u, as the cycle constitutes a positive-length path from u to itself. For the reflexive transitive closure G^*, self-loops are added at every vertex regardless of cycles, incorporating empty paths for reflexivity.
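The Boolean-matrix formulation can be sketched directly in Python by OR-ing together the Boolean powers A, A^2, ..., A^n of the adjacency matrix; the example matrix below is illustrative.
def bool_mat_mult(a, b):
    """Boolean matrix product: entry (i, j) is 1 iff some k has a[i][k] AND b[k][j]."""
    n = len(a)
    return [[int(any(a[i][k] and b[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]
def kleene_plus(a):
    """A^+ = A OR A^2 OR ... OR A^n over the Boolean semiring."""
    n = len(a)
    result = [row[:] for row in a]
    power = [row[:] for row in a]
    for _ in range(n - 1):
        power = bool_mat_mult(power, a)
        result = [[result[i][j] or power[i][j] for j in range(n)] for i in range(n)]
    return result
# Adjacency matrix of a 4-vertex graph with edges 0->1, 1->2, 2->0, 2->3
A = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 0, 0]]
for row in kleene_plus(A):
    print(row)   # vertices 0, 1, 2 lie on a cycle, so they gain self-loops and all reach 3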

Reachability and Path Analysis

In the context of directed graphs, the transitive closure G^+ provides a complete representation of reachability via positive-length paths, where an edge (u, v) exists in G^+ if and only if there is a directed path from u to v in the original graph G. This relation captures all possible indirect connections through intermediate vertices, extending the direct edges of G to include any sequence of edges that links u to v. Unlike approaches focused on shortest paths, which prioritize the minimal number of edges or total weight between vertices, the transitive closure encompasses paths of arbitrary length without distinguishing between them based on distance or efficiency. This makes it particularly useful for qualitative analysis of connectivity rather than quantitative optimization, as it simply affirms the existence of at least one path regardless of its complexity. For undirected graphs or the symmetric part of a directed graph (obtained by treating all edges as bidirectional), the transitive closure yields a cluster graph consisting of complete cliques for each connected component, effectively linking all vertices within the same weakly connected set. In network analysis, this underlying structure helps identify broader connectivity patterns beyond directional constraints. A practical application arises in social networks, where the transitive closure models chains such as "friend of a friend," revealing indirect influences or connections in relationships. It is also used in finding strongly connected components and in program dependence graphs for software analysis. In directed acyclic graphs (DAGs), the transitive reduction serves as the inverse of transitive closure, yielding the sparsest subgraph that preserves the exact same reachability relation by eliminating all redundant edges implied by longer paths. This reduction is unique for DAGs and facilitates compact representations for dependency analysis.
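For a DAG, the transitive reduction mentioned above can be derived from reachability: keep an edge (u, v) only if no other successor w of u can still reach v. A hedged Python sketch of this criterion, assuming the input is acyclic (the example DAG and helper names are illustrative):
def reachable_sets(adj):
    """Positive-length reachability for each vertex, computed by DFS."""
    def dfs(start):
        seen, stack = set(), list(adj[start])
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack.extend(adj[v])
        return seen
    return {u: dfs(u) for u in adj}
def transitive_reduction_dag(adj):
    """Drop (u, v) whenever some other successor w of u already reaches v;
    this characterization of redundant edges is valid (and unique) for DAGs."""
    reach = reachable_sets(adj)
    return {u: [v for v in adj[u]
                if not any(v in reach[w] for w in adj[u] if w != v)]
            for u in adj}
# Example DAG: the edge 0->3 is implied by 0->1->3 and is removed
dag = {0: [1, 2, 3], 1: [3], 2: [3], 3: []}
print(transitive_reduction_dag(dag))   # {0: [1, 2], 1: [3], 2: [3], 3: []}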

Logical and Computational Foundations

Role in Logic and Inference

In proof theory, the transitive closure of the inference relation generated by a set of axioms and rules defines the deductive closure, which comprises all provable theorems of the theory. Specifically, starting from the axioms, repeated applications of inference rules produce new statements, and the transitive closure ensures that all derivable consequences, reached through chains of such inferences, are included in the theory. This process captures the completeness of deduction, where the full set of theorems is the smallest set closed under the given rules. A concrete example arises in propositional logic, where the deductive closure under modus ponens (from A and A \to B, infer B) applied to a set of axioms yields the entire logical theory. For instance, given axioms such as closure assumptions and logical truths, iterative applications of modus ponens generate all tautological consequences, ensuring that the transitive closure of this immediate derivability relation exhausts the provable sentences without redundancy. This mechanism underpins the soundness and completeness theorems for propositional logic, as the closure operation mirrors the exhaustive exploration of proof trees. In the context of inductive definitions, transitive closure provides the fixed-point semantics for logic programming, where definite clause programs define predicates via the least fixed point of an immediate consequence operator. The transitive closure of the base relations (clauses) computes this least fixed point, enabling recursive definitions like reachability in graphs, as the semantics iteratively expands the relation until stability. This approach, foundational to the operational and declarative equivalence in logic programs, ensures that computed answers correspond to the minimal model satisfying the program. Furthermore, transitive closure relates to Kleene algebra through the star operation (*), which denotes the reflexive transitive closure of a relation and satisfies axioms such as a^* = 1 + a \cdot a^* together with induction rules like: if a \cdot x \leq x then a^* \cdot x \leq x, providing an algebraic framework for reasoning about iterative processes in logic and computation. In proof-theoretic settings, this structure models the closure properties of inference chains, allowing equational proofs of derivability in systems like regular languages or relational deductions.
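The least-fixed-point reading can be illustrated with a tiny bottom-up evaluator for the path program: start from the edge facts and reapply the two rules until no new facts appear. A minimal sketch in Python (the edge facts are illustrative):
def immediate_consequence(edges, facts):
    """One application of the immediate consequence operator T_P for:
       path(X,Y) :- edge(X,Y).
       path(X,Y) :- path(X,Z), edge(Z,Y)."""
    derived = set(edges)                                        # first rule
    derived |= {(x, y) for (x, z) in facts                      # second rule
                       for (z2, y) in edges if z == z2}
    return derived
def least_fixed_point(edges):
    facts = set()
    while True:
        nxt = immediate_consequence(edges, facts)
        if nxt == facts:                 # stability: least fixed point reached
            return facts
        facts = nxt
edge_facts = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(least_fixed_point(edge_facts)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]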

Complexity Classes and Decidability

The computation of the transitive closure for a directed graph with n vertices requires O(n^3) time in the worst case when using matrix-based methods over the Boolean semiring. However, the reachability problem (determining whether a path exists between two specified vertices in the graph) is NL-complete, meaning it captures the full power of nondeterministic logarithmic-space computation. Transitive closure queries, such as single-source or all-pairs reachability, thus belong to the complexity class NL, which consists of problems solvable using a nondeterministic Turing machine in O(\log n) space. Savitch's theorem establishes that NL is contained in deterministic space O(\log^2 n), providing an upper bound on the deterministic space required to simulate nondeterministic logspace computations for transitive closure queries; this simulation recursively checks for paths of length up to n by exploring intermediate configurations. The Immerman–Szelepcsényi theorem further shows that NL = coNL, implying that the complement of reachability (non-reachability) is also in NL, achieved via a nondeterministic counting procedure that tallies reachable nodes without explicitly listing paths. For finite structures, such as graphs with a fixed number of vertices, the transitive closure is always decidable, as it can be computed exhaustively using finite resources. In contrast, for infinite structures without additional bounds or restrictions, transitive closure problems become undecidable in expressive logical frameworks, such as certain extensions of monadic second-order logic, where no algorithm can determine closure properties for all inputs. In parallel computation models, the transitive closure can be computed within the complexity class NC², which corresponds to polylogarithmic-depth (here O(\log^2 n)), polynomial-size circuits, achievable through repeated squaring of the Boolean adjacency matrix using logarithmic-depth matrix products.
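Savitch's construction can be sketched as a recursive predicate reach(u, v, k) that asks whether v is reachable from u in at most 2^k steps by guessing a midpoint; the recursion depth is O(log n) and each level stores only a constant number of vertex identifiers, which is where the O(log² n) space bound comes from. A hedged Python illustration of the recursion itself (it demonstrates the idea, not an actual space-bounded machine; the example graph is illustrative):
import math
def savitch_reachable(adj, u, v, k):
    """True iff there is a path from u to v of length at most 2**k.
    Mirrors Savitch's recursion: try every midpoint w and halve the budget."""
    if u == v:
        return True
    if k == 0:
        return v in adj[u]                      # single edge
    return any(savitch_reachable(adj, u, w, k - 1) and
               savitch_reachable(adj, w, v, k - 1)
               for w in adj)                    # guess an intermediate vertex
adj = {0: {1}, 1: {2}, 2: {3}, 3: set()}
n = len(adj)
k = math.ceil(math.log2(n))                     # 2**k >= n covers any simple path
print(savitch_reachable(adj, 0, 3, k))          # True
print(savitch_reachable(adj, 3, 0, k))          # False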

Database and Relational Models

Transitive Closure in Relational Algebra

In the relational model, a relation is formalized as a set of tuples over a schema of attributes, where relations often represent associations such as edges in a graph-like structure. The transitive closure of such a relation R, denoted R^+, is the smallest relation that includes R and is closed under the join operation, effectively extending R recursively to capture all indirect associations through repeated joins. This construction arises naturally in scenarios requiring path traversal, such as determining hierarchical dependencies in data. Standard relational algebra, equipped with core operators like selection (\sigma), projection (\pi), natural join (\bowtie), union (\cup), set difference (-), and Cartesian product (\times), cannot express transitive closure queries. This limitation stems from the algebra's finite composition of non-recursive operators, which prevents the iteration needed to accumulate results across arbitrary depths of dependency. Seminal work demonstrated that no expression in this algebra can uniformly compute the transitive closure of an input relation R, as it would require handling variable-length paths without a looping mechanism. To overcome this, relational algebra is extended with fixpoint semantics, such as the least fixpoint operator applied to recursive expressions involving joins, which defines R^+ as the limit of the sequence R, R \bowtie R, R \bowtie R \bowtie R, \dots until stabilization. These extensions maintain the declarative nature of the algebra while enabling recursive computation, analogous to reachability in directed graphs modeled as relations. Efficient evaluation of transitive closure in these extended algebras often employs seminaive techniques to minimize redundant work during iteration. In seminaive evaluation, the result is built incrementally by joining the base relation only with the delta (newly added tuples) from the previous iteration, rather than rejoining the entire accumulated result, which reduces computational overhead from quadratic to near-linear in the number of new derivations per step. This approach is particularly effective for sparse relations where the closure does not grow dense. A representative application involves querying an employee-supervisor hierarchy stored in a relation EmpSup with attributes EmpID and SupID, where each tuple indicates direct reporting. The transitive closure Subordinates = EmpSup^+ includes all pairs (e, s) such that employee e reports indirectly to supervisor s through any chain of supervision, enabling queries like finding all subordinates of a given manager without enumerating paths manually.
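Seminaive evaluation can be sketched as follows in Python; the EmpSup tuples, the attribute order (EmpID, SupID), and the function name are illustrative. Each round joins only the newly derived delta with the base relation.
def seminaive_closure(base):
    """Seminaive evaluation of the transitive closure of a binary relation:
    each iteration joins only the delta (new tuples) with the base relation."""
    closure = set(base)
    delta = set(base)
    while delta:
        # join delta with base: (e, s) and (s, s') derive (e, s')
        derived = {(e, s2) for (e, s1) in delta for (s1b, s2) in base if s1 == s1b}
        delta = derived - closure        # keep only genuinely new tuples
        closure |= delta
    return closure
# EmpSup(EmpID, SupID): each tuple is a direct reporting edge
emp_sup = {("alice", "bob"), ("bob", "carol"), ("dave", "carol")}
subordinates = seminaive_closure(emp_sup)
print(("alice", "carol") in subordinates)   # True: alice reports indirectly to carol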

Implementation in Query Languages

Transitive closure queries in database systems are commonly implemented using extensions to standard query languages that support recursion. In SQL:1999, recursive common table expressions (CTEs) provide a mechanism to compute transitive closures, such as reachability queries in graphs or hierarchical relationships in relational databases. The syntax employs the WITH RECURSIVE clause, where an anchor member initializes the recursion (e.g., direct edges), and a recursive member joins the CTE with itself to extend paths iteratively until no new results are produced. For example, to compute the transitive closure of a directed graph represented by an edges relation with columns source and target, the following SQL query identifies all reachable pairs:
WITH RECURSIVE paths(source, target) AS (
    SELECT source, target FROM edges  -- Anchor: direct edges
    UNION
    SELECT p1.source, p2.target
    FROM paths p1, edges p2
    WHERE p1.target = p2.source      -- Recursive: extend paths
)
SELECT DISTINCT source, target FROM paths;
This query starts with direct connections and repeatedly appends edges to existing paths, converging to the full transitive closure under set semantics. Implementations in systems such as PostgreSQL and SQL Server enforce termination by detecting when the recursive step yields no new rows, but non-linear recursions (involving self-joins on the CTE) may require careful ordering to avoid incomplete results in some engines. In Datalog, transitive closure is expressed through recursive rules that define an intensional database predicate, typically evaluated bottom-up to compute the least fixed point. A Datalog program for reachability in a graph with an extensional relation edge(X, Y) uses two rules:
path(X, Y) :- edge(X, Y).
path(X, Y) :- path(X, Z), edge(Z, Y).
The first rule seeds direct edges, while the second recursively composes paths, naturally capturing the transitive closure for all pairs. This formulation leverages Datalog's declarative nature, where evaluation iterates until fixpoint, ensuring termination for finite, positive programs even on cyclic graphs, as the monotonic semantics prevent infinite derivations by only adding new facts. However, recursive implementations in both SQL and Datalog face limitations with cyclic data. In SQL recursive CTEs, cycles can trigger infinite loops if the recursion does not naturally terminate (for example, under UNION ALL or when path columns are accumulated), as the query may repeatedly traverse loops; database systems mitigate this with options like MAXRECURSION in SQL Server to cap iterations or the CYCLE clause, supported by systems such as Oracle and PostgreSQL, to detect and exclude cyclic paths. In Datalog, while bottom-up evaluation terminates on finite domains, top-down strategies (e.g., query-subquery) risk non-termination on cycles without stratification or binding patterns, necessitating termination conditions like acyclicity checks or stratified negation for safe evaluation. To optimize bottom-up evaluation in Datalog, particularly for transitive closure queries with specific bindings (e.g., paths from a fixed source), the magic sets technique rewrites the program by introducing auxiliary predicates that propagate binding information, pruning irrelevant computations and reducing the search space. This method, originally proposed for efficient fixed-point computation, transforms the rules to focus only on relevant derivations, significantly improving performance on large graphs without altering the semantics.
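The effect of the magic-sets rewriting for a query with a bound source, say path(a, Y), can be illustrated procedurally: a "magic" set of relevant nodes is derived first, and path facts are then produced only for those nodes. The Python sketch below evaluates this idea bottom-up for the bound-source case; it is a simplified rendering of what the rewritten program computes, not a general rewriting engine, and the edge facts and the constant "a" are illustrative.
def magic_bound_source_paths(edges, source):
    """Bottom-up evaluation specialized to the query path(source, Y):
    derive a 'magic' set of relevant nodes first, then only path facts
    whose first argument is relevant, mimicking the magic-sets rewriting."""
    magic = {source}
    while True:                               # m(Z) :- m(X), edge(X, Z).
        new = {z for (x, z) in edges if x in magic} - magic
        if not new:
            break
        magic |= new
    paths = {(x, y) for (x, y) in edges if x in magic}
    while True:                               # path(X,Y) :- m(X), path(X,Z), edge(Z,Y).
        new = {(x, y) for (x, z) in paths for (z2, y) in edges if z == z2} - paths
        if not new:
            break
        paths |= new
    return {(x, y) for (x, y) in paths if x == source}
edges = {("a", "b"), ("b", "c"), ("x", "y")}  # the x -> y component is irrelevant to the query
print(sorted(magic_bound_source_paths(edges, "a")))   # [('a', 'b'), ('a', 'c')]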

Computational Algorithms

Floyd-Warshall Algorithm

The Floyd–Warshall algorithm, when adapted for computing the transitive closure of a directed graph, determines reachability between all pairs of vertices through dynamic programming on the graph's adjacency matrix. Originally formulated for all-pairs shortest paths, the version for transitive closure replaces arithmetic operations with boolean ones: logical OR for path combination and logical AND for edge existence, effectively operating over the (OR, AND) semiring. Given a directed graph with n vertices, the input is an n \times n boolean adjacency matrix R, where r_{ij} = 1 if there is a direct edge from vertex i to j, and 0 otherwise; the output is the transitive closure matrix R^+, where r^+_{ij} = 1 if there exists any path from i to j, including the trivial path when i = j. The algorithm proceeds with a triple nested loop, systematically considering each vertex k (from 1 to n) as a potential intermediate vertex in paths. For each such k, it updates the reachability entries for all pairs (i, j) by checking if a path exists through k. The pseudocode is as follows:
initialize reach[i][j] = 1 if i == j else r_ij for all i, j
for k = 1 to n
    for i = 1 to n
        for j = 1 to n
            reach[i][j] = reach[i][j] OR (reach[i][k] AND reach[k][j])
This computes the full transitive closure in O(n^3) time, as each of the three loops runs n times, performing constant-time operations per iteration. In matrix terms, each iteration over k corresponds to updating the current reachability matrix D^{(k)} via D^{(k)}_{ij} = D^{(k-1)}_{ij} \lor (D^{(k-1)}_{ik} \land D^{(k-1)}_{kj}), where D^{(0)} is the initial adjacency matrix augmented with the identity matrix for reflexivity; iteratively applying this yields the transitive closure after n steps. Correctness follows from dynamic programming principles, analogous to the shortest-path case but for path existence. By induction on k, after the k-th iteration, reach[i][j] = 1 exactly when there is a path from i to j using only the first k vertices as intermediates (besides i and j themselves), including the trivial path when i = j. The base case (k=0) captures direct edges and self-reachability. For the inductive step, assume it holds for k-1; the update for k preserves existing paths and adds those routing through k (via paths to/from k using prior intermediates), ensuring no paths are missed or falsely added due to the monotonicity of OR-AND operations. When k=n, all possible intermediates are considered, yielding the complete transitive closure. This proof extends Warshall's original theorem on boolean matrix powers. For space efficiency, particularly with large graphs, each row of the reachability matrix can be represented as a bit vector (bitset), packing n bits into \lceil n / w \rceil words of size w bits (e.g., 64 on modern architectures). This reduces overall space from O(n^2) words to O(n^2 / w) words while turning the inner loop over j into fast bitwise OR operations, bringing the practical running time down to O(n^3 / w) through word-level parallelism. Such bit-packing is especially useful for large n, fitting matrices that would otherwise exceed memory limits.
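A compact Python rendering of the boolean Floyd–Warshall update, using arbitrary-precision integers as row bitsets so that the inner loop over j becomes a single bitwise OR (the test on bit k replaces the explicit AND per entry); this is a sketch, and the example graph is illustrative.
def transitive_closure_warshall(n, edges):
    """Boolean Floyd–Warshall with each row stored as an integer bitset.
    Bit j of reach[i] is 1 iff j is reachable from i (self-reachability included)."""
    reach = [(1 << i) for i in range(n)]          # start with the identity (reflexive)
    for (u, v) in edges:
        reach[u] |= (1 << v)                      # direct edges
    for k in range(n):
        mask_k = 1 << k
        for i in range(n):
            if reach[i] & mask_k:                 # i reaches k ...
                reach[i] |= reach[k]              # ... so i reaches everything k reaches
    return reach
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
closure = transitive_closure_warshall(n, edges)
for i in range(n):
    print(i, [j for j in range(n) if closure[i] >> j & 1])
# 0 reaches {0,1,2,3}, 1 reaches {1,2,3}, 2 reaches {2,3}, 3 reaches {3}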

Alternative Algorithms and Optimizations

Graph traversal algorithms, such as breadth-first search (BFS) and depth-first search (DFS), provide an efficient alternative for computing the transitive closure, particularly in sparse directed graphs. These methods determine reachability by initiating a search from each vertex and marking all reachable vertices, effectively building the reachability relation for all pairs. For a graph with n vertices and m edges, the total running time is O(n(n + m)), which is superior to cubic-time methods like Floyd-Warshall when m \ll n^2. This approach excels in practice for sparse structures, as BFS variants can significantly outperform dynamic programming methods on large sparse graphs. Matrix exponentiation offers a theoretically efficient alternative by leveraging fast matrix multiplication in the Boolean semiring. The adjacency matrix A is raised to successive powers using repeated squaring, with entries representing path existence via logical OR and AND operations; the reflexive transitive closure is then the OR of I, A, A^2, \dots, A^{n-1}. Algorithms like Coppersmith-Winograd achieve O(n^{2.376}) time, reducing the exponent below 3, though practical implementations are limited by high constants and are better suited for dense graphs rather than providing a direct improvement for sparse cases. This technique equates transitive closure to repeated Boolean matrix multiplication, enabling optimizations in theoretical models. For directed acyclic graphs (DAGs), optimizations exploit the absence of cycles via topological ordering. A linear-time topological sort orders vertices such that for every edge u \to v, u precedes v; a subsequent single pass processes nodes in reverse topological order, computing each node's reachable set as the union of its immediate successors' sets (see the sketch below). This pass performs O(n + m) set-union operations, significantly faster than general-case algorithms, and is widely adopted in implementations like NetworkX for acyclic structures. Pruning during propagation further minimizes memory usage by avoiding redundant unions. Parallel algorithms enhance scalability by distributing computations, often building on matrix multiplication paradigms. In the PRAM model, high-probability randomized methods compute the transitive closure in O(\log^2 n) time using O(n^3 / \log^2 n) processors, by iteratively multiplying matrices in parallel to determine path existence across all pairs. These are near-optimal for dense graphs and have influenced distributed systems for large-scale reachability queries. Hybrid approaches merge traversal and algebraic techniques to balance efficiency across graph densities. The hybrid algorithm of Agrawal, Borgida, and Jagadish combines DFS-based exploration with selective successor list joins, reducing join operations compared to pure graph methods and outperforming pure graph-based and matrix-based methods in experimental evaluations on various graph densities. For equivalence closures, where symmetry implies undirected connectivity, union-find structures can be integrated to dynamically union reachable components during traversals, accelerating equivalence class computations in symmetric transitive settings.
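A sketch of the DAG-specific method referenced above, assuming an acyclic input: vertices are processed in reverse topological order so each reachable set is the bitwise union of the successors' sets. Python integers serve as bitsets, the standard-library graphlib module supplies the topological order, and the example DAG is illustrative.
from graphlib import TopologicalSorter
def dag_transitive_closure(adj):
    """Reachable-set computation for a DAG: process vertices in reverse
    topological order, so each vertex unions the bitsets of its successors."""
    # graphlib expects a predecessor map; build one from the successor map adj
    preds = {u: set() for u in adj}
    for u, succs in adj.items():
        for v in succs:
            preds[v].add(u)
    order = list(TopologicalSorter(preds).static_order())  # u before v for every u -> v
    index = {u: i for i, u in enumerate(order)}
    reach = {u: 0 for u in adj}                            # integer bitsets
    for u in reversed(order):                              # successors already processed
        for v in adj[u]:
            reach[u] |= (1 << index[v]) | reach[v]
    return {u: {w for w in adj if reach[u] >> index[w] & 1} for u in adj}
dag = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(dag_transitive_closure(dag))
# {'a': {'b', 'c', 'd'}, 'b': {'d'}, 'c': {'d'}, 'd': set()}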
