
Kruskal's algorithm

Kruskal's algorithm is a greedy algorithm that computes a minimum spanning tree (MST) for a connected, undirected graph with weighted edges by selecting a subset of edges that connects all vertices while minimizing the total edge weight. First described by mathematician Joseph B. Kruskal in his 1956 paper "On the Shortest Spanning Subtree of a Graph and the Traveling Salesman Problem," the algorithm provides an efficient solution to the MST problem, which has applications in network design, clustering, and approximation algorithms for problems such as the traveling salesman problem. The algorithm operates by sorting all edges in non-decreasing order of weight and initializing a forest in which each vertex forms its own tree. It then iterates through the sorted edges, adding an edge to the MST if it connects two distinct components (i.e., does not form a cycle), using a union–find (disjoint-set) data structure to track components and detect cycles efficiently. This process continues until all vertices are connected, resulting in a tree with exactly V − 1 edges for a graph with V vertices. Kruskal's algorithm relies on the cut property of MSTs, which states that the minimum-weight edge crossing any cut of the graph belongs to some MST, ensuring that the greedy choice yields an optimal solution. Its time complexity is O(E log E), where E is the number of edges, dominated by the sorting step; since E ≤ V², this is equivalently O(E log V). The union–find operations contribute nearly constant amortized time per operation with path compression and union by rank. Compared to Prim's algorithm, Kruskal's is particularly effective for graphs with fewer edges relative to vertices, as it processes edges globally rather than growing the tree from a starting vertex.

Background

Minimum spanning tree

In an undirected, connected graph with n vertices, a spanning tree is a subgraph that forms a tree and includes all the original vertices, thereby connecting them without any cycles. Such a tree consists of exactly n − 1 edges. A minimum spanning tree (MST) of a connected, edge-weighted undirected graph is a spanning tree whose total edge weight is minimized among all possible spanning trees. In any such graph, an MST is guaranteed to exist and likewise contains exactly n − 1 edges. The MST problem was first addressed by Otakar Borůvka in 1926 for practical network applications, with the modern algorithmic formulations developed in the 1950s by Joseph Kruskal and Robert Prim, alongside renewed attention to Borůvka's earlier work. Minimum spanning trees are essential in optimization problems, particularly for designing efficient networks at minimal cost, such as connecting cities via roads or laying telecommunication cables while minimizing total length or expense. For instance, in transportation planning, an MST determines the lowest-cost network that links all locations without redundant paths.

Union–find data structure

The union–find data structure, also known as the disjoint-set data structure, maintains a collection of disjoint subsets that partition an underlying set of elements, enabling efficient tracking of connectivity among elements. It supports two fundamental operations: find, which returns the representative (typically the root) of the subset containing a specified element, and union, which merges two subsets into a single one. This structure models the components of a graph during processes like building a spanning tree, where elements represent vertices and subsets represent connected components. The find operation traverses a tree from the given element to its root, and path compression enhances efficiency by updating the parent pointers of all nodes along this path to point directly to the root, effectively flattening the tree for subsequent operations. The union operation connects the roots of two trees; to keep trees shallow, union by rank attaches the root of the tree with the smaller rank to the root of the tree with the larger rank, where rank serves as an approximation of tree height (initially 0 for single-node trees, and incremented only when the two ranks are equal). Union by size is an alternative that merges the smaller tree under the root of the larger, tracking actual subtree sizes instead of ranks. These heuristics ensure balanced tree growth. A basic implementation uses a parent array of length n (for n elements), where parent[i] initially equals i (each element starts in its own set), and nodes point to their parents to form trees. An auxiliary rank array (for union by rank) or size array (for union by size) stores balancing information, updated during merges. To perform find(x), iteratively follow parent pointers until reaching a self-pointing root, optionally applying path compression by setting the parent of each node along the path to the root. For union(x, y), first find the roots r_x and r_y; if distinct, link the one with smaller rank (or size) to the other, incrementing rank if needed.
With path compression combined with union by rank or size, the amortized time complexity for a sequence of m operations on n elements is O(m \alpha(n)), where \alpha(n) is the inverse Ackermann function, a very slowly growing function that remains below 5 for all feasible n (effectively constant time per operation). Without these optimizations, operations can degenerate to O(n) in the worst case due to skewed trees. In the context of minimum spanning tree computation, the union–find structure facilitates cycle detection: before adding an edge between vertices u and v, the find operation checks if u and v share the same root; if they do, the edge connects vertices already in the same component and would form a cycle, so it is skipped; otherwise, union merges the components. This ensures the selected edges form a tree without cycles.
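The scheme described above can be sketched in Python; `UnionFind` is a hypothetical class name, and the code follows the array-based parent/rank layout just described:

```python
# A minimal union-find sketch with path compression and union by rank.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own root
        self.rank = [0] * n           # rank approximates tree height

    def find(self, x):
        # Walk up to the root, then flatten the path (path compression).
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False              # same set already: the edge would close a cycle
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx           # ensure rx is the root of higher rank
        self.parent[ry] = rx          # attach the shallower tree under rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1        # equal ranks: the merged tree grew by one level
        return True
```

Returning a boolean from `union` is a convenience for Kruskal's algorithm: `True` means the edge joined two components and belongs in the tree, `False` means it was a cycle edge.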

Algorithm

Steps

Kruskal's algorithm constructs a minimum spanning tree (MST) for an undirected, weighted graph by greedily selecting the smallest edges that do not form cycles. The procedure assumes the graph is connected; if it is disconnected, the algorithm produces a minimum spanning forest consisting of an MST for each connected component. The first step is to sort all edges of the graph in non-decreasing order of their weights. This ordering ensures that the algorithm considers potential MST edges starting from the lightest. Next, initialize a union–find data structure in which each vertex begins in its own separate component, representing n singleton sets for a graph with n vertices. This structure efficiently tracks connected components during edge additions. Then, iterate through the sorted list of edges. For each edge, check whether its endpoints belong to different components using the union–find structure. If they do, add the edge to the MST and merge (union) the two components; if they are already in the same component, skip the edge to avoid creating a cycle. Continue this process until exactly n − 1 edges have been added to the MST or all edges have been processed. At this point, the resulting structure is the MST (or a minimum spanning forest if the graph is disconnected). The algorithm handles graphs with multiple edges of equal weight correctly, though it may produce one of several possible MSTs in such cases, as the choice among equal-weight edges is arbitrary.

Pseudocode

Kruskal's algorithm is presented in pseudocode form below, utilizing a union–find data structure (also known as disjoint-set union or DSU) to efficiently manage connected components and detect cycles. This implementation follows the standard procedure introduced by Kruskal, adapted for modern computational representation. The input G = (V, E) is undirected and connected, with V denoting the set of vertices and E the set of edges, where each edge is represented as a triple (u, v, w) with endpoints u, v \in V and weight w. The output is a minimum spanning tree (the edge set A in the pseudocode): a subset of |V| - 1 edges from E that connects all vertices without cycles and minimizes the total edge weight.
KRUSKAL(G, w)
1 A ← ∅
2 for each vertex u ∈ G.V
3     MAKE-SET(u)
4 sort the edges of G.E into nondecreasing order by weight w
5 for each edge (u, v) ∈ G.E, taken in sorted order
6     if FIND-SET(u) ≠ FIND-SET(v)
7         A ← A ∪ {(u, v)}
8         UNION(u, v)
9 return A
In this pseudocode, lines 1–3 initialize an empty set A to store the edges of the minimum spanning tree and create a separate set for each vertex using the union-find operations MAKE-SET, FIND-SET, and UNION, which track connected components. Line 4 sorts the edges by increasing weight, ensuring lighter edges are considered first to greedily build the tree. The loop in lines 5–8 iterates over the sorted edges; for each edge, it checks if adding it would form a cycle by verifying if its endpoints are in different components via FIND-SET—if not, the edge is added to A and the components are merged with UNION. This pseudocode assumes the graph is provided as an explicit edge list for direct sorting; if the input is an adjacency list representation, the edges must first be collected into a list before sorting.
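The pseudocode translates almost line for line into runnable code. The sketch below (the function name `kruskal` and the edge-list convention are assumptions, not from a particular source) takes an iterable of vertex labels and a list of (u, v, w) triples, using a dict-based union–find with path halving in place of MAKE-SET, FIND-SET, and UNION:

```python
def kruskal(vertices, edges):
    """Return (mst_edges, total_weight) for an undirected weighted graph.

    A minimal sketch of the pseudocode above: `edges` is an iterable of
    (u, v, w) triples with hashable endpoints.
    """
    vertices = list(vertices)
    parent = {v: v for v in vertices}          # MAKE-SET for each vertex

    def find(x):                               # FIND-SET with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for u, v, w in sorted(edges, key=lambda e: e[2]):  # nondecreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                           # endpoints in different trees
            parent[ru] = rv                    # UNION
            mst.append((u, v, w))
            total += w
            if len(mst) == len(vertices) - 1:  # spanning tree is complete
                break
    return mst, total
```

On the four-vertex example graph of the next section, `kruskal("ABCD", [("A","B",1), ("A","D",2), ("B","C",3), ("A","C",4), ("C","D",5)])` selects three edges of total weight 6.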

Illustration

Example

Consider a simple undirected graph with four vertices labeled A, B, C, and D, and five weighted edges: A–B with weight 1, A–D with weight 2, B–C with weight 3, A–C with weight 4, and C–D with weight 5. This graph can be visualized as a nearly complete network in which A connects to B, C, and D, B connects to A and C, and C connects to A, B, and D, forming a connected structure suitable for demonstrating minimum spanning tree construction. To apply Kruskal's algorithm, first sort the edges in non-decreasing order of weight: A–B (1), A–D (2), B–C (3), A–C (4), C–D (5). Initialize the union–find structure with each vertex in its own component: {A}, {B}, {C}, {D}. Process the edges sequentially:
  • Add A–B (weight 1): Vertices A and B are in different components, so union them. New components: {A, B}, {C}, {D}.
  • Add A–D (weight 2): A and D are in different components, so union them. New components: {A, B, D}, {C}.
  • Add B–C (weight 3): B and C are in different components, so union them. New components: {A, B, C, D}.
  • Skip A–C (weight 4): A and C are already in the same component, so the edge would form a cycle.
  • Skip C–D (weight 5): C and D are already in the same component, so the edge would form a cycle.
The resulting minimum spanning tree consists of the edges A–B, A–D, and B–C, with a total weight of 6. This MST connects all vertices without cycles and can be depicted as a tree branching from A to B and D, with B further connected to C.
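The walkthrough above can be replayed mechanically. The sketch below (with the hypothetical helper name `trace_kruskal`) records the component partition after each accepted edge, so each entry should match one of the bullets:

```python
def trace_kruskal(vertices, edges):
    """Replay Kruskal's algorithm, recording the component partition
    after each accepted edge."""
    vertices = list(vertices)
    parent = {v: v for v in vertices}

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    def components():
        # Group vertices by their current root to show the partition.
        groups = {}
        for v in vertices:
            groups.setdefault(find(v), set()).add(v)
        return sorted(map(frozenset, groups.values()), key=min)

    history = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            history.append(((u, v, w), components()))
    return history

# The example graph from above:
edges = [("A", "B", 1), ("A", "D", 2), ("B", "C", 3),
         ("A", "C", 4), ("C", "D", 5)]
```

Running `trace_kruskal("ABCD", edges)` accepts A–B, A–D, and B–C in that order, with partitions {A, B}/{C}/{D}, then {A, B, D}/{C}, then {A, B, C, D}, matching the trace above.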

Analysis

Time complexity

Kruskal's algorithm has an overall time complexity of O(E \log E), where E is the number of edges in the graph, or equivalently O(E \log V) since E \leq V^2 for a graph with V vertices. The dominant operation is sorting the edges by weight, which takes O(E \log E) time using a comparison-based sort such as merge sort or heapsort. The subsequent processing involves E iterations, each performing find operations to check for cycles and a potential union operation to merge components using a union–find data structure. With union by rank and path compression, the amortized time for E such operations is O(E \alpha(V)), where \alpha is the inverse Ackermann function, which grows so slowly that it is effectively constant for all practical input sizes. Thus, the union–find operations contribute negligibly to the total time, leaving sorting as the bottleneck. For sparse graphs where E = O(V), the complexity simplifies to O(V \log V), as the sorting cost dominates and \log E = O(\log V). The best-case and worst-case complexities are asymptotically the same for connected graphs, as the algorithm always sorts all edges and processes each one. If edge weights are integers within a small range (up to W), a counting sort can reduce the sorting time to O(E + W), yielding a pseudopolynomial bound, though the standard analysis assumes comparison-based sorting for arbitrary weights.
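The integer-weight shortcut can be sketched as a bucketing pass; `sort_edges_by_small_weight` is a hypothetical helper that assumes non-negative integer weights bounded by `max_weight`:

```python
def sort_edges_by_small_weight(edges, max_weight):
    """Order edges by integer weight in O(E + W) time via counting sort,
    a drop-in replacement for the O(E log E) comparison sort."""
    buckets = [[] for _ in range(max_weight + 1)]  # one bucket per weight
    for u, v, w in edges:
        buckets[w].append((u, v, w))
    # Concatenate buckets in increasing weight order (stable within a bucket).
    return [e for bucket in buckets for e in bucket]
```

The returned list can be fed directly to the edge loop of Kruskal's algorithm in place of the comparison-sorted list; the rest of the algorithm is unchanged.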

Space complexity

Kruskal's algorithm requires O(V + E) space overall, where V denotes the number of vertices and E the number of edges. This arises from storing the edge list in O(E) space, the union–find structure in O(V) space for the parent and rank arrays, and the output tree in O(V) space for its edges. Sorting the edges typically demands O(E) auxiliary space, though in-place methods can avoid this additional allocation in some implementations. Relative to the input representation, the space usage remains linear in the graph's size, rendering the algorithm memory-efficient for large-scale graphs.

Correctness

Spanning tree property

Kruskal's algorithm constructs a spanning tree for connected undirected graphs and a spanning forest for disconnected ones, ensuring the output connects all vertices without cycles. The acyclicity of the output follows from the algorithm's use of a union–find data structure to add only edges that connect distinct components, preventing the formation of cycles. At every step, an edge is included in the growing forest only if its endpoints belong to different connected components of the current forest; otherwise, it is discarded. This guarantees that no cycle is introduced, as adding an edge within a single component would close a cycle. A key property is that the selected edges form a forest at every iteration of the algorithm. This can be established by induction on the number of edges added. For the base case with zero edges, the structure consists of V isolated vertices, a forest with V components. Assume that after adding i edges, the current edge set S forms a forest with V − i components. The next edge e added connects two distinct components of this forest (verified by the find operation in union–find), yielding a forest with V − i − 1 components and no cycles, preserving the invariant. Regarding connectivity, the algorithm begins with V components and reduces their number by exactly one with each added edge, since each added edge merges two components. For a connected graph, it continues adding edges until V − 1 edges are selected and a single component remains; because the graph is connected, a connecting edge always exists while more than one component remains, so exactly V − 1 edges are added, spanning all vertices. In a disconnected graph with c components, the process yields a minimum spanning forest consisting of c trees, with a total of V − c edges.

Minimality

The minimality of the spanning tree produced by Kruskal's algorithm follows from two fundamental properties of minimum spanning trees (MSTs): the cut property and the cycle property. The cut property states that, given any cut of the graph's vertices into two nonempty subsets S and V \setminus S, the minimum-weight edge crossing the cut belongs to some MST of the graph. To see why, suppose T is an MST and e is the lightest edge crossing the cut, but e \notin T. Adding e to T creates a cycle, which must contain another edge f crossing the cut. Replacing f with e yields a spanning tree of weight at most that of T (since w(e) \leq w(f)), contradicting the minimality of T unless w(e) = w(f), in which case T \cup \{e\} \setminus \{f\} is also an MST containing e. Kruskal's algorithm satisfies the cut property at each step, ensuring the selected edges form part of some MST. The edges are considered in non-decreasing order of weight. When an edge e = (u, v) is added, u and v lie in different connected components of the current forest (say, components A and B), defining the cut (A, B). Any lighter edge must have been considered earlier; if such an edge crossed (A, B), it would have connected A and B already, contradicting their separation. Thus, e is the lightest edge crossing the cut, so by the cut property, e belongs to some MST extending the current forest. By induction on the number of edges added, the final tree is an MST. An alternative proof uses contradiction to show the total weight is minimal. Suppose T is the tree produced by Kruskal's algorithm and T' is another spanning tree with strictly smaller total weight. Order the edges by non-decreasing weight and let e be the lowest-weight edge in the symmetric difference T \Delta T' (i.e., the first point of divergence). Since all previous edges match, the endpoints of e lie in different components of the partial forest when e is added to T. In T', however, these endpoints are connected without e, so the unique path in T' between them contains an edge not in T, of weight at least w(e).
Replacing the heaviest such edge f on that path with e yields a spanning tree of weight at most that of T' (and strictly less if w(f) > w(e)), contradicting the minimality of T'. If w(f) = w(e), the exchange can be repeated without increasing the weight, and the assumption of strictly smaller total weight again leads to a contradiction. The cycle property complements this by ensuring no suboptimal edges are included: for any cycle in the graph, the maximum-weight edge on the cycle (if unique) cannot belong to any MST, since removing it from a spanning tree that contains it and reconnecting with a lighter edge of the cycle would reduce the total weight. Kruskal's algorithm inherently respects this by rejecting edges that would form cycles, always selecting the lightest available edge that avoids them, thus maintaining minimality while building a spanning tree. If edge weights are not distinct, multiple MSTs may exist with the same minimum total weight. Kruskal's algorithm still produces one such MST, as ties in weight do not violate the greedy choice property; the specific order among equal-weight edges may vary, but the total weight remains minimal.

Extensions

Parallel implementation

Parallelizing Kruskal's algorithm encounters significant challenges, primarily the sequential nature of sorting all edges by weight, which is the main bottleneck, and the requirement for thread-safe operations on the union-find structure to handle simultaneous edge additions without conflicts. In theoretical models such as the parallel random-access machine (PRAM), the sorting step can be addressed using parallel comparison-based sorting algorithms that achieve O(log E) time with O(E) processors. For the union-find component, concurrent implementations are essential; the seminal work by Shiloach and Vishkin introduced a parallel connectivity algorithm in the CRCW PRAM model, achieving O(log V) time through techniques like pointer jumping to propagate finds efficiently. To mitigate these issues and enhance parallelism, adaptations often integrate elements of Borůvka's algorithm, which naturally decomposes into independent phases in which each component simultaneously identifies its minimum-weight outgoing edge, reducing reliance on global sorting. This hybrid approach allows efficient execution, as demonstrated by Chung and Condon, who developed a Borůvka-based parallel algorithm for shared-memory architectures, achieving near-linear speedup on multiprocessors by distributing edge selections across processors. In the PRAM model, such variants run in O(log V) time per phase and complete in O(log V) phases, providing a theoretical speedup over the sequential O(E log V) bound. Practical implementations leverage these ideas in software libraries and distributed frameworks. The Parallel Boost Graph Library includes distributed-memory minimum spanning tree algorithms based on Dehne and Götz's BSP-model variants of Borůvka's algorithm and hybrids, which scale to thousands of processors for graphs with millions of edges by partitioning the graph and synchronizing supersteps.
In large-scale distributed environments, adaptations of Kruskal's algorithm distribute edge sorting and union-find operations across clusters, as shown in Hadoop-based implementations that handle billion-edge graphs with linear scalability in the number of reducers. For accelerator hardware, GPU-based versions such as ECL-MST unify Kruskal's edge processing with Borůvka phases using atomic operations for union-find, achieving up to 10x speedup over CPU baselines on dense graphs with billions of edges. Recent advancements as of 2025 include concurrent and distributed adaptations of Kruskal's algorithm for dynamic graphs, in which edges are added or deleted over time. For instance, ContraMST maintains the MST incrementally under such updates, while other works propose online concurrent versions using pipeline distribution for streaming inputs.

Comparison with Prim's algorithm

Prim's algorithm constructs a minimum spanning tree (MST) for a connected, undirected graph by starting from an arbitrary vertex and iteratively adding the minimum-weight edge that connects a vertex in the current tree to one outside it, maintaining a priority queue of candidate edges adjacent to the growing subtree. This local, vertex-centric expansion contrasts with Kruskal's global strategy, which sorts all edges by weight and greedily adds the smallest edge that does not form a cycle, using a union-find structure to track connected components. While Kruskal processes the entire edge set independently of vertex selection, Prim builds the tree incrementally from a single vertex, focusing on the frontier of unexplored adjacent edges. In terms of data structures, Kruskal relies on union-find for efficient cycle detection and component merging, whereas Prim uses a priority queue to extract the minimum edge from the set of fringe edges, enabling dynamic updates as the tree expands. Both achieve O(E log V) time with optimized implementations (Kruskal via sorting and nearly constant-time union-find operations, Prim via a binary heap), but Prim can improve to O(E + V log V) with Fibonacci heaps, favoring it in dense graphs where the number of edges E approaches V². Conversely, Kruskal excels in sparse graphs with fewer edges, as the sorting cost O(E log E) remains low relative to Prim's per-vertex operations. Kruskal is simpler to apply to disconnected graphs, naturally yielding a minimum spanning forest without modification, while Prim requires separate invocations on each connected component to achieve the same result, since it assumes connectivity from a single starting vertex. Their correctness stems from shared foundational properties: the greedy choice property, ensuring safe edge additions, and the cut property, which guarantees that the lightest edge across any cut in the graph belongs to some MST, allowing both algorithms to build an optimal solution step by step. Kruskal's algorithm was first described in 1956 by Joseph B.
Kruskal in the Proceedings of the American Mathematical Society, with Prim's formulation following in 1957 by Robert C. Prim in the Bell System Technical Journal. In contemporary contexts such as streaming or distributed computation, where edges arrive incrementally, Kruskal's edge-focused processing offers advantages for adaptation, such as handling unsorted streams with approximate orderings, highlighting trade-offs beyond the classical sparse-versus-dense distinction. Recent works (2024–2025) have extended this to incremental MST algorithms that efficiently handle updates using variants of Kruskal's edge selection.
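For contrast with Kruskal's edge-global processing, here is a minimal sketch of Prim's vertex-centric growth using Python's `heapq` as the priority queue (the function name and (u, v, w) edge-list convention are assumptions for illustration):

```python
import heapq
from collections import defaultdict

def prim(vertices, edges, start=None):
    """Return the total MST weight via Prim's algorithm: grow a tree from
    one vertex, repeatedly taking the lightest edge leaving the tree."""
    adj = defaultdict(list)
    for u, v, w in edges:                 # build an undirected adjacency list
        adj[u].append((w, v))
        adj[v].append((w, u))
    vertices = list(vertices)
    start = vertices[0] if start is None else start
    visited, total = {start}, 0
    frontier = list(adj[start])           # candidate edges leaving the tree
    heapq.heapify(frontier)
    while frontier and len(visited) < len(vertices):
        w, v = heapq.heappop(frontier)    # lightest fringe edge
        if v in visited:
            continue                      # stale entry: both ends now in the tree
        visited.add(v)
        total += w
        for edge in adj[v]:
            if edge[1] not in visited:
                heapq.heappush(frontier, edge)
    return total
```

On the four-vertex example from the Illustration section, Prim's algorithm also yields total weight 6, as expected: both algorithms produce a minimum spanning tree, differing only in how they discover its edges.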
