
Borůvka's algorithm

Borůvka's algorithm is a greedy graph algorithm that computes the minimum spanning tree (MST) of a connected, undirected, weighted graph by iteratively identifying and adding the minimum-weight edges connecting distinct components until all vertices are connected in a single tree. Developed by Otakar Borůvka in 1926, the algorithm was originally motivated by the practical problem of optimizing electrical power distribution networks for the West Moravian Power Company, predating the advent of modern computers and representing the first known solution to the MST problem. The algorithm proceeds in phases: it begins with each vertex as its own component, then in each phase, for every component, selects the minimum-weight edge connecting it to another component, adds these edges, and contracts the components until only one remains; this process guarantees an MST and runs in O(E log V) time, where E is the number of edges and V is the number of vertices. Though rediscovered independently multiple times, by French mathematician Gustave Choquet in 1938, by a Polish team (K. Florek, J. Łukaszewicz, J. Perkal, H. Steinhaus, and S. Zubrzycki) in 1951, and by French engineer Georges Sollin in 1961, Borůvka's original contribution laid foundational groundwork for subsequent MST algorithms like Kruskal's and Prim's, and it remains influential in parallel and distributed computing due to its inherent parallelism in selecting edges across components. The algorithm's elegance lies in its simplicity and efficiency for dense graphs, and it has applications beyond electrical grids, including network design, clustering in data analysis, and approximating solutions to combinatorial optimization problems.

Overview

Description

Borůvka's algorithm is a greedy algorithm designed to find a minimum spanning tree (MST) of an undirected, connected, weighted graph, or a minimum spanning forest in the case of a disconnected graph. The algorithm selects edges in a way that progressively builds the tree by always choosing the lowest-weight options that connect components without forming cycles, ensuring the final structure spans all vertices with minimal total edge weight. The primary purpose of Borůvka's algorithm is to solve the minimum spanning tree problem, which seeks a subset of edges that connects every vertex in the graph while minimizing the sum of their weights and avoiding cycles. Given an input graph G = (V, E), where V is the set of vertices, E is the set of edges, and each edge e \in E has a weight w(e), the algorithm outputs a spanning tree T \subseteq E (or a spanning forest for disconnected graphs) such that the total weight \sum_{e \in T} w(e) is minimized. Key characteristics of the algorithm include its phased operation, where each phase merges connected components by identifying and adding the cheapest outgoing edge from each component to a different one, reducing the number of components by at least half per phase. It is highly parallelizable, as the selection of minimum outgoing edges for different components can occur independently, making it suitable for concurrent implementations. Additionally, the algorithm performs efficiently on dense graphs, where the number of edges is quadratic in the number of vertices, due to its balanced workload across phases.

History

Borůvka's algorithm was first published in 1926 by the Czech mathematician Otakar Borůvka as a solution to the minimum spanning tree problem, motivated by the practical need to optimize the electrical power grid in the region of Moravia. Borůvka presented his method in two papers: a mathematical treatment titled "O jistém problému minimálním" (On a certain minimal problem) and an applied version titled "Příspěvek k řešení otázky minimálního obdělání Moravské elektrické sítě" (Contribution to the solution of a problem of minimizing the construction of the Moravian electrical network), aimed at engineers involved in network design. These works established the algorithm as the earliest known approach to constructing a minimum spanning tree in a weighted graph, predating later developments by three decades. The algorithm was independently rediscovered multiple times owing to its inherent simplicity and applicability across fields like optimization and clustering. In 1938, the French mathematician Gustave Choquet described a similar procedure in a note analyzing the problem, though without a full proof of correctness; this work was communicated to the Comptes Rendus de l'Académie des Sciences. It reemerged in 1951 in the Polish publication "Taxonomia Wrocławska" by Kazimierz Florek, Jan Łukaszewicz, Julian Perkal, Hugo Steinhaus, and Stefan Zubrzycki, where it was applied to taxonomic clustering problems. In 1961, Georges Sollin independently formulated the method in a manuscript entitled "Problème de l'Arbre Minimum" and presented it at a seminar organized by Claude Berge. These repeated independent discoveries underscore the algorithm's intuitive appeal, as it relies on straightforward local choices without requiring advanced theoretical machinery. As a result, naming conventions vary: it is commonly known as Borůvka's algorithm in recognition of the original publication, but frequently referred to as Sollin's algorithm in contexts involving parallel algorithms, reflecting Sollin's influence on computational implementations. Borůvka's 1926 work predates the more widely recognized algorithms of Joseph Kruskal in 1956 and Robert Prim in 1957, positioning his contribution as foundational in the evolution of graph optimization techniques.

Background Concepts

Minimum Spanning Trees

In a connected, undirected graph with distinct edge weights, a minimum spanning tree (MST) is a subset of the edges that forms a tree connecting all vertices, contains no cycles, and has the minimum possible total weight among all such spanning trees. This structure ensures full connectivity while minimizing cost, making it a core problem in combinatorial optimization. Key properties of MSTs include uniqueness under certain conditions and optimality criteria based on cuts and cycles. If all edge weights in the graph are distinct, the MST is unique, as any deviation would increase the total weight. The cut property states that for any partition of the vertices into two nonempty sets (a cut), the minimum-weight edge crossing the cut belongs to every MST. Complementing this, the cycle property asserts that for any cycle in the graph, the maximum-weight edge on that cycle cannot belong to any MST, ensuring no redundant high-cost connections. For disconnected graphs, the analogous structure is a minimum spanning forest, which consists of an MST for each connected component, collectively minimizing the total weight while spanning all vertices without cycles. MSTs hold significant importance in applications such as network design, where they optimize connectivity at minimal cost in telecommunications, electrical grids, or transportation systems. They also underpin clustering techniques, such as forming k clusters by computing an MST and removing the k-1 heaviest edges, and serve as building blocks in approximation algorithms for harder problems like the traveling salesman problem.
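The clustering use mentioned above can be sketched concretely. The short Python example below is illustrative only (the function and variable names are not from any particular library); it assumes a precomputed MST given as a list of (u, v, weight) tuples and splits it into k clusters by discarding the k-1 heaviest edges.

def mst_clusters(vertices, mst_edges, k):
    """Split a precomputed MST into k clusters by dropping its k-1 heaviest edges."""
    # Keep all but the k-1 heaviest MST edges.
    kept = sorted(mst_edges, key=lambda e: e[2])[:max(len(mst_edges) - (k - 1), 0)]

    # Union-find over the vertices to recover the resulting components.
    parent = {v: v for v in vertices}

    def find(x):  # root lookup with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v, _ in kept:
        parent[find(u)] = find(v)

    clusters = {}
    for v in vertices:
        clusters.setdefault(find(v), []).append(v)
    return list(clusters.values())

With k = 1 the routine keeps every MST edge and returns a single cluster; larger k values split the data along the most expensive connections, which is exactly the heuristic described above.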

Graph Theory Prerequisites

An undirected graph G = (V, E) consists of a set V of vertices and a set E of edges, where each edge is an unordered pair of distinct vertices from V. In a weighted undirected graph, each edge e \in E is assigned a positive real weight w(e) \in \mathbb{R}^+, often representing costs or distances. Connected components in an undirected graph partition the vertex set V into maximal subsets such that every pair of vertices within a subset is connected by a path (a sequence of adjacent edges), but no path exists between vertices in different subsets. These components form induced subgraphs that are themselves connected and cannot be extended further within G. Borůvka's algorithm operates on undirected graphs with positive edge weights and accommodates possibly disconnected inputs, producing a minimum spanning forest consisting of minimum spanning trees for each connected component. Standard notations denote the number of vertices as n = |V| and the number of edges as m = |E|. Graphs are commonly represented using adjacency lists, where each vertex maintains a list of its neighboring vertices (together with edge weights for weighted graphs), or an adjacency matrix, an n \times n array where entry (i,j) indicates the weight of the edge between vertices i and j (or its absence). Adjacency lists are space-efficient for sparse graphs (m \ll n^2), while adjacency matrices facilitate quick edge queries.
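As a concrete illustration of these two representations, the following Python sketch (with hypothetical vertex names and weights) stores the same small weighted graph both as an adjacency list and as an adjacency matrix.

# Adjacency list: each vertex maps to (neighbor, weight) pairs; space O(n + m).
adj_list = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("A", 1.0), ("C", 2.0)],
    "C": [("A", 4.0), ("B", 2.0)],
}

# Adjacency matrix: entry [i][j] holds the weight of edge (i, j), or None if absent;
# space is O(n^2) regardless of the number of edges, but edge lookup is O(1).
index = {"A": 0, "B": 1, "C": 2}
adj_matrix = [
    [None, 1.0, 4.0],
    [1.0, None, 2.0],
    [4.0, 2.0, None],
]

def weight_from_matrix(u, v):
    return adj_matrix[index[u]][index[v]]  # constant-time lookup

def weight_from_list(u, v):
    return next((w for x, w in adj_list[u] if x == v), None)  # O(deg(u)) scan

For Borůvka's algorithm, which repeatedly scans every edge once per phase, either representation (or a flat edge list) works; the choice mainly affects memory use on sparse versus dense inputs.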

The Algorithm

Step-by-Step Procedure

Borůvka's algorithm initializes a spanning forest F by treating each vertex of the input graph G = (V, E) as its own component, yielding |V| components in total. This setup represents the starting point of the forest, where no edges have been added yet. The algorithm then enters an iterative phase that continues as long as there is more than one component in the forest. In each such phase, for every current component C, the minimum-weight edge e is identified that has one endpoint in C and the other in a different component. This search is performed independently for each component, focusing only on outgoing edges to external vertices. Once these minimum-weight edges are found for all components, they are simultaneously added to the spanning forest F. The components connected by each such edge are then unioned together, forming larger components. Because every component contributes at least one outgoing edge, each new component formed in a phase contains at least two of the previous components, so the merging process reduces the number of components by at least a factor of two per phase; it may reduce them more substantially if several components connect in chains of merges. The iteration repeats with the updated components until only a single component remains, at which point F constitutes the minimum spanning tree of G. If the original graph is disconnected, the process terminates with a minimum spanning forest when no further outgoing edges exist between components. In cases of tied minimum weights, ties must be broken consistently (for example, by comparing edge indices), since otherwise the edges selected by different components could together form a cycle; with distinct weights, the selected edges are guaranteed to connect distinct components without creating cycles.

Pseudocode

Borůvka's algorithm can be implemented using a union-find data structure to efficiently track connected components and merge them iteratively. The high-level structure initializes an empty forest F consisting of all vertices and no edges. While the number of components in F exceeds one, for each component, the minimum-weight outgoing edge is selected and added to F, followed by merging the connected components. The following pseudocode provides a detailed implementation, assuming the graph G = (V, E) is undirected and connected with distinct edge weights for simplicity; it uses an array to track safe (minimum outgoing) edges per component and a union-find subroutine for labeling and merging components.
BORUVKA(V, E)
1  F ← (V, ∅)  // Initialize forest with all vertices, no edges
2  count ← ConnectedComponents(F)  // Number of components, initially |V|
3  while count > 1
4      AddAllSafeEdges(E, F, count)
5      count ← ConnectedComponents(F)
6  return F

ADDALLSAFEEDGES(E, F, count)
1  for i ← 1 to count
2      safe[i] ← null  // No safe edge selected yet for component i
3  for each edge uv ∈ E
4      if comp(u) ≠ comp(v)  // u and v in different components
5          if safe[comp(u)] = null or w(uv) < w(safe[comp(u)])
6              safe[comp(u)] ← uv
7          if safe[comp(v)] = null or w(uv) < w(safe[comp(v)])
8              safe[comp(v)] ← uv
9  for i ← 1 to count
10     if safe[i] ≠ null
11         add safe[i] to F  // Union the components implicitly via later ConnectedComponents
Here, ConnectedComponents(F) labels each vertex with its component ID (via union-find with path compression and union by rank for efficiency) and returns the number of components; comp(v) returns the label of vertex v. Tie-breaking is handled by selecting the minimum-weight edge, with the assumption of distinct weights ensuring uniqueness; if weights are not distinct, ties must be broken consistently (for example, by comparing edge indices), since choosing arbitrary minima can allow the selected edges to form a cycle. For implementation, the graph is typically represented using adjacency lists or an edge list to facilitate iteration over all edges in O(|E|) time per phase. The union-find structure supports finding and merging components in nearly constant amortized time per operation. To handle disconnected graphs, the algorithm naturally produces a minimum spanning forest by terminating when no further merges are possible, with each component yielding its own spanning tree.
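The pseudocode above translates fairly directly into an executable form. The Python sketch below is one possible rendering (all names are illustrative); it assumes the graph is supplied as a vertex list and a list of (u, v, weight) tuples, uses union-find with path compression and union by rank, and guards each addition with a cycle check so that arbitrary tie-breaking remains safe.

def boruvka_mst(vertices, edges):
    """Sketch of Boruvka's algorithm; edges is a list of (u, v, weight) tuples."""
    parent = {v: v for v in vertices}
    rank = {v: 0 for v in vertices}

    def find(x):  # union-find root lookup with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):  # union by rank; returns True if a merge actually happened
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        if rank[ra] < rank[rb]:
            ra, rb = rb, ra
        parent[rb] = ra
        if rank[ra] == rank[rb]:
            rank[ra] += 1
        return True

    forest = []
    num_components = len(vertices)
    while num_components > 1:
        # Cheapest outgoing edge per component for this phase.
        cheapest = {}
        for u, v, w in edges:
            cu, cv = find(u), find(v)
            if cu == cv:
                continue  # edge lies inside a component; ignore
            if cu not in cheapest or w < cheapest[cu][2]:
                cheapest[cu] = (u, v, w)
            if cv not in cheapest or w < cheapest[cv][2]:
                cheapest[cv] = (u, v, w)
        if not cheapest:
            break  # disconnected input: the minimum spanning forest is complete
        for u, v, w in cheapest.values():
            if union(u, v):  # skip edges that would close a cycle (e.g., duplicates)
                forest.append((u, v, w))
                num_components -= 1
    return forest

On a connected input the returned list contains |V| - 1 edges forming an MST; on a disconnected input the loop exits once no cross-component edges remain, yielding a minimum spanning forest.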

Analysis

Time and Space Complexity

Borůvka's algorithm exhibits a worst-case time complexity of O(m \log n), where n is the number of vertices and m is the number of edges in the input graph. This bound arises from the algorithm's phase-based structure, in which there are at most O(\log n) phases, as each phase reduces the number of connected components by at least half. In each phase, the algorithm identifies the minimum-weight outgoing edge for every component by scanning all edges, which takes O(m) time, and performs merges using a union-find structure with path compression and union by rank, which operates in nearly linear time overall across all phases. The space complexity of Borůvka's algorithm is O(n + m), primarily due to the storage required for the graph's adjacency lists or edge list representation and the union-find structure, which uses arrays of size proportional to n. Optimizations exist that achieve linear time O(m) for restricted graph classes. For planar graphs, a contraction-based variant of Borůvka's algorithm runs in O(m) time by leveraging the graph's bounded edge density and planarity to efficiently manage contractions and edge cleanups per phase. For broader non-trivial minor-closed graph families, deterministic linear-time MST algorithms build upon Borůvka-style contractions, exploiting the bounded edge density of such families. Additionally, randomized variants, such as the one incorporating Borůvka phases with edge sampling and verification, achieve expected O(m) time for general graphs.
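The phase-counting argument behind the O(m \log n) bound can be written out explicitly; the following short derivation is a sketch, writing c_k for the number of components remaining after phase k.

c_0 = n, \qquad c_{k+1} \le \tfrac{1}{2}\, c_k \quad\Longrightarrow\quad c_k \le \frac{n}{2^k},

so at most \lceil \log_2 n \rceil phases occur before a single component remains. Each phase scans all m edges once and performs O(n) nearly constant-time union-find operations, giving a total of

T(m, n) = O\big((m + n)\log n\big) = O(m \log n) \quad \text{for a connected graph, where } m \ge n - 1.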

Correctness

The correctness of Borůvka's algorithm relies on the greedy choice property of minimum spanning trees (MSTs): in each phase the algorithm selects only safe edges, that is, edges that can be included in some MST without compromising optimality. Specifically, in every iteration, for each component in the current forest, the algorithm identifies the minimum-weight edge leaving that component, which crosses the cut defined by the component and the rest of the graph. By the MST cut property, this edge is safe and belongs to at least one MST of the graph. The proof proceeds by induction on the number of phases. In the base case, the initial forest consists of single-vertex components with no edges, which is trivially a subgraph of any MST. Assume that after k phases, the current forest F_k is contained in some MST T of the graph. In the (k+1)-th phase, the added edges are safe with respect to F_k, so they can replace any conflicting edges in T while preserving or reducing the total weight, ensuring F_{k+1} remains a subgraph of some MST. Since each added edge connects two distinct components without creating cycles, the process maintains a forest that grows toward a spanning tree. For disconnected graphs, the algorithm naturally extends to produce a minimum spanning forest (MSF), as it connects vertices within each connected component of the original graph using the same safe-edge selection, yielding the minimum-weight forest with no cycles. Cycle avoidance is inherent: edges are only added between different components in the current forest, preventing intra-component connections that would form cycles, consistent with the acyclicity of MSTs.
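The exchange step in this induction can be spelled out; the following sketch, applied to the selected edges one at a time, fills in why adding a safe edge e chosen for a component C of F_k keeps the forest inside some MST. Suppose F_k \subseteq T for an MST T and e \notin T, where e is a minimum-weight edge with exactly one endpoint in C. Then T \cup \{e\} contains a unique cycle through e, and this cycle must cross the cut (C, V \setminus C) at a second edge e' \neq e. Every edge of F_k lies entirely within a single component, so e' \notin F_k, and by the choice of e, w(e) \le w(e'). Therefore

T' = (T \setminus \{e'\}) \cup \{e\}, \qquad w(T') = w(T) - w(e') + w(e) \le w(T),

so T' is a spanning tree of weight at most w(T) that contains F_k \cup \{e\}, and the invariant that the forest lies inside some MST is preserved.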

Illustrative Example

Graph Setup

To illustrate Borůvka's algorithm, consider the following connected, undirected, weighted graph with seven vertices labeled A, B, C, D, E, F, and G. The edges and their respective weights are:
  • A–B: 1
  • A–D: 4
  • A–G: 7
  • B–C: 2
  • B–E: 5
  • B–G: 8
  • C–D: 3
  • C–F: 6
  • D–E: 2
  • D–G: 5
  • E–F: 4
  • F–G: 1
This configuration is connected, ensuring the graph has a minimum spanning tree. Initially, the algorithm treats each vertex as a separate component, resulting in seven singleton sets: {A}, {B}, {C}, {D}, {E}, {F}, {G}. The graph can be visualized with vertices arranged in a roughly circular layout (A at the top, proceeding clockwise to G), with edges drawn as straight lines labeled by their weights; thicker lines could represent lower weights for emphasis in a diagram. This example is chosen for its simplicity, allowing manual tracing of the algorithm's steps, while its structure necessitates multiple merging phases to fully connect the components, highlighting the iterative process.

Execution Steps

In the first phase of Borůvka's algorithm applied to the example graph, each singleton component selects its minimum-weight outgoing edge:
  • {A} selects A–B (1)
  • {B} selects B–A (1)
  • {C} selects C–B (2)
  • {D} selects D–E (2)
  • {E} selects E–D (2)
  • {F} selects F–G (1)
  • {G} selects G–F (1)
The unique selected edges added to the MST are A–B (1), B–C (2), D–E (2), and F–G (1). These cause merges into three components: {A, B, C}, {D, E}, and {F, G}. In the second phase, the three components each select their minimum-weight outgoing edge to another component:
  • {A, B, C} selects C–D (3) to {D, E}
  • {D, E} selects D–C (3) to {A, B, C}
  • {F, G} selects F–E (4) to {D, E}
Adding C–D (3) and E–F (4) merges all vertices into one component, {A, B, C, D, E, F, G}, completing the minimum spanning tree. The total weight of the MST is 13, formed by the edges A–B (1), B–C (2), C–D (3), D–E (2), E–F (4), and F–G (1).
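As a quick cross-check of this trace, the example can also be run through a library MST routine. The snippet below assumes the optional networkx package is available; recent versions of its minimum_spanning_tree function accept algorithm="boruvka".

import networkx as nx

edges = [
    ("A", "B", 1), ("A", "D", 4), ("A", "G", 7),
    ("B", "C", 2), ("B", "E", 5), ("B", "G", 8),
    ("C", "D", 3), ("C", "F", 6),
    ("D", "E", 2), ("D", "G", 5),
    ("E", "F", 4), ("F", "G", 1),
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

T = nx.minimum_spanning_tree(G, algorithm="boruvka")
total = sum(w for _, _, w in T.edges(data="weight"))
print(sorted(T.edges()), total)  # expected total weight: 13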

Comparisons and Variants

Comparison with Other MST Algorithms

Borůvka's algorithm shares the greedy nature of Kruskal's algorithm, as both construct the minimum spanning tree (MST) by repeatedly selecting the lightest edges that connect disjoint components without forming cycles. However, while Kruskal's algorithm requires sorting all edges globally in ascending order before processing them sequentially, Borůvka's algorithm avoids this by identifying the minimum-weight edge incident to each component locally in each phase, which eliminates the need for a full edge sort upfront. Both algorithms achieve the same asymptotic time complexity of O(m \log n), where m is the number of edges and n is the number of vertices, but Borůvka's phase-based structure makes it more amenable to parallelization, as the minimum edges for different components can be found concurrently. In contrast to Prim's algorithm, which grows a single tree starting from an arbitrary vertex by iteratively adding the minimum-weight edge connecting the growing tree to an unvisited vertex, Borůvka's algorithm simultaneously expands multiple trees, one per component, by selecting the cheapest outgoing edge from each. This multi-source growth leads to faster initial contraction of components compared to Prim's single-focus approach, particularly in distributed or parallel environments. Prim's algorithm, especially when implemented with Fibonacci heaps, runs in O(m + n \log n) time and performs well on dense graphs, making it preferable for scenarios where density is high and sequential efficiency is prioritized over parallelism. A key strength of Borůvka's algorithm lies in its simplicity and inherent parallelism, requiring no global data structures such as a priority queue over all edges, which reduces overhead in multi-processor settings and allows for efficient implementation on modern hardware such as GPUs. However, it can be less effective on graphs with skewed weight distributions, where the number of phases may approach \log n because components do not merge evenly, and repeated graph contractions can become computationally expensive if edge counts do not decrease proportionally. Borůvka's algorithm is particularly advantageous in parallel contexts, such as distributed systems, whereas Kruskal's excels on sparse graphs due to its straightforward union-find integration, and Prim's suits dense, sequential applications like network design.

Parallel and Modern Variants

Borůvka's algorithm, also referred to as Sollin's algorithm in parallel computing contexts, lends itself naturally to parallelization because each phase involves independent computations across connected components, where vertices within a component simultaneously identify their minimum-weight outgoing edges. This structure allows for efficient distribution of the minimum-edge finding step across multiple processors, with merging of components handled in subsequent steps. Implementations on shared-memory systems demonstrate significant speedups for sparse graphs, scaling to hundreds of threads with speedups up to 30x while maintaining low overhead. In distributed systems, such as those modeling large-scale networks, the algorithm's phases facilitate message-passing paradigms in which components exchange minimal edge information asynchronously, enabling fault-tolerant computations in large distributed environments. Modern variants build on Borůvka's iterative structure to achieve near-linear time complexities. The randomized algorithm of Karger, Klein, and Tarjan integrates Borůvka phases with edge sampling: contraction steps shrink the graph, and a random sample of the remaining edges, combined with a linear-time verification step, filters out edges that cannot belong to the MST, reducing the problem size geometrically with high probability and yielding an expected O(m) runtime on general graphs. This approach keeps the total expected work linear by relying on sampling and verification rather than sorting. For a deterministic alternative, Chazelle's algorithm employs soft heaps, an approximate priority queue that permits a controlled fraction of corrupted keys, to guide Borůvka-style contractions within a hierarchical decomposition, achieving O(m \alpha(m, n)) time, where \alpha is the inverse Ackermann function, nearly optimal for practical sizes. The soft heap's meld and extract-min operations support efficient linking while correcting errors lazily. Beyond theoretical efficiency, Borůvka's variants find applications in network design, where minimum spanning trees minimize costs, such as in telecommunications infrastructure to optimize cable layouts or in electrical grids for efficient power distribution. In machine learning, the algorithm supports hierarchical clustering by constructing MSTs on similarity graphs, enabling scalable affinity-based methods that reveal data groupings without predefined cluster counts, as seen in large-scale datasets for community detection. For VLSI design, parallel Borůvka implementations optimize power and clock networks by computing low-weight spanning trees amid dense constraints, reducing wire lengths in chip layouts. These uses highlight the algorithm's scalability for big-data graphs, where parallel phases handle billions of edges in distributed environments. Post-2000 developments emphasize hardware acceleration and adaptability to restricted computation models. GPU implementations exploit the algorithm's parallelism by assigning components to thousands of threads for simultaneous minimum-edge searches, achieving up to 50x speedups on graphs with up to 5 million vertices compared to CPU baselines, particularly for MSTs in geometric data. For approximate MST construction in massively parallel computation models, where edges are processed in rounds with limited memory per machine, Borůvka-inspired variants use batched contractions and sampling to maintain (1+\varepsilon)-approximations with O(n^\delta) space per machine, supporting large-scale graph applications.
