
Karger's algorithm

Karger's algorithm is a randomized algorithm for finding a minimum cut in a connected, undirected graph with non-negative edge weights. Invented by David Karger, it was first published in 1993. The core procedure involves repeatedly selecting a random edge and contracting its endpoints into a single supernode—merging parallel edges and removing self-loops—until only two supernodes remain; the multiedges between them then define a cut whose size is a candidate for the minimum. In a single run on a graph with n vertices, the algorithm succeeds in finding a specific minimum cut with probability at least 1 / \binom{n}{2}, which is asymptotically Ω(1/n²). To achieve high success probability (e.g., at least 1 - 1/n), the algorithm is typically repeated O(n² log n) independent times, yielding a total running time of O(n⁴ log n) with the basic O(n²)-per-run implementation. This probabilistic approach contrasts with deterministic max-flow based methods, offering simplicity and parallelizability, with the original formulation providing the first randomized NC algorithm for the global minimum cut problem. An improved variant, the Karger-Stein algorithm developed in 1996, enhances the success probability to Ω(1 / log n) per run by employing a recursive branching strategy that contracts to roughly n / √2 supernodes before recursing on two independent subinstances, reducing the total expected time to O(n² log³ n) for high-probability success. This variant maintains the contraction paradigm while addressing the low success rate of the original, making it more practical for large graphs. Both algorithms have influenced subsequent work on graph partitioning, network design, and VLSI layout, underscoring the power of randomization in algorithm design; subsequent developments include deterministic near-linear time algorithms as of 2020.

Introduction

Overview

Karger's algorithm is a randomized algorithm for finding the minimum cut in an undirected graph with non-negative edge weights. A minimum cut partitions the graph's vertices into two non-empty subsets such that the total weight of edges crossing between the subsets is minimized. This global min-cut problem arises in applications like network design, where identifying weak points in a network's topology is crucial, and in computational biology for analyzing protein interaction networks. The algorithm's core mechanism relies on repeated edge contractions. It begins with the original graph and iteratively selects a random edge, contracting it by merging its incident vertices into a single supernode while preserving edge weights as multiedges between supernodes. This process continues until only two supernodes remain, at which point the multiedges connecting them correspond to a candidate cut in the original graph. A single run of the basic contraction algorithm executes in O(n^2) time, where n is the number of vertices, primarily due to the cost of selecting and updating edges across n-2 contractions. Unlike exact deterministic methods, such as the Stoer-Wagner algorithm that solves the problem in O(mn + n^2 \log n) time with m edges, Karger's approach trades guaranteed correctness for simplicity and speed, making it particularly suitable for large-scale graphs where repeated runs can achieve high-probability success efficiently. For example, in a 4-vertex cycle graph with vertices A-B-C-D-A and unit weights, the minimum cut has size 2, such as separating \{A, B\} from \{C, D\} via the crossing edges B-C and D-A. Contracting non-crossing edges first would preserve this cut until the end.

History and Development

Karger's algorithm emerged from David R. Karger's efforts to address the global minimum cut problem through randomized techniques, first presented at the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA) in 1993. The seminal paper, titled "Global Min-cuts in RNC, and Other Ramifications of a Simple Min-Cut Algorithm," introduced a straightforward contraction-based approach for weighted, undirected graphs, motivated by the need for efficient, parallelizable approximations to problems lacking fast deterministic solutions. Karger demonstrated that the algorithm identifies a specific minimum cut with probability at least \frac{2}{n(n-1)}, where n is the number of vertices, enabling high-probability success via O(n^2 \log n) repetitions. In 1996, Karger collaborated with Clifford Stein to enhance the method's efficiency, resulting in the Karger-Stein variant detailed in their Journal of the ACM publication, "A New Approach to the Minimum Cut Problem." This work reduced the expected runtime to O(n^2 \log^3 n) through recursive contraction phases, while also proving the problem solvable in parallel time O(\log^2 n) with n^2 processors, marking the first such result for minimum cuts. The collaboration built directly on the 1993 foundation, emphasizing the contraction process's versatility for both sequential and parallel settings. This development coincided with the expansion of randomized algorithms, spurred by earlier probabilistic innovations that highlighted randomization's power for graph-related challenges. Although influenced by the broader field pioneered by figures such as Michael O. Rabin, whose earlier contributions to randomized computation laid groundwork for such methods, Karger's work uniquely targeted global cuts. Post-1996, the core algorithm saw refinements, such as Karger's 2000 extension achieving near-linear time O(m \log^3 n) via sampling and tree-packing techniques, but no fundamental alterations to the contraction mechanism by 2025.

The Minimum Cut Problem

Definition and Formulation

Karger's algorithm addresses the global minimum cut problem in undirected graphs. The input is an undirected graph G = (V, E), where V is the set of n = |V| vertices and E is the set of m = |E| edges connecting pairs of vertices, each with a non-negative weight. Although the algorithm begins with a simple graph (at most one edge between any pair of vertices), the contraction process introduces multiple edges between vertices, necessitating the use of multigraphs, or equivalently weighted graphs in which the weights of parallel edges are summed. A cut in G is a partition of the vertex set into two non-empty subsets S \subseteq V and V \setminus S, with the crossing edges \delta(S) consisting of all edges having one endpoint in S and the other in V \setminus S. The size of the cut is the total weight of edges in \delta(S), denoted w(\delta(S)). The global minimum cut, or min-cut, is the smallest such size over all non-trivial partitions (where both S and V \setminus S are non-empty), denoted \lambda(G) = \min_{\emptyset \subset S \subset V} w(\delta(S)). This differs from the s-t min-cut, which seeks the minimum size of \delta(S) over partitions where specific vertices s and t are separated into different subsets. Prerequisites for understanding the problem include basic graph concepts such as adjacency (the presence of at least one edge between two vertices) and, in the context of weighted graphs, edge weights (the capacity or cost associated with each edge). Although the global min-cut problem in undirected graphs is solvable exactly in polynomial time—for instance, via the deterministic Stoer-Wagner algorithm running in O(nm + n^2 \log n) time—these methods can be computationally intensive for very large graphs, motivating efficient randomized approaches that find the exact min-cut with high probability.
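To make the definition concrete, the following short Python sketch (illustrative only; the helper name cut_weight and the triple-based edge encoding are assumptions, not part of the algorithm) computes w(\delta(S)) for a given vertex subset S by summing the weights of crossing edges:
def cut_weight(weighted_edges, S):
    """Total weight w(delta(S)) of the cut (S, V \\ S): sum the weights
    of edges with exactly one endpoint in S."""
    S = set(S)
    return sum(w for u, v, w in weighted_edges if (u in S) != (v in S))

# The 4-cycle A-B-C-D-A with unit weights: the cut {A, B} vs {C, D}
# crosses exactly the edges B-C and D-A.
cycle = [("A", "B", 1), ("B", "C", 1), ("C", "D", 1), ("D", "A", 1)]
print(cut_weight(cycle, {"A", "B"}))  # -> 2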

Significance in Graph Theory

The minimum cut problem in graph theory is fundamentally linked to the max-flow min-cut theorem established by Ford and Fulkerson, which characterizes the minimum capacity of a cut separating two specified vertices s and t as equal to the maximum flow from s to t. However, the global minimum cut, which seeks the smallest cut partitioning the graph into two non-empty subsets without designated sources or sinks, extends this framework by requiring minimization over all possible partitions, rendering source-sink-based flow algorithms insufficient for direct application without enumerating pairs, a computationally prohibitive task. The size of the global minimum cut, denoted \lambda(G), precisely measures the edge-connectivity \kappa'(G) of an undirected graph G, defined as the minimum number of edges whose removal disconnects the graph. This equivalence underscores the problem's role in assessing robustness, as \lambda(G) provides a quantitative bound on the graph's resilience to edge failures. A key theoretical bound is \lambda(G) \leq \delta(G), where \delta(G) is the minimum degree, since the edges incident to a minimum-degree vertex form a cut of that size; the bound is tight for complete graphs, but \lambda(G) can be far smaller than \delta(G), as in two dense clusters joined by a single bridge edge, where \lambda(G) = 1. In contrast, expander graphs achieve \lambda(G) proportional to \delta(G), highlighting how the minimum cut captures expansion properties essential for efficient mixing in random walks. Beyond connectivity, the problem connects to several core challenges in graph theory and algorithms. It serves as a building block for the sparsest cut problem, which normalizes the cut size by partition volumes to identify balanced sparse separations, crucial for clustering and metric approximations. Graph partitioning algorithms, such as those minimizing cut edges for balanced subsets, rely on min-cut heuristics to divide networks into cohesive components, directly influencing community detection methods that model social or biological structures as graphs. These links demonstrate the min-cut's centrality in reductions and approximations across computer science. Randomized algorithms like Karger's are pivotal for the global min-cut because exact deterministic solutions, while polynomial-time, can incur high practical costs for massive graphs, whereas randomized approaches enable faster computations that achieve high-probability exact solutions or constant-factor approximations in near-linear time relative to edges. Recent advances include a deterministic algorithm achieving near-linear time \tilde{O}(m) for weighted graphs. This efficiency has spurred advancements in parallel and streaming algorithms for cut problems, where derandomization remains challenging.
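The bound \lambda(G) \leq \delta(G) is easy to verify on small instances. Below is a minimal brute-force Python sketch (illustrative; the name brute_force_min_cut is an assumption, and the enumeration is exponential in n) that computes \lambda(G) exactly on unweighted graphs, which is also handy for testing randomized implementations:
from itertools import combinations

def brute_force_min_cut(vertices, edges):
    """Exact lambda(G) by enumerating every bipartition with the first
    vertex pinned to one side (2^(n-1) - 1 candidates); only feasible
    for tiny graphs, but useful for checking lambda(G) <= delta(G)."""
    vertices = list(vertices)
    first, rest = vertices[0], vertices[1:]
    best = float("inf")
    for r in range(len(rest)):          # r < len(rest) keeps V \ S non-empty
        for combo in combinations(rest, r):
            S = {first, *combo}
            size = sum(1 for u, v in edges if (u in S) != (v in S))
            best = min(best, size)
    return best

# Unit-weight 4-cycle: minimum degree is 2 and lambda(G) is also 2,
# so the bound lambda(G) <= delta(G) is tight here.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(brute_force_min_cut(range(4), cycle))  # -> 2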

The Contraction Algorithm

Algorithm Description and Pseudocode

Karger's contraction algorithm is a randomized algorithm for finding a minimum cut in an undirected connected multigraph by iteratively contracting edges until only two supernodes remain, with the edges between these supernodes defining the cut. The algorithm takes as input an undirected multigraph G = (V, E) with n = |V| vertices and m = |E| edges, where edges may have multiplicity, and outputs the size of a cut, which is the number of edges crossing between the two final supernodes; the partition of the original vertices is implicitly given by the supernodes' compositions. The high-level steps of the algorithm are as follows: while the graph has more than two vertices, select an edge uniformly at random from the current edge set, contract its two endpoints into a single supernode (merging their incident edges), remove any resulting self-loops, and preserve parallel edges as multiplicities between the updated supernodes. This process reduces the number of supernodes by one at each iteration, transforming the graph into a multigraph with fewer vertices while maintaining the cut properties. The contraction handles multigraphs naturally, as merging vertices can create multiple edges between remaining supernodes, which are retained without alteration except for self-loops. A standard pseudocode implementation of the basic iterative version is given below, where the graph is represented to support efficient random edge selection and contraction (e.g., via adjacency lists that track multiplicities).
function Karger(G):  // G is an undirected multigraph
    n ← number of vertices in G
    while n > 2:
        select an edge e = (u, v) uniformly at random from current edges
        contract u and v: merge into supernode w, redirect all edges incident to u or v to w
        remove all self-loops at w
        update multiplicities for parallel edges
        n ← n - 1
    return the multiplicity of edges between the two remaining supernodes as the cut value
This pseudocode assumes a representation where contractions update the graph in place, with the final cut value derived directly from the edge multiplicity between the two supernodes. To illustrate, consider a simple cycle graph with 4 vertices A, B, C, D and edges A-B, B-C, C-D, D-A (each of multiplicity 1), which has minimum cut size 2. In one possible run: first, select edge A-B uniformly at random and contract to supernode AB, yielding vertices \{AB, C, D\} with edges AB-C, C-D, D-AB (no self-loops or new parallels). Next, select edge C-D and contract to supernode CD, yielding vertices \{AB, CD\} with two parallel edges AB-CD (from the original B-C and D-A). The algorithm returns a cut value of 2, corresponding to the partition \{A, B\} vs. \{C, D\}. This run succeeds in finding a minimum cut; in a cycle every contraction sequence in fact yields a cut of size 2, though on general graphs an unlucky edge selection can contract across every minimum cut and return a larger value.
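A minimal runnable Python sketch of a single contraction run is shown below (not the authors' reference implementation; the function name karger_single_run and the integer vertex labels are illustrative). It uses the standard observation that contracting edges in a uniformly random order, while skipping edges whose endpoints are already merged, is equivalent to repeatedly picking a uniformly random edge of the current multigraph, so a union-find structure suffices:
import random

def karger_single_run(num_vertices, edges, rng=random):
    """One run of Karger's contraction algorithm on an undirected
    multigraph given as a list of (u, v) pairs with vertices 0..n-1.
    Returns the number of edges crossing the resulting two-way
    partition."""
    parent = list(range(num_vertices))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    order = list(edges)
    rng.shuffle(order)        # random order == uniform random edge picks
    remaining = num_vertices
    for u, v in order:
        if remaining == 2:
            break
        ru, rv = find(u), find(v)
        if ru != rv:          # contract: merge the two supernodes
            parent[ru] = rv
            remaining -= 1
    # Surviving edges between the two supernodes form the candidate cut.
    return sum(1 for u, v in edges if find(u) != find(v))

# The 4-cycle from the worked example, with A, B, C, D as 0, 1, 2, 3:
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(karger_single_run(4, cycle))  # -> 2 (cycles always yield 2)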

Contraction Process

In Karger's algorithm, the contraction process begins by selecting a random edge in the current graph and contracting it, which merges the two vertices connected by that edge into a single supernode while preserving the overall structure of the graph in a way that maintains key properties. Specifically, to contract an edge between vertices u and v, these vertices are combined into a new supernode w; all edges incident to either u or v are redirected to connect to w instead, and any self-loops resulting from edges that originally connected u directly to v are removed. This operation preserves the multiplicity of edges, allowing multiple edges between the same pair of vertices to coexist, which ensures that the total number of edges incident to the new supernode accurately reflects the original connectivity. The edge for contraction is chosen uniformly at random from all edges present in the current graph, rather than by sampling vertices, to introduce the necessary randomness that drives the algorithm's probabilistic guarantees. Graph representations such as adjacency lists or adjacency matrices are updated iteratively after each contraction: in an adjacency list, edges from u and v are consolidated under w, with duplicates retained if they exist; in an adjacency matrix, the rows and columns for u and v are summed into a new entry for w, and the original entries are eliminated. These updates ensure efficient handling of the evolving graph as contractions proceed. Each contraction reduces the number of vertices in the graph by exactly one, starting from an initial graph with n vertices and continuing for n-2 steps until only two supernodes remain. At this point, the edges between these two supernodes define a cut in the original graph, and the process terminates. This stepwise reduction systematically shrinks the graph while repeatedly applying the same operation. The contraction operation preserves the sizes of all cuts (S, T) where u and v are on the same side of the partition (both in S or both in T). Such cuts remain unchanged in the contracted graph. Cuts that separate u and v are eliminated. Consequently, every cut in the contracted graph corresponds to a cut of the same size in the original graph (one that does not separate u and v), ensuring that the final cut found by the algorithm is a valid cut of the original graph. The minimum cut size in the contracted graph is thus at least as large as in the original.
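As an illustration of the adjacency-matrix update described above, the following Python sketch (illustrative; contract_pair is a hypothetical helper, and NumPy is used only for concise matrix slicing) sums the rows and columns of the two merged vertices and zeroes the resulting diagonal entry to discard self-loops:
import numpy as np

def contract_pair(matrix, u, v):
    """Contract vertices u and v in a multigraph adjacency matrix:
    fold v's row and column into u's, zero the diagonal entry
    (self-loops from former u-v edges), and drop v entirely."""
    m = matrix.copy()
    m[u, :] += m[v, :]            # merge v's row into u's
    m[:, u] += m[:, v]            # merge v's column into u's
    m[u, u] = 0                   # remove self-loops at the supernode
    keep = [i for i in range(m.shape[0]) if i != v]
    return m[np.ix_(keep, keep)]

# Contracting A and B in the 4-cycle leaves supernode AB with single
# edges to C and D, plus the untouched C-D edge.
cycle = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]])
print(contract_pair(cycle, 0, 1))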

Probabilistic Analysis of the Contraction Algorithm

Success Probability Calculation

The basic contraction algorithm finds a minimum cut with probability at least \frac{2}{n(n-1)}, where n is the number of vertices in the graph. This bound arises from the key insight that the algorithm succeeds if no edge across the minimum cut is contracted until exactly two supernodes remain, preserving the partition induced by the cut. At this point, the number of edges between the two supernodes equals the minimum cut size. To derive this probability, fix a minimum cut of size c and consider the process step by step. Suppose at a stage with k supernodes remaining (starting from k = n and proceeding down to k = 3), no edge of the cut has yet been contracted. The algorithm fails at this step if it contracts one of the c cut edges. Because the minimum cut has size c, every supernode has degree at least c (otherwise its incident edges would form a smaller cut), so the current multigraph has at least kc/2 edges; the probability of selecting a cut edge is therefore at most c / (kc/2) = \frac{2}{k}. Thus, the probability of success at this step (avoiding contraction across the cut) is at least 1 - \frac{2}{k} = \frac{k-2}{k}. The overall success probability is therefore at least the product of these step-wise probabilities: P(\text{success}) \geq \prod_{k=3}^{n} \frac{k-2}{k} = \frac{1 \cdot 2 \cdot 3 \cdots (n-2)}{3 \cdot 4 \cdot 5 \cdots n} = \frac{(n-2)!/0!}{n!/2!} = \frac{2}{n(n-1)}. This simplifies to P(\text{success}) \geq \frac{1}{\binom{n}{2}}. A notable corollary is that a graph has at most \binom{n}{2} distinct minimum cuts: the success events for distinct minimum cuts are disjoint, so their probabilities, each at least 1/\binom{n}{2}, cannot sum to more than 1. If there are multiple minimum cuts (say \alpha > 1 of them), the probability of finding some minimum cut increases to at least \alpha / \binom{n}{2}, while the lower bound of \frac{2}{n(n-1)} for any specific cut still holds. For example, in a cycle graph with n vertices, every pair of edges forms a minimum cut of size 2, giving \binom{n}{2} minimum cuts; the bound is tight for the probability of finding any specific one, while the overall success probability is 1, since contracting any edge of a cycle yields a smaller cycle and the final two supernodes are always joined by exactly two edges.
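As a quick empirical sanity check (an illustrative simulation, assuming the karger_single_run sketch from the previous section is in scope), the observed success rate on a small complete graph comfortably exceeds the \frac{2}{n(n-1)} bound, since K_n has n distinct minimum cuts:
# Monte Carlo check on K_6, whose minimum cut is 5 (isolate any vertex).
import itertools

n, trials = 6, 20000
k6_edges = list(itertools.combinations(range(n), 2))
hits = sum(karger_single_run(n, k6_edges) == 5 for _ in range(trials))
print(f"empirical: {hits / trials:.3f}  vs  bound: {2 / (n * (n - 1)):.3f}")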

Improving Probability through Repetition

To improve the success probability of the basic algorithm, which finds a minimum cut with probability \Omega(1/n^2), the procedure is repeated multiple times independently, forming a Monte Carlo approach. Each run is identical and independent of the others, relying on the randomness in edge selection to sample different contraction sequences. By executing the algorithm r times and selecting the cut with the minimum value among the outputs, the overall success probability—that at least one run identifies a minimum cut—can be amplified arbitrarily close to 1. The probability that all r independent runs fail to find a minimum cut is (1 - p)^r, where p = \Omega(1/n^2) is the success probability of a single run. Since \binom{n}{2} \approx n^2/2, this failure probability satisfies (1 - 1/\binom{n}{2})^r \leq e^{-r / \binom{n}{2}}. Setting r = O(n^2 \log n) ensures the failure probability is at most 1/n^k for any constant k > 0, achieving success with high probability (typically meaning 1 - 1/\mathrm{poly}(n)). This repetition count scales quadratically with n due to the base success rate, providing a straightforward amplification without altering the core algorithm. The output is the minimum cut value observed across all runs; while this Monte Carlo variant does not verify the cut's minimality (potentially returning a near-minimum cut if all runs fail), the high repetition count makes errors negligible in practice. Each single run of the basic algorithm takes O(n^2) time in a naive implementation using adjacency matrices for edge selection and contraction. Thus, the total running time with O(n^2 \log n) repetitions is O(n^4 \log n), which, though polynomial, motivated later optimizations like the Karger-Stein algorithm to reduce runtime while maintaining high success probability.
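A minimal amplification wrapper might look as follows (an illustrative sketch assuming the karger_single_run function from the earlier section; the name karger_min_cut and the exact repetition constant are assumptions):
import math

def karger_min_cut(num_vertices, edges, k=2):
    """Run the single-run contraction O(n^2 log n) times and keep the
    smallest cut observed, driving the failure probability below
    roughly n^{-k}."""
    n = num_vertices
    reps = math.ceil(k * math.comb(n, 2) * math.log(max(n, 2)))
    return min(karger_single_run(n, edges) for _ in range(reps))

cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(karger_min_cut(4, cycle))  # -> 2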

The Karger-Stein Algorithm

Algorithm Overview and Pseudocode

The Karger-Stein algorithm enhances the basic contraction method by introducing a recursive framework that branches the computation to improve efficiency in finding the minimum cut of an undirected, connected multigraph. Rather than performing contractions all the way to two vertices in a single run, the algorithm contracts the graph to an intermediate number of vertices t \approx n / \sqrt{2}, where n is the original number of vertices, and then recursively applies the procedure to this contracted graph in two independent branches, selecting the smaller of the cuts found by the two. This recursive branching allows the algorithm to explore multiple contraction paths while limiting the depth of recursion. The recursive structure is defined as follows: for a graph G with n vertices, if n \leq 6, the algorithm falls back to the basic contraction procedure, repeatedly contracting random edges until two supernodes remain and returning the number of edges between them as the cut value. For larger n, it first performs a sequence of random edge contractions until exactly t = \lceil n / \sqrt{2} \rceil + 1 supernodes are left, producing a contracted graph H. It then makes two independent recursive calls on H, computes the minimum cuts from each, and returns the smaller of the two values. This creates a recursion with a branching factor of 2 and depth O(\log n), as the choice of t roughly halves the vertex count every two levels. The selection of t \approx n / \sqrt{2} is chosen to balance the extent of contraction in each phase against the number of recursive branches, ensuring the recursion terminates efficiently without excessive computation at early levels. Here is a pseudocode representation of the recursive procedure:
function KargerStein(G):
    n = number of vertices in G
    if n <= 6:
        return ContractToTwo(G)  // Basic contraction to 2 vertices, return cut value
    else:
        t = ceil(n / sqrt(2)) + 1
        H = ContractRandomly(G, t)  // Contract edges randomly until t supernodes remain
        cut1 = KargerStein(H)
        cut2 = KargerStein(H)
        return min(cut1, cut2)
where ContractToTwo(G) implements the basic Karger contraction until two supernodes remain, and ContractRandomly(G, t) performs random edge contractions preserving multiedges until t supernodes remain. This approach preserves the minimum cut because each contraction phase maintains the validity of cuts from the original graph, as supernodes represent partitions of vertices, and the edges between supernodes correspond to cuts in the original; the recursive branching simply explores independent contraction sequences on the same intermediate graph H, ensuring that if a minimum cut survives the initial contraction to H, at least one branch is likely to identify it.
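A compact runnable Python version of this recursion might look as follows (an illustrative sketch, not the authors' reference implementation; it reuses the random-permutation contraction trick from the earlier single-run sketch, and the helper names are assumptions):
import math
import random

def contract_to(num_vertices, edges, t, rng=random):
    """Randomly contract a multigraph until t supernodes remain; return
    the relabeled multigraph (t, new_edges) with self-loops removed."""
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    order = list(edges)
    rng.shuffle(order)        # random order == uniform random edge picks
    remaining = num_vertices
    for u, v in order:
        if remaining == t:
            break
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            remaining -= 1
    labels = {}
    for x in range(num_vertices):      # rename supernodes to 0..t-1
        labels.setdefault(find(x), len(labels))
    new_edges = [(labels[find(u)], labels[find(v)])
                 for u, v in edges if find(u) != find(v)]
    return remaining, new_edges

def karger_stein(num_vertices, edges, rng=random):
    """Recursive Karger-Stein: contract to ~n/sqrt(2) supernodes, then
    recurse twice on the same contracted graph and keep the smaller cut."""
    if num_vertices <= 6:              # base case: plain contraction
        _, final_edges = contract_to(num_vertices, edges, 2, rng)
        return len(final_edges)
    t = math.ceil(num_vertices / math.sqrt(2)) + 1
    n_h, e_h = contract_to(num_vertices, edges, t, rng)
    return min(karger_stein(n_h, e_h, rng), karger_stein(n_h, e_h, rng))

cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(karger_stein(4, cycle))  # -> 2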

Time Complexity and Probability Analysis

The Karger-Stein algorithm improves upon the basic contraction method by employing a recursive structure, where the graph is contracted to approximately n / \sqrt{2} vertices before branching into two independent recursive subproblems. The time complexity of a single recursive execution satisfies the recurrence T(n) = 2 T(n / \sqrt{2}) + O(n^2), where the O(n^2) term accounts for the contractions at the current level. Solving this recurrence yields T(n) = O(n^2 \log n): by the master theorem with a = 2 and b = \sqrt{2}, we have n^{\log_b a} = n^2, matching the O(n^2) work per call, so each of the O(\log n) levels of recursion performs O(n^2) total work. The success probability of finding a minimum cut in a single recursive run of the Karger-Stein algorithm is \Omega(1 / \log n). This bound arises from the recursive probability analysis: the probability of preserving the minimum cut during the initial contractions from n down to \lceil n / \sqrt{2} \rceil + 1 vertices is at least a constant (roughly 1/2), and the overall success probability then satisfies the recurrence P(n) \geq 1 - \left(1 - \tfrac{1}{2} P(n / \sqrt{2})\right)^2, which by induction yields the \Omega(1 / \log n) lower bound. To achieve a high success probability of at least 1 - 1/n, the algorithm is repeated O(\log^2 n) independent times, resulting in a total running time of O(n^2 \log^3 n). This represents a significant asymptotic improvement over the basic algorithm with naive repetition, which requires O(n^2 \log n) runs each taking O(n^2) time, for a total of O(n^4 \log n). The Karger-Stein variant accelerates this by a factor of \Theta(n^2 / \log^2 n), primarily through the recursive branching that boosts the per-run success probability without proportionally increasing the runtime.
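For concreteness, here is the standard level-by-level unrolling behind the O(n^2 \log n) bound: at depth i there are 2^i subproblems on n / 2^{i/2} vertices, so every level costs O(n^2) and there are 2 \log_2 n levels,
T(n) = 2\,T\!\left(\frac{n}{\sqrt{2}}\right) + c\,n^2
\;\Longrightarrow\;
T(n) \le \sum_{i=0}^{\log_{\sqrt{2}} n} 2^{i}\, c\!\left(\frac{n}{2^{i/2}}\right)^{2}
= \sum_{i=0}^{2\log_{2} n} c\,n^{2}
= O\!\left(n^{2}\log n\right).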

Applications and Extensions

Real-World Applications

Karger's algorithm finds practical utility in assessing network reliability, particularly for fault-tolerant designs in communication systems such as telecommunications networks, where it helps compute edge-connectivity to identify the minimum number of edge failures that disconnect the graph, thereby evaluating vulnerability to disruptions. By approximating the minimum cut, the algorithm enables engineers to design robust topologies that maintain connectivity under probabilistic failures, as demonstrated in randomized approximation schemes for all-terminal reliability problems. In graph partitioning tasks, Karger's algorithm serves as an efficient initial approximation for problems in VLSI design and load balancing. For VLSI, it aids in dividing circuit components to minimize inter-chip connections, leveraging its randomized contractions to quickly identify balanced cuts in large graphs representing chip layouts. Similarly, in load balancing, repeated applications help partition computational graphs to distribute workloads evenly across processors, reducing communication overhead in distributed systems. The algorithm has been adapted for image segmentation by modeling pixels as vertices and similarities between adjacent pixels as edge weights, where finding a minimum cut separates foreground from background regions effectively. This approach treats segmentation as a graph cut problem, with contractions iteratively merging similar regions until distinct segments emerge, providing a probabilistic yet scalable method for computer vision tasks. In social network analysis, repeated invocations of Karger's algorithm facilitate community detection through divisive hierarchical clustering, where successive minimum cuts partition the graph into densely connected subgroups representing communities. By recursively applying contractions to subgraphs, it uncovers structural communities based on edge densities, useful for analyzing user interactions and influence propagation. Software implementations of Karger's algorithm are available in various programming libraries, with heuristics such as early termination and parallel repetitions enabling scalability to graphs with millions of vertices, as seen in distributed partitioning frameworks for large-scale networks. One prominent deterministic alternative to Karger's randomized approach is the Nagamochi-Ibaraki algorithm, which computes the exact minimum cut in an undirected graph with m edges and n vertices in O(mn + n^2 \log n) time by iteratively identifying and contracting non-minimum-cut edges without relying on maximum flow computations. This method achieves the same guarantee as Karger's algorithm but eliminates randomness, making it suitable for scenarios requiring certainty, at the cost of a higher running time than the fastest randomized variants on dense graphs. Karger's algorithm, originally designed for unweighted undirected graphs, extends naturally to weighted graphs by modifying the edge selection step to choose an edge with probability proportional to its weight during contractions, preserving the probabilistic analysis while handling edge capacities directly. For directed graphs, adaptations involve symmetrizing the graph (replacing each directed edge with undirected ones) or integrating contraction with flow-based techniques to approximate global cuts, though these variants often trade off the simplicity of the original for handling directionality.
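The weighted adaptation of the contraction step is straightforward to express in Python (an illustrative sketch; the helper name pick_weighted_edge and the triangle example are assumptions, not from the source). Uniform selection is recovered as the special case of unit weights:
import random

def pick_weighted_edge(edges, weights, rng=random):
    """Choose the next contraction edge with probability proportional
    to its weight, the standard weighted variant of the contraction
    step."""
    return rng.choices(edges, weights=weights, k=1)[0]

# Triangle with one heavy edge: contraction overwhelmingly merges the
# heavy pair first, which tends to preserve the light minimum cut
# isolating C (weight 1 + 1 = 2).
edges = [("A", "B"), ("B", "C"), ("A", "C")]
weights = [100, 1, 1]
print(pick_weighted_edge(edges, weights))  # usually ('A', 'B')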
Among faster randomized variants, the Karger-Stein algorithm improves upon the basic contraction process through recursive branching, achieving O(n^2 \log n) time per run with success probability \Omega(1/\log n); further optimizations, published by Karger in 1996 and in journal form in 2000, reduce this to O(m \log^3 n) expected time by incorporating random sampling and tree packings. Recent advancements in the 2020s have pushed toward nearly linear-time computation, exemplified by a 2020 algorithm that finds a minimum cut in \tilde{O}(m) time with high probability using cuts that 2-respect a spanning tree, enabling scaling to massive graphs. Subsequent works include a 2021 algorithm running in m^{1+o(1)} time and a 2024 deterministic near-linear time algorithm for weighted graphs. While classical implementations dominate, streaming and distributed variants address modern data challenges; for instance, semi-streaming algorithms compute global minimum cuts in O(n \log n) space over two passes, adapting contraction via sketches and sampling. Distributed adaptations, such as MapReduce-based implementations of Karger-Stein, enable parallel contractions across clusters to approximate min-cuts in O(\log^2 n) rounds with high probability, supporting large-scale processing post-2010. These extensions highlight ongoing gaps in handling dynamic or massive datasets, where exact guarantees remain computationally intensive. Future research directions include integrating min-cut variants with dynamic graph data structures.
