
Maximum weight matching

In graph theory and combinatorial optimization, the maximum weight matching problem involves finding a matching—a set of edges in an undirected graph with no two edges sharing a vertex—such that the sum of the weights on the selected edges is maximized. This extends the maximum cardinality matching problem, where all edge weights are equal to one, by incorporating edge weights that can be positive or negative. The problem is particularly significant in bipartite graphs, where it models the assignment problem of optimally pairing entities (such as workers to tasks) to maximize total value or profit. In this setting, the Hungarian algorithm, developed by Harold W. Kuhn in 1955 and based on earlier work by Dénes Kőnig and Jenő Egerváry, solves it using a primal-dual approach that iteratively adjusts dual variables to find an optimal assignment in O(n³) time, where n is the number of vertices. For general (non-bipartite) graphs, the presence of odd-length cycles complicates the search for augmenting paths, requiring more sophisticated techniques. Jack Edmonds introduced the first polynomial-time algorithm in 1965, known as the blossom algorithm, which identifies and contracts "blossoms" (odd cycles) to reduce the graph while preserving optimality. The weighted variant, also due to Edmonds, employs a similar structure together with linear programming duality to compute maximum weight matchings in O(n³) time via modern implementations, though the original formulation yields O(n⁴). These algorithms have broad applications in network design and resource allocation.

Fundamentals

Matchings in unweighted graphs

In graph theory, a matching in an undirected graph is defined as a set of edges such that no two edges share a common vertex. This ensures that each vertex is incident to at most one edge in the set. The size of a matching is the number of edges it contains, often denoted as the matching number \nu(G) for a graph G. A matching is termed maximal if no additional edge can be added to it without violating the matching condition, meaning it is not a proper subset of any larger matching. In contrast, a maximum matching (or maximum cardinality matching) is a matching of the largest possible size in the graph; every maximum matching is maximal, but not every maximal matching is maximum. For example, in a path graph P_4 with vertices labeled 1-2-3-4 and edges between consecutive vertices, the set of edges \{1-2, 3-4\} forms a maximum matching of size 2, covering all vertices, while \{2-3\} is a maximal matching of size 1 that cannot be extended but is not maximum. By contrast, in a cycle graph C_4 (a square with vertices A-B-C-D-A), a maximum matching such as \{A-B, C-D\} has size 2, and here every maximal matching is also maximum, since any single edge such as \{A-B\} can still be extended by the opposite edge \{C-D\}.

Algorithms for computing maximum matchings in unweighted graphs differ based on graph structure. For bipartite graphs, the Hopcroft-Karp algorithm efficiently finds a maximum matching by identifying multiple augmenting paths in phases, achieving a running time of O(E \sqrt{V}), where E is the number of edges and V is the number of vertices. In general (non-bipartite) graphs, Edmonds' blossom algorithm computes a maximum matching in polynomial time by handling odd cycles, known as blossoms, through contraction and augmentation techniques. A key theoretical result for bipartite graphs is König's theorem (also known as the König-Egerváry theorem), which states that the size of a maximum matching equals the size of a minimum vertex cover—a set of vertices that includes at least one endpoint of every edge in the graph. This equivalence highlights the duality between matchings and covers in bipartite settings and underpins many applications in optimization and resource allocation.
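As a concrete illustration of the augmenting-path idea behind these algorithms, the sketch below computes a maximum cardinality matching in a bipartite graph by repeated depth-first augmenting-path searches (Kuhn's method); it is a minimal Python illustration rather than the phase-based Hopcroft-Karp algorithm, and the adjacency lists shown simply encode the P_4 example above.

def max_bipartite_matching(adj, n_left, n_right):
    # adj[u] lists the right-side vertices adjacent to left vertex u.
    match_right = [-1] * n_right              # match_right[v]: left partner of v, or -1 if free

    def try_augment(u, visited):
        # Depth-first search for an augmenting path starting at left vertex u.
        for v in adj[u]:
            if visited[v]:
                continue
            visited[v] = True
            # Take v if it is free, or if its current partner can be rematched elsewhere.
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    size = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            size += 1
    return size, match_right

# P_4 as a bipartite graph: left side {1, 3} (indices 0, 1), right side {2, 4} (indices 0, 1).
adj = [[0], [0, 1]]                            # edges 1-2, 3-2, 3-4
print(max_bipartite_matching(adj, 2, 2))       # (2, [0, 1]), i.e., the matching {1-2, 3-4}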

Weighted graphs and matchings

A weighted graph extends the concept of an undirected graph by associating a real-valued weight with each edge, where these weights can be positive or negative but are often assumed to be non-negative in the context of maximum weight matching problems to ensure meaningful optimization. Formally, a weighted graph is denoted as G = (V, E, w), where V is the set of vertices, E is the set of edges, and w: E \to \mathbb{R} is the weight function assigning a real number to each edge. These weights typically represent quantities such as costs, distances, profits, or similarities, depending on the application. In a weighted graph, a matching M \subseteq E is a set of edges with no two sharing a common vertex, and its weight is defined as the sum w(M) = \sum_{e \in M} w(e). The objective in weighted matching is to select a matching that maximizes this total weight w(M), shifting the focus from merely pairing vertices to optimizing an aggregate value based on edge properties. Unlike matchings in unweighted graphs, which prioritize the cardinality (number of edges) to cover as many vertices as possible, weighted matchings emphasize the cumulative weight, potentially favoring fewer but higher-weighted edges over a larger set of lower-weighted ones. Unweighted matchings arise as a special case when all edges have uniform weight 1, reducing weight maximization to cardinality maximization. A representative example is the bipartite assignment problem, where the graph is bipartite with partitions representing agents and tasks, and edge weights denote the profit or benefit of assigning a particular agent to a task; the maximum weight matching then identifies the optimal assignment to maximize total benefit.
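The cardinality-versus-weight tradeoff can be seen in a few lines of Python; the weighted graph below is a made-up example stored as a dictionary of edge weights, and both candidate edge sets are valid matchings.

# Hypothetical weighted graph on vertices {a, b, c, d}; weights chosen purely for illustration.
w = {("a", "b"): 1.0, ("b", "c"): 6.0, ("c", "d"): 2.0}

def matching_weight(matching):
    # w(M): sum of the weights of the edges in the matching.
    return sum(w[e] for e in matching)

larger_cardinality = [("a", "b"), ("c", "d")]      # 2 edges, total weight 3.0
heavier_single_edge = [("b", "c")]                 # 1 edge, total weight 6.0
print(matching_weight(larger_cardinality), matching_weight(heavier_single_edge))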

Problem definitions

Maximum weight matching

In combinatorial optimization, the maximum weight matching problem is defined on an undirected graph G = (V, E) equipped with a weight function w: E \to \mathbb{R}, where the goal is to find a matching M \subseteq E—a set of edges with no two sharing a vertex—that maximizes the total weight \sum_{e \in M} w(e). The input is typically provided as a representation such as an edge list or weighted adjacency matrix, where each adjacent pair of vertices is stored with the corresponding edge weight, and the output is the set of edges comprising the optimal matching. This problem generalizes the maximum cardinality matching problem, which seeks to maximize the number of edges in M without regard to weights; in weighted graphs, the maximum weight matching may have smaller cardinality if higher-weight edges yield a greater total sum. For instance, consider a graph with vertices \{1, 2, 3, 4\} and edges \{1-2: w=1, 1-3: w=10, 2-3: w=2, 2-4: w=3, 3-4: w=4\}; the maximum weight matching is \{1-3, 2-4\} with total weight 13, whereas a maximum cardinality matching of size 2, such as \{1-2, 3-4\}, has weight only 5. The decision version of the problem, relevant for complexity analysis, asks whether there exists a matching M such that \sum_{e \in M} w(e) \geq K for a given threshold K. Unlike the maximum weight perfect matching variant, which requires M to saturate all vertices in V, the general maximum weight matching permits unsaturated vertices in order to maximize the objective.
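The example can be verified mechanically, for instance with NetworkX's max_weight_matching routine (assuming the NetworkX library is available); the construction below mirrors the edge weights listed in the text.

import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    (1, 2, 1), (1, 3, 10), (2, 3, 2), (2, 4, 3), (3, 4, 4),
])

M = nx.max_weight_matching(G)                      # set of matched vertex pairs
total = sum(G[u][v]["weight"] for u, v in M)
print(M, total)                                    # e.g. {(1, 3), (2, 4)} with total weight 13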

Maximum weight perfect matching

A maximum weight perfect matching in a weighted undirected graph G = (V, E) with edge weights w: E \to \mathbb{R} is a perfect matching M \subseteq E—a set of edges with no shared vertices that covers every vertex in V—such that the total weight \sum_{e \in M} w(e) is maximized. This problem enforces full vertex coverage as a hard constraint, distinguishing it from the unconstrained maximum weight matching. For a maximum weight perfect matching to exist, the graph must have an even number of vertices, as an odd count leaves at least one vertex uncovered in any matching. Moreover, the graph must admit a perfect matching in its unweighted form; a necessary and sufficient condition is given by Tutte's theorem, which states that for every S \subseteq V, the number of odd-sized components in G - S is at most |S|. Violations of this condition, such as structural barriers creating excess odd components, render the problem infeasible. A representative example arises in the complete bipartite graph K_{n,n} with partite sets U = \{u_1, \dots, u_n\} and W = \{w_1, \dots, w_n\}, where edges have varying non-negative weights w(u_i, w_j). This setup models the assignment problem, seeking a permutation \sigma: \{1, \dots, n\} \to \{1, \dots, n\} that maximizes \sum_{i=1}^n w(u_i, w_{\sigma(i)}). For n=2 with weights
       w_1   w_2
u_1     5     1
u_2     1     4
the maximum weight perfect matching pairs u_1 with w_1 (weight 5) and u_2 with w_2 (weight 4), yielding total weight 9; the alternative pairing sums to 2. If no perfect matching exists in the underlying unweighted graph, the maximum weight perfect matching problem has no solution, whereas the general maximum weight matching relaxes the coverage constraint and always admits a (possibly imperfect) optimum.
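For complete bipartite instances of this kind, the assignment problem can be solved directly with SciPy's linear_sum_assignment (assuming SciPy is available); the sketch below reproduces the 2x2 example using maximize=True.

import numpy as np
from scipy.optimize import linear_sum_assignment

weights = np.array([[5, 1],
                    [1, 4]])                       # rows u_1, u_2; columns w_1, w_2

rows, cols = linear_sum_assignment(weights, maximize=True)
print([(int(r), int(c)) for r, c in zip(rows, cols)])   # [(0, 0), (1, 1)], i.e., u_1-w_1 and u_2-w_2
print(weights[rows, cols].sum())                         # 9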

Algorithms for bipartite graphs

Hungarian algorithm

The Hungarian algorithm, also known as the Kuhn-Munkres algorithm, was developed by Harold W. Kuhn in 1955 to solve the assignment problem, which involves finding a maximum weight perfect matching in a complete bipartite graph with equal partition sizes. Kuhn named it after the Hungarian mathematicians Dénes Kőnig and Jenő Egerváry, whose earlier work on minimax theorems and dual linear programs provided the foundational ideas, predating the development of linear programming. The algorithm applies specifically to bipartite graphs and guarantees an optimal solution by leveraging a primal-dual framework that simultaneously optimizes the matching (primal) and associated potentials (dual variables). At a high level, the algorithm begins with an initial feasible labeling using potentials and a partial matching in the equality subgraph, where edges satisfy equality in the dual constraints. It then iteratively attempts to augment the matching by searching for augmenting paths relative to the current matching, using a breadth-first or depth-first search in the equality subgraph. If an augmenting path is found, the matching is updated along this path. If no such path exists, the potentials are adjusted to create new zero-reduced-cost edges, tightening the dual solution while preserving feasibility, and the search resumes. This process repeats until a perfect matching is obtained, at which point optimality is ensured by the complementary slackness conditions of linear programming duality.

Central to the algorithm is the concept of reduced costs, defined for an edge from left vertex i to right vertex j as c'(i,j) = u_i + v_j - c(i,j), where c(i,j) is the original weight and u_i, v_j are the potentials (dual variables) for the respective partitions. The potentials are maintained such that c'(i,j) \geq 0 for all edges, ensuring dual feasibility, and the equality subgraph consists of the edges where c'(i,j) = 0. Augmenting paths are sought in this subgraph, and the objective is to maximize the total matching weight, which equals the dual value \sum u_i + \sum v_j at optimality. Potential updates occur by subtracting a positive \alpha (the minimum slack over certain edges) from the left potentials in the search tree and adding it to the right potentials in the tree, which decreases the dual objective and potentially introduces new equality edges without violating feasibility. The following outlines the core procedure for an n \times n instance, assuming partitions X and Y:
Initialize feasible potentials u_i = max_j c(i,j) for i in X, v_j = 0 for j in Y
Initialize empty matching M
While |M| < n:
    Build equality subgraph G' with edges where u_i + v_j - c(i,j) = 0
    Pick a free vertex s in X (unmatched)
    Perform BFS or DFS from s to find an augmenting path P in G' relative to M
    If P exists:
        Augment M along P (symmetric difference with P)
    Else:
        Build alternating tree rooted at s: S subset X (reached free/matched), T subset Y (reached)
        Compute α = min {u_i + v_j - c(i,j) | i in S, j not in T}
        Update: u_i -= α for i in S; v_j += α for j in T
Return M
This implementation ensures each augmentation increases the matching size by one, and potential updates maintain progress toward optimality. The time complexity of the Hungarian algorithm is O(n^3) for an n \times n bipartite graph, arising from at most n augmentation phases, each involving O(n^2) work for graph construction, shortest-path search, and potential updates. Example: Consider a 3x3 weight matrix for assigning tasks (rows) to workers (columns), with weights indicating profits:
        Col1   Col2   Col3
Row1      9      2      7
Row2      6      4      3
Row3      5      8      1
Initialize potentials: u = [9, 6, 8], v = [0, 0, 0]. All reduced costs are \geq 0, and the equality edges are (1,1), (2,1), (3,2). Start with empty M. Searching from Row1 gives a trivial augmenting path to the free vertex Col1, so M becomes {(1,1)}. Searching from Row2, the only equality edge leads to Col1 (already matched); the alternating tree grows to S = {Row2, Row1}, T = {Col1} with no free column reached, so α is the minimum slack over edges from S to columns outside T, here α = 2 (attained, e.g., at (2,2): 6 + 0 - 4 = 2). The update u_1 -= 2, u_2 -= 2, v_1 += 2 gives u = [7, 4, 8], v = [2, 0, 0] and creates the new equality edges (2,2) and (1,3); the resumed search then augments M to {(1,1), (2,2)}. Finally, searching from Row3 finds the alternating path Row3-Col2-Row2-Col1-Row1-Col3 ending at the free vertex Col3, and augmenting along it reassigns the matching to M = {(1,3), (2,1), (3,2)} with weights 7 + 6 + 8 = 21. This traces three augmentations and one dual update, and optimality is confirmed since the dual sum (7 + 4 + 8) + (2 + 0 + 0) = 21 equals the primal value.
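The traced result can be cross-checked with SciPy's linear_sum_assignment (assuming SciPy is available), which solves the same assignment problem, though not necessarily via the textbook Hungarian method; the matrix below is the profit matrix from the example.

import numpy as np
from scipy.optimize import linear_sum_assignment

profit = np.array([[9, 2, 7],
                   [6, 4, 3],
                   [5, 8, 1]])

rows, cols = linear_sum_assignment(profit, maximize=True)
print([(int(r) + 1, int(c) + 1) for r, c in zip(rows, cols)])   # [(1, 3), (2, 1), (3, 2)]
print(profit[rows, cols].sum())                                  # 21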

Min-cost flow formulations

The maximum weight matching problem in a bipartite graph can be reformulated as a minimum-cost flow problem in a suitably constructed flow network. Consider a bipartite graph G = (U \cup V, E) with vertex partitions U and V of sizes |U| = n_1 and |V| = n_2, and edge weights w(e) \geq 0 for e \in E. Introduce a source vertex s connected to all vertices in U by edges of capacity 1 and cost 0, and a sink vertex t connected from all vertices in V by edges of capacity 1 and cost 0. The original edges e \in E are assigned capacity 1 and cost -w(e), so that minimizing the negated costs maximizes the total matching weight. This setup ensures that any integer flow corresponds to a matching, as the capacities restrict each vertex to at most one incident flow unit. The minimum-cost flow formulation seeks a flow f that minimizes the total cost \sum_{e \in E'} c_e f_e, where E' includes all edges in the network, subject to flow conservation at each vertex (except s and t), integer flows 0 \leq f_e \leq 1 on all edges, and a total flow value of \min(n_1, n_2) from s to t. For the maximum weight perfect matching case (when n_1 = n_2 = n), the required flow is exactly n units, and the optimal integer flow selects a set of n disjoint paths from s to t, each carrying one unit of flow, corresponding directly to the matching edges with total weight equal to the negative of the minimum cost. Because all capacities are integral, an integer-valued optimal solution exists by the integral flow theorem.

This flow problem can be solved using standard minimum-cost flow algorithms, such as the successive shortest paths algorithm, which iteratively augments the flow along shortest paths in the residual graph (using reduced costs to maintain efficiency), or capacity scaling variants that process flows in geometrically decreasing scales. The successive shortest paths method, with Dijkstra's algorithm for path finding and potentials to keep reduced costs non-negative, performs at most \min(n_1, n_2) augmentations and runs in O(n(m + n \log n)) time, where n is the total number of vertices and m the number of edges; for dense bipartite graphs (m = O(n^2)) this is O(n^3). Capacity scaling algorithms, which handle unit capacities efficiently, offer similar or improved performance in practice for this setting. A key advantage of this formulation is its compatibility with general-purpose minimum-cost flow solvers, allowing seamless integration into broader optimization frameworks without specializing for bipartite structure, and it naturally accommodates unbalanced partitions (n_1 \neq n_2) by sending \min(n_1, n_2) units of flow. Unlike specialized matching algorithms, it also extends to multi-commodity or capacitated variants with minimal modifications.

For example, consider a bipartite graph with U = \{u_1, u_2\}, V = \{v_1, v_2\}, and edges u_1v_1 (weight 3), u_1v_2 (weight 1), u_2v_1 (weight 2), u_2v_2 (weight 4). The flow network adds edges from s to u_1, u_2 (capacity 1, cost 0) and from v_1, v_2 to t (capacity 1, cost 0), with the bipartite edges at capacity 1 and costs -3, -1, -2, -4. The minimum-cost flow of 2 units selects the paths s \to u_2 \to v_2 \to t (cost -4) and s \to u_1 \to v_1 \to t (cost -3), yielding total cost -7 and maximum weight 7, corresponding to the perfect matching \{u_1v_1, u_2v_2\}.
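A sketch of this construction on the two-vertex-per-side example, using NetworkX's min_cost_flow (assuming the NetworkX library is available); node demands encode the required flow value, and the node labels follow the example above.

import networkx as nx

G = nx.DiGraph()
G.add_node("s", demand=-2)                      # source supplies 2 units
G.add_node("t", demand=2)                       # sink absorbs 2 units
for u in ("u1", "u2"):
    G.add_edge("s", u, capacity=1, weight=0)
for v in ("v1", "v2"):
    G.add_edge(v, "t", capacity=1, weight=0)
# Bipartite edges carry negated weights so that minimizing cost maximizes matching weight.
for u, v, w in [("u1", "v1", 3), ("u1", "v2", 1), ("u2", "v1", 2), ("u2", "v2", 4)]:
    G.add_edge(u, v, capacity=1, weight=-w)

flow = nx.min_cost_flow(G)
matching = [(u, v) for u in ("u1", "u2") for v in ("v1", "v2") if flow[u].get(v, 0) == 1]
print(matching)                                 # [('u1', 'v1'), ('u2', 'v2')]
print(-nx.cost_of_flow(G, flow))                # 7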

Algorithms for general graphs

Blossom algorithm

The blossom algorithm, developed by Jack Edmonds in 1965, is a foundational method for computing a maximum weight matching in general non-bipartite graphs, addressing the challenges posed by odd-length cycles through a structure known as a "blossom." Unlike bipartite matching algorithms, which rely solely on augmenting paths, the blossom algorithm iteratively identifies and contracts blossoms—odd cycles in the graph that obstruct the search for augmenting paths—to transform the problem into a form amenable to augmentation. This contraction process effectively shrinks the blossom into a single supernode, allowing the algorithm to proceed as if in a bipartite-like structure while preserving the original graph's matching properties. At its core, the algorithm operates within an equality subgraph defined by node potentials u_v for each vertex v, where the slack of an edge e = (i, j) with weight w(e) is given by s(e) = u_i + u_j - w(e). Equality edges are those with s(e) = 0, forming the subgraph in which the algorithm searches for augmenting paths relative to a current matching M. The process begins by initializing potentials (often set to zero or based on maximum weights) and building a reduced graph using these equality edges. From free vertices (unmatched in M), the algorithm grows alternating trees via breadth-first or depth-first search to find augmenting paths that can increase the matching size by one. If an odd cycle (blossom) is detected—typically when a vertex is reached via two distinct paths in the forest—the blossom is contracted into a single node, and the search continues in the contracted graph. Upon finding an augmenting path in the contracted graph, the matching is updated by symmetric difference with the path; blossoms are then expanded, and their internal matchings are resolved optimally. Dual variables, including the potentials u_v and blossom-specific adjustments z_b, are updated to maintain feasibility and ensure non-negativity of slacks, tightening the equality subgraph for the next iteration. This phased dual adjustment guarantees progress toward optimality, with the algorithm terminating when no augmenting path exists, yielding a maximum weight matching.

The original formulation of the algorithm has a time complexity of O(n^4), where n is the number of vertices, arising from repeated searches and contractions over O(n) phases. Subsequent implementations, such as those by Gabow and Lawler, improved this to O(n^3) through efficient data structures for handling blossoms and edge selections, making it practical for moderate-sized graphs. Modern codes such as Blossom V further optimize constants while retaining this cubic bound for dense graphs. Recent scaling variants have achieved improved bounds, such as O(m \sqrt{n} \log(n W)) for graphs with m edges and integer weights up to W.

To illustrate, consider a simple graph containing a cycle 0-1-2-3-4-0 (a 5-cycle, a potential blossom) plus additional edges, where the current matching contains the cycle edges 1-2 and 3-4 and leaves vertex 0 unmatched. In the equality subgraph, the search from vertex 0 reaches vertices 2 and 3 along two alternating paths (0-1-2 and 0-4-3), and the edge 2-3 then connects two outer vertices of the same alternating tree, revealing the odd cycle 0-1-2-3-4-0 as a blossom with base vertex 0.
The algorithm contracts this blossom into a supernode B, searches for an augmenting path in the reduced graph (e.g., finding a path through B to another free vertex), augments the matching, and expands B by optimally rematching its internal edges—such as alternating along the cycle to include one more edge in the global matching—resolving the blossom without violating weight optimality. This contraction-expansion resolves the non-bipartite obstruction, increasing the matching cardinality while respecting weights.
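Blossom handling of this kind is what general-purpose libraries implement for non-bipartite graphs; as a quick sketch, NetworkX's max_weight_matching (assuming the NetworkX library is available) applied to a weighted 5-cycle, with weights invented for illustration, must work around the odd cycle and can match at most two edges.

import networkx as nx

# Weighted 5-cycle 0-1-2-3-4-0: an odd cycle, so any matching uses at most two of its edges.
C5 = nx.Graph()
C5.add_weighted_edges_from([(0, 1, 2), (1, 2, 3), (2, 3, 2), (3, 4, 3), (4, 0, 2)])

M = nx.max_weight_matching(C5)
print(M)                                            # e.g. {(1, 2), (3, 4)}
print(sum(C5[u][v]["weight"] for u, v in M))        # 6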

Auction algorithm and variants

The auction algorithm, originally developed by Dimitri P. Bertsekas in 1979 for solving the assignment problem in bipartite graphs, models the maximization of total edge weight as an auction process where unmatched vertices on one side act as bidders competing for edges to vertices on the other side. This approach iteratively adjusts dual variables, interpreted as prices, to converge to an optimal matching while maintaining near-optimality conditions. While primarily designed for bipartite graphs, variants using ε-scaling techniques have been explored for broader network flow problems, though direct extensions to general non-bipartite graphs typically rely on other methods, such as blossoms, to handle odd cycles. In the algorithm's core mechanism for bipartite graphs, vertices on the "bidding" side (initially unmatched) evaluate potential edges to "objects" on the other side, submitting bids that reflect the edge weight minus the object's current price plus a small ε to ensure progress. Prices are updated to the winning bid, and assignments are reassigned if a higher bid displaces the current holder, simulating competitive bidding until all vertices are matched or no further improvements are possible. Convergence relies on ε-complementary slackness, a relaxation of dual feasibility in which the reduced costs of assigned edges are within ε of being tight and unassigned edges do not violate optimality by more than ε. A key computation in each bidding step for an unmatched vertex i considering an edge to vertex j is the bid value b_i(j) = w(i,j) - p_j + \epsilon, where w(i,j) denotes the weight of edge \{i,j\}, p_j is the current price (dual potential) of j, and \epsilon > 0 is the scaling parameter ensuring strict improvement in each round. The bidder selects the j maximizing this bid, and if it exceeds p_j, the price updates to b_i(j), potentially triggering reassignments. The ε-scaling variant runs the auction in phases with decreasing ε (e.g., starting from a fraction of the maximum weight and reducing by a constant factor such as 1/2 each phase), using solutions from coarser phases to warm-start finer ones, yielding a time complexity of O(n^3 \log n) for bipartite graphs with n vertices and integer weights up to polynomial in n. This makes it particularly effective for sparse graphs, where practical runtimes scale well. Variants include the successive shortest path auction, which replaces direct bidding with shortest-path computations in the residual graph (using reduced costs derived from the potentials) to find multiple augmenting opportunities per iteration, enhancing efficiency. These variants are used in some implementations for bipartite settings and inspire dual update techniques in general-graph codes such as Blossom V. As a representative example, consider a bipartite graph with bidder side \{A, C\} and object side \{B, D\}, edges A-B of weight 10 and C-D of weight 8, \epsilon = 0.1, and initial prices p = 0. Unmatched A bids on B with b_A(B) = 10 - 0 + 0.1 = 10.1, assigning A-B and setting p_B = 10.1. Next, unmatched C bids on D with b_C(D) = 8 - 0 + 0.1 = 8.1, assigning C-D and p_D = 8.1. The matching \{A-B, C-D\} with weight 18 converges as optimal once no improving bids remain.
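A compact sketch of the basic forward auction on a complete bidder-object profit matrix; the matrix below completes the two-edge example above by assigning the missing cross edges weight 0 (an assumption made only for illustration). With integer weights and ε below 1/n, the assignment returned is optimal.

import numpy as np

def auction_assignment(profit, eps):
    # Forward (Gauss-Seidel) auction for the maximization assignment problem on an n x n matrix.
    n = profit.shape[0]
    prices = np.zeros(n)
    owner = [-1] * n                        # owner[j]: bidder currently holding object j
    assignment = [-1] * n                   # assignment[i]: object held by bidder i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = profit[i] - prices         # net value of each object for bidder i
        j = int(np.argmax(values))          # best object for bidder i
        best = values[j]
        values[j] = -np.inf
        second = values.max()               # second-best net value
        prices[j] += best - second + eps    # raise the best object's price by the bid increment
        if owner[j] != -1:                  # displace the previous holder, who re-enters the auction
            assignment[owner[j]] = -1
            unassigned.append(owner[j])
        owner[j], assignment[i] = i, j
    return assignment, prices

# Bidders A, C (rows) against objects B, D (columns); cross edges assumed to have weight 0.
profit = np.array([[10.0, 0.0],
                   [0.0, 8.0]])
assignment, prices = auction_assignment(profit, eps=0.1)
print(assignment)                           # [0, 1]: A gets B and C gets D, total weight 18
print(prices)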

Complexity and approximations

Exact solvability and runtime

The maximum weight matching problem admits exact polynomial-time algorithms for both bipartite and general graphs, placing it firmly within the complexity class P. In the bipartite case, the Hungarian algorithm computes an optimal solution in O(n^3) time, where n is the number of vertices, making it strongly polynomial since the runtime depends only on the graph size and not on the magnitudes of the weights. Similarly, formulations as minimum-cost flow problems can be solved in polynomial time using capacity scaling or other methods, also achieving strong polynomiality for bipartite instances. For general graphs, Edmonds' seminal blossom algorithm provides an exact solution, with the original implementation running in O(n^4) time. Subsequent improvements, such as those by Gabow, reduced the complexity to O(n^3), while more recent variants achieve O(n m \log n) or O(n^2 m) time, where m is the number of edges, incorporating ideas like scaling and efficient blossom shrinking akin to Hopcroft-Karp for bipartite matching. Prior to 2023, these more recent variants were polynomial but not strongly polynomial for arbitrary real weights; however, a breakthrough by Shuai Shao established the first strongly polynomial-time algorithm for a class of weighted general factor problems with real-valued edge weights, which includes maximum weight matching as a special case. While the core problem is in P, certain constrained variants related to exact matching—where edges are colored and the matching must contain a prescribed number of edges of each color—are NP-hard even for simple colorings.

In practice, exact algorithms require O(n^2) space on dense graphs to store edge weights and dual potentials, limiting dense-graph implementations to around n \approx 1000 vertices on standard hardware due to memory and time constraints. The Blossom V implementation, for instance, handles dense graphs up to n=1000 in 1-10 seconds, but scales to sparse graphs up to n=40{,}000 with m \approx 10n, completing in under 10 seconds. Randomized heuristics or auction-based methods can accelerate computation for large sparse instances without sacrificing exactness, though they may increase variance. Benchmarks highlight stark differences between dense and sparse graphs: on dense random graphs with n=1000 and m \approx n^2/2, Blossom V requires about 1-3 seconds, scaling roughly cubically, whereas on sparse instances with n=40{,}000 and m \approx 10n, it finishes in 4-9 seconds, dominated by linear-time initialization rather than augmentation phases.

Approximation algorithms

Approximation algorithms for maximum weight matching are essential in scenarios where exact methods, such as the blossom algorithm, are computationally prohibitive due to their higher time complexity, particularly for large-scale or streaming settings. These algorithms trade optimality for efficiency, providing guarantees on the solution quality relative to the optimal matching, denoted as a \rho-approximation where the weight of the algorithm's matching satisfies w(\text{ALG}) \geq \rho \cdot w(\text{OPT}) and 0 < \rho \leq 1. A simple greedy algorithm sorts the edges in decreasing order of weight and iteratively adds the highest-weight edge that does not share a vertex with previously selected edges. For graphs with non-negative weights, this yields a \frac{1}{2}-approximation, since every edge of an optimal matching either is selected or shares a vertex with a previously chosen greedy edge of at least its weight, and each greedy edge can be charged in this way at most twice. The algorithm runs in O(m \log m) time, dominated by sorting, making it suitable for sparse graphs but extendable to denser instances via priority queues. Local search methods start with a greedy or empty matching and iteratively improve it by searching for short augmenting paths or cycles that increase the total weight, restricting path lengths to bound the search space. By considering alternating paths of length at most three, a (\frac{2}{3} - \epsilon)-approximation can be achieved in linear time for fixed \epsilon and non-negative weights, since short augmentations already capture most of the weight the optimum can gain over the current matching. This approach is particularly effective in practice for moderate-sized graphs, offering better empirical performance than pure greedy selection while maintaining theoretical guarantees.

For graphs with general weights including negatives, direct application of greedy or local search may fail due to unattractive low-weight edges. A standard reduction shifts all edge weights by adding a constant C larger than the absolute value of any negative weight, transforming the problem to non-negative weights while preserving the optimal structure up to a fixed additive term; the approximation ratio then applies to the shifted instance, adjustable for the original by subtracting the total shift. This preserves the relative guarantees of the underlying algorithms without altering the matching topology. Advanced techniques inspired by the blossom algorithm simplify the exact method's blossom contraction and augmentation steps to achieve a \frac{3}{4}-approximation for non-negative weights, by iteratively finding approximate augmenting paths and bounding the error from ignored blossoms. These blossom-inspired methods run in linear time and improve upon local search ratios through partial emulation of Edmonds' structure. For denser graphs where m = \Theta(n^2), polynomial-time approximation schemes (PTAS) exist, computing a (1 - \epsilon)-approximate matching in time polynomial in n and 1/\epsilon, leveraging scaling and randomized rounding for small-weight regimes. Such approximations find use in streaming models, where edges arrive sequentially under limited memory, or in massive graphs from social networks and bioinformatics, where exact computation is infeasible at scale, enabling near-optimal matchings for large pairing tasks.
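A minimal sketch of the greedy 1/2-approximation described above, operating on a list of edges with non-negative weights; the edge list reuses the example graph from the problem-definition section.

def greedy_matching(edges):
    # edges: list of (u, v, weight). For non-negative weights the result has
    # at least half the weight of a maximum weight matching.
    matched = set()
    matching = []
    for u, v, w in sorted(edges, key=lambda e: e[2], reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching

edges = [(1, 2, 1), (1, 3, 10), (2, 3, 2), (2, 4, 3), (3, 4, 4)]
M = greedy_matching(edges)
print(M, sum(w for _, _, w in M))    # [(1, 3, 10), (2, 4, 3)] with weight 13 (optimal on this instance)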

Applications

In operations research

The assignment problem represents a classic application of maximum weight matching in operations research, involving the optimal allocation of n distinct tasks to n agents to maximize the total profit, where edge weights in the bipartite graph denote the profit gained from assigning a specific agent to a specific task. This one-to-one pairing ensures that each agent receives exactly one task and each task is assigned to exactly one agent, making it a special case of maximum weight perfect matching in bipartite graphs. The problem arises in various scenarios, such as pairing workers with jobs or vehicles with routes, where the objective is to achieve the highest overall efficiency or value. The assignment problem is closely linked to linear programming through its integer programming formulation, where binary decision variables indicate assignments, subject to row and column sum constraints of one. By the integrality theorem for transportation polytopes, the linear programming relaxation yields integer-optimal solutions, enabling efficient solvability. Its dual formulation involves potentials for agents and tasks that bound the maximum weight, providing economic interpretations such as prices for reallocations; this duality connects directly to the transportation problem, a generalization allowing unequal supplies and demands across multiple sources and sinks, where the balanced square case reduces to the assignment problem.

A practical example occurs in job scheduling, where weights reflect worker skills or productivity levels for different tasks; for instance, in a small firm with three workers and three jobs, the weight matrix might assign values based on historical performance data, such as 10 for Worker A on Job 1, 8 for Worker B on Job 2, and so on, with maximum weight matching selecting the combination yielding the highest total score, such as 28, to optimize output without overload. Extensions handle unbalanced scenarios, where the number of agents exceeds or falls short of tasks, by introducing dummy entities with zero weights to restore balance and ensure feasibility. Rectangular weight matrices, representing unequal partition sizes, are likewise handled by adding dummy rows or columns with zero weights to the smaller partition, enabling the use of standard algorithms for square matrices. In practice, the assignment problem is often formulated as a mixed-integer program and solved using tools like IBM ILOG CPLEX, which leverages branch-and-bound or cutting-plane methods on the binary variables to find optima efficiently for large instances, integrating with modeling languages such as OPL for real-world deployments in logistics and personnel assignment.
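The integrality of the linear programming relaxation can be observed directly by solving the relaxed assignment LP; the sketch below uses SciPy's linprog (assuming SciPy is available) with a hypothetical 3x3 profit matrix whose values are invented for illustration, and the optimum comes out as a 0/1 assignment matrix because it is attained at an integral vertex of the relaxation.

import numpy as np
from scipy.optimize import linprog

profit = np.array([[10, 3, 7],
                   [4, 8, 6],
                   [5, 9, 2]], dtype=float)       # hypothetical worker-by-job profits
n = profit.shape[0]

# Variables x_ij in row-major order; maximizing total profit = minimizing its negation.
c = -profit.flatten()
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1                # each worker takes exactly one job
for j in range(n):
    A_eq[n + j, j::n] = 1                         # each job is taken by exactly one worker
b_eq = np.ones(2 * n)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
print(np.round(res.x.reshape(n, n), 3))           # 0/1 assignment matrix
print("total profit:", -res.fun)                  # 25.0 for this matrix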

In network design and scheduling

In network design, maximum weight matching selects edges in communication graphs to maximize total capacity, with weights corresponding to bandwidth or link quality, thereby optimizing for efficient data flow and reduced congestion. This technique is applied in communication networks to pair transmitters and receivers, enhancing overall throughput through distributed algorithms that approximate optimal matchings while handling dynamic topologies. A representative example occurs in wireless sensor networks, where maximum weight matching pairs nodes to maximize aggregate link quality, improving data aggregation and energy efficiency in battery-constrained environments. Distributed asynchronous algorithms achieve this by iteratively finding augmenting paths, yielding up to 32% better approximations than prior methods with logarithmic message complexity. In scheduling contexts, maximum weight matching pairs resources such as jobs to machines via compatibility graphs, where edge weights encode processing affinities in bipartite settings to maximize system throughput. This models complex scenarios like datacenter flow scheduling, using matching as a coordination mechanism alongside greedy updates to handle residual capacities and achieve constant-factor approximations efficiently. Economically, maximum weight matching clears markets in compatibility-based systems, exemplified by kidney exchange programs that model donor-recipient pairs as weighted graphs, with weights reflecting transplant success probabilities and medical utilities to prioritize viable cycles. Integer programming formulations compute matchings that maximize total transplants, solvable in polynomial time for pairwise exchanges and heuristically for longer cycles. In biological systems, maximum weight matching identifies optimal pairings in protein interaction networks, using domain-domain interaction weights to predict stable protein complexes by ensuring that each domain participates in at most one interaction. Local expansion techniques refine these matchings, roughly doubling prediction accuracy compared to baseline methods while integrating with clustering algorithms for enhanced complex detection.
