
Hungarian algorithm

The Hungarian algorithm, also known as the Hungarian method or Kuhn-Munkres algorithm, is a technique that solves the assignment problem by finding an optimal one-to-one matching between two finite sets of equal size, such as assigning n agents to n tasks to minimize the total cost or maximize the total profit represented in a cost matrix. Developed by the American mathematician Harold W. Kuhn in 1955 and published in the Naval Research Logistics Quarterly, the algorithm draws its name from the earlier theoretical contributions of the Hungarian mathematicians Dénes Kőnig, who in 1931 proved the König-Egerváry theorem relating vertex covers to matchings in bipartite graphs, and Jenő Egerváry, who extended these ideas to weighted graphs. Kuhn's method translates these concepts into a practical procedure that anticipates primal-dual methods in linear programming, predating their formal development. The method gained renewed interest in the mid-20th century through applications in operations research, including work on personnel assignment.
At its core, the algorithm proceeds in phases: it first reduces the cost matrix by subtracting the minimum value from each row and then each column to create zeros; next, it covers all zeros with the minimum number of horizontal and vertical lines; if the number of lines equals n, an optimal assignment can be selected by choosing independent zeros; otherwise, it adjusts the matrix by subtracting the smallest uncovered value from uncovered entries and adding it to doubly covered ones, repeating until optimality is achieved. This process ensures a polynomial-time solution with a worst-case time complexity of O(n³), making it efficient for problems up to moderate sizes (n ≈ 1000), though implementations can vary in practice. The algorithm's duality with linear programming formulations allows it to be viewed as a specialized simplex method for the transportation problem restricted to balanced bipartite graphs.
Beyond its foundational role in optimization, the Hungarian algorithm finds extensive applications in engineering and computer science, including task scheduling in manufacturing to minimize production costs, object tracking in computer vision via multi-target association, client selection in federated learning, and multi-robot coordination in robotics for one-to-one task assignments under linear objectives. Its robustness for small-to-medium bipartite matching problems has led to variants, such as dynamic versions for changing costs, though it faces challenges in very large-scale settings due to cubic growth.

The Assignment Problem

Illustrative Example

Consider a simple assignment problem involving three workers (W1, W2, W3) and three tasks (T1, T2, T3), where the goal is to assign each worker to a unique task to minimize the total cost. The costs for assigning each worker to each task are given in the following cost matrix:
| Worker | T1 | T2 | T3 |
|--------|----|----|----|
| W1 | 25 | 44 | 36 |
| W2 | 28 | 41 | 40 |
| W3 | 23 | 50 | 35 |
To solve this using the Hungarian algorithm, first subtract the minimum value in each row from all elements in that row, creating at least one zero per row:
  • Row W1 minimum: 25, resulting row: [0, 19, 11]
  • Row W2 minimum: 28, resulting row: [0, 13, 12]
  • Row W3 minimum: 23, resulting row: [0, 27, 12]
The reduced matrix is:

| Worker | T1 | T2 | T3 |
|--------|----|----|----|
| W1 | 0 | 19 | 11 |
| W2 | 0 | 13 | 12 |
| W3 | 0 | 27 | 12 |
Next, subtract the minimum value in each column from all elements in that column:
  • Column T1 minimum: 0, no change
  • Column T2 minimum: 13, resulting column: [6, 0, 14]
  • Column T3 minimum: 11, resulting column: [0, 1, 1]
The further reduced matrix is:
| Worker | T1 | T2 | T3 |
|--------|----|----|----|
| W1 | 0 | 6 | 0 |
| W2 | 0 | 0 | 1 |
| W3 | 0 | 14 | 1 |
This matrix has zeros at positions (W1,T1), (W1,T3), (W2,T1), (W2,T2), and (W3,T1). Covering all zeros requires a minimum of three lines (for example, column T1 together with rows W1 and W2), which equals n = 3, so an optimal assignment can be selected without further adjustments. A valid assignment selects one zero per row and column: W1 to T3, W2 to T2, and W3 to T1. In the original matrix this gives W1 to T3 (cost 36), W2 to T2 (cost 41), and W3 to T1 (cost 23), yielding a total minimum cost of 100. The marked cost matrix with assignments is:
| Worker | T1 | T2 | T3 |
|--------|----|----|----|
| W1 | 25 | 44 | 36* |
| W2 | 28 | 41* | 40 |
| W3 | 23* | 50 | 35 |

(Asterisks mark the assigned cells.)
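The two reduction steps can be reproduced in a few lines of plain Python; this is a minimal sketch on the example above, and `reduce_matrix` is an illustrative helper name, not part of any standard API:

```python
def reduce_matrix(cost):
    """Row-reduce, then column-reduce, a square cost matrix (returns a copy)."""
    m = [row[:] for row in cost]
    n = len(m)
    for i in range(n):                        # subtract each row's minimum
        r = min(m[i])
        m[i] = [x - r for x in m[i]]
    for j in range(n):                        # subtract each column's minimum
        c = min(m[i][j] for i in range(n))
        for i in range(n):
            m[i][j] -= c
    return m

cost = [[25, 44, 36],
        [28, 41, 40],
        [23, 50, 35]]
reduced = reduce_matrix(cost)
print(reduced)   # [[0, 6, 0], [0, 0, 1], [0, 14, 1]]
# Independent zeros at (0,2), (1,1), (2,0) priced in the original matrix:
print(cost[0][2] + cost[1][1] + cost[2][0])   # 100
```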

Cost Matrix Formulation

The assignment problem can be formally defined using an n \times n cost matrix C = (c_{ij}), where c_{ij} represents the cost of assigning the i-th row (such as an agent or worker) to the j-th column (such as a task or job), for i, j = 1, \dots, n. The objective is to find a one-to-one assignment that minimizes the total cost, formulated as the linear program: \min \sum_{i=1}^n \sum_{j=1}^n c_{ij} x_{ij} subject to the constraints \sum_{j=1}^n x_{ij} = 1 \quad \forall i = 1, \dots, n, \quad \sum_{i=1}^n x_{ij} = 1 \quad \forall j = 1, \dots, n, \quad x_{ij} \in \{0, 1\} \quad \forall i, j, where x_{ij} = 1 if row i is assigned to column j, and x_{ij} = 0 otherwise. These constraints ensure that each row and each column is assigned exactly once, corresponding to a perfect matching in the underlying bipartite structure. This formulation assumes a balanced problem, where the number of rows equals the number of columns (n agents and n tasks), resulting in a square cost matrix. In the unbalanced case, where the numbers differ (e.g., more tasks than agents), the problem can be converted to a balanced one by introducing dummy rows or columns with zero costs, accounting for unassigned entities at no additional cost, though the standard Hungarian algorithm targets the balanced scenario. A brute-force approach requires evaluating all n! possible permutations of assignments to identify the minimum-cost one, yielding factorial time complexity that becomes impractical for even moderate n. This motivates the development of polynomial-time algorithms like the Hungarian method to efficiently compute the optimal solution.
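For contrast with the polynomial-time method, the brute-force search over all n! permutations can be sketched directly; on the 3 × 3 example it confirms the same optimum (`brute_force_assignment` is a hypothetical helper for illustration):

```python
from itertools import permutations

def brute_force_assignment(cost):
    """Try all n! assignments; practical only for very small n."""
    n = len(cost)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):           # n! candidate assignments
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_cost, best_perm = total, perm
    return best_cost, best_perm

cost = [[25, 44, 36],
        [28, 41, 40],
        [23, 50, 35]]
print(brute_force_assignment(cost))   # (100, (2, 1, 0)): W1->T3, W2->T2, W3->T1
```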

Bipartite Matching Formulation

The assignment problem is equivalently formulated as the problem of finding a minimum-weight perfect matching in a complete bipartite graph G = (U \cup V, E), where U and V are disjoint vertex sets each of size n, and E consists of all possible edges between U and V. Each edge uv \in E is assigned a weight c_{uv}, corresponding to the cost of assigning the element from U to the element from V; for maximization objectives, weights can be negated to convert to a minimum-cost problem. The objective is to select a matching M \subseteq E such that every vertex in U \cup V is incident to exactly one edge in M (a perfect matching, attainable since the graph is balanced with |U| = |V| = n), minimizing the total weight \sum_{(u,v) \in M} c_{uv}. This graph-theoretic perspective bridges the algebraic cost matrix representation to broader matching algorithms in graph theory. The algorithm solving this problem is commonly known as the Hungarian algorithm, or alternatively the Kuhn-Munkres algorithm, named after Harold Kuhn, who formalized it in 1955, and James Munkres, who provided a simplified version in 1957; the "Hungarian" moniker honors the foundational contributions of the Hungarian mathematicians Dénes Kőnig and Jenő Egerváry to related matching theory.

Core Algorithm Concepts

Feasibility and Dual Variables

In the Hungarian algorithm, which solves the minimum-cost perfect matching problem in a complete bipartite graph with vertex partitions U = \{u_1, \dots, u_n\} and V = \{v_1, \dots, v_n\} and nonnegative costs c_{uv}, feasible potentials are defined as real-valued labels y_u for each u \in U and z_v for each v \in V such that y_u + z_v \leq c_{uv} for all u \in U, v \in V. These potentials induce reduced costs \hat{c}_{uv} = c_{uv} - y_u - z_v \geq 0 for every edge, transforming the original costs into a nonnegative equivalent while preserving optimality of solutions. The equality subgraph associated with a set of feasible potentials includes only those edges (u, v) where the reduced cost vanishes, i.e., \hat{c}_{uv} = 0, or equivalently y_u + z_v = c_{uv}. A matching within this equality subgraph identifies candidate assignments that are tight with respect to the potentials, forming the basis for primal feasibility in the algorithm's iterative process. Feasible potentials correspond directly to the dual variables of the linear programming formulation of the assignment problem. The primal problem is \min \sum_{u \in U} \sum_{v \in V} c_{uv} x_{uv} subject to \sum_{v \in V} x_{uv} = 1 \quad \forall u \in U, \quad \sum_{u \in U} x_{uv} = 1 \quad \forall v \in V, \quad x_{uv} \geq 0 \quad \forall u,v. Its dual is \max \sum_{u \in U} y_u + \sum_{v \in V} z_v subject to y_u + z_v \leq c_{uv} \quad \forall u \in U, v \in V, with y_u, z_v unrestricted in sign; feasible potentials thus provide a dual feasible solution whose objective value lower-bounds the minimum assignment cost. By the strong duality theorem for linear programming, optimal potentials achieve equality with the optimal assignment cost. A simple initial feasible solution sets y_u = \min_{v \in V} c_{uv} for each u \in U and z_v = 0 for each v \in V, ensuring the dual constraints hold since the minimum over v is at most any specific c_{uv}. This initialization creates a starting equality subgraph with at least one zero per row, facilitating the algorithm's early iterations.
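The initialization described above can be checked numerically; this is a small sketch on the running worker/task example (helper names are illustrative):

```python
def initial_potentials(cost):
    """y_u = row minimum, z_v = 0: always dual feasible."""
    y = [min(row) for row in cost]
    z = [0] * len(cost)
    return y, z

def equality_edges(cost, y, z):
    """Edges (u, v) whose reduced cost c_uv - y_u - z_v is zero."""
    n = len(cost)
    return [(u, v) for u in range(n) for v in range(n)
            if cost[u][v] - y[u] - z[v] == 0]

cost = [[25, 44, 36],
        [28, 41, 40],
        [23, 50, 35]]
y, z = initial_potentials(cost)
# Dual feasibility holds everywhere:
assert all(y[u] + z[v] <= cost[u][v] for u in range(3) for v in range(3))
print(equality_edges(cost, y, z))   # [(0, 0), (1, 0), (2, 0)]: one zero per row
```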

Equality Subgraph and Augmenting Paths

In the Hungarian algorithm for solving the minimum-cost perfect matching problem on a bipartite graph with partitions U and V, the equality subgraph G_y is constructed using a set of feasible dual variables or potentials y_u for u \in U and y_v for v \in V, where y_u + y_v \leq c_{uv} for all edges (u,v). This subgraph includes all vertices U \cup V and only those edges where the reduced cost \hat{c}_{uv} = c_{uv} - y_u - y_v = 0, ensuring the edges represent zero-cost opportunities relative to the current potentials. Within G_y, alternating paths are defined with respect to a partial matching M \subseteq E. An alternating path begins at an unmatched vertex in U and proceeds as a sequence of edges that alternate between non-matching edges (not in M) and matching edges (in M), all contained in G_y. These paths explore the structure of feasible zero-cost edges while respecting the current assignment constraints. An augmenting path is an alternating path in G_y that terminates at an unmatched vertex in V. Augmenting along such a path P involves taking the symmetric difference M \Delta P, which flips the matched and unmatched edges along P, thereby increasing the size of the matching by one without increasing the total cost, as all edges in P have zero reduced cost. The Hungarian algorithm builds a maximum matching, and ultimately an optimal assignment, by repeatedly identifying and augmenting along these paths in the equality subgraph, adjusting the potentials when no further augmenting paths exist so as to enlarge the subgraph and create new zero-reduced-cost edges, until a perfect matching is obtained. This process leverages the zero-reduced-cost structure of G_y to ensure feasibility and optimality in the primal-dual framework.
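The augmentation step can be sketched as a depth-first search restricted to zero-reduced-cost edges (Kuhn's matching search); `try_augment` and the dictionaries below are illustrative, using the equality subgraph of the worked example's reduced matrix:

```python
def try_augment(u, zero_edges, match_v, visited):
    """DFS from left vertex u along alternating paths of zero edges.
    match_v maps each right vertex to its matched left vertex (or None)."""
    for v in zero_edges[u]:
        if v in visited:
            continue
        visited.add(v)
        # v is free, or its current partner can be re-matched elsewhere:
        if match_v[v] is None or try_augment(match_v[v], zero_edges, match_v, visited):
            match_v[v] = u        # flip edges along the augmenting path
            return True
    return False

# Equality subgraph of the reduced matrix [[0,6,0],[0,0,1],[0,14,1]]:
zero_edges = {0: [0, 2], 1: [0, 1], 2: [0]}
match_v = {0: None, 1: None, 2: None}
matched = sum(try_augment(u, zero_edges, match_v, set()) for u in range(3))
print(matched, match_v)   # 3 {0: 2, 1: 1, 2: 0}
```

Each successful call increases the matching by one; here all three rows are matched, reproducing the assignment W1 to T3, W2 to T2, W3 to T1.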

Step-by-Step Algorithm in Bipartite Graphs

Initialization and Potential Adjustment

The Hungarian algorithm for solving the minimum-cost assignment problem initializes the dual variables, or potentials, to establish a feasible starting point with non-negative reduced costs. The potentials for vertices in the left partition U are set to y_u = \min_{v \in V} c_{uv} for each u \in U, while the potentials for vertices in the right partition V are set to z_v = 0 for all v \in V. This ensures that the reduced costs c'_{uv} = c_{uv} - y_u - z_v \geq 0 for every edge (u,v), satisfying the feasibility constraints y_u + z_v \leq c_{uv}. The initial matching M is empty. When no augmenting path is found in the equality subgraph during a phase, the potentials are adjusted to enlarge the subgraph and restore the possibility of finding augmenting paths. The search builds a tree of vertices reachable from the free U vertices. The adjustment computes \delta as the minimum reduced cost over edges from reached (visited) vertices in U to unreached vertices in V. The potentials are then updated by adding \delta to y_u for all reached vertices u \in U and subtracting \delta from z_v for all reached vertices v \in V. This uniform shift makes some previously non-tight edges leaving the search tree tight (zero reduced cost), while preserving non-negativity of reduced costs elsewhere. The equality subgraph consists of the tight edges under the current potentials. This update increases the dual objective \sum y_u + \sum z_v by exactly \delta, since the search tree reaches one more vertex in U than in V, and thereby guarantees progress toward optimality. Searching and adjusting repeat until a perfect matching is found.
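The adjustment rule can be sketched in a few lines; in this illustrative scenario the search, starting from the initial potentials of the worker/task example, has reached every row but only column T1, so the bottleneck slack is 11:

```python
def adjust_potentials(cost, y, z, visited_u, visited_v):
    """Shift duals by the bottleneck slack delta; mutates y and z in place."""
    n = len(cost)
    delta = min(cost[u][v] - y[u] - z[v]
                for u in visited_u for v in range(n) if v not in visited_v)
    for u in visited_u:
        y[u] += delta            # tighten edges leaving the search tree
    for v in visited_v:
        z[v] -= delta            # keep edges inside the tree tight
    return delta

cost = [[25, 44, 36],
        [28, 41, 40],
        [23, 50, 35]]
y, z = [25, 28, 23], [0, 0, 0]   # initial potentials
delta = adjust_potentials(cost, y, z, {0, 1, 2}, {0})
print(delta, y, z)               # 11 [36, 39, 34] [-11, 0, 0]
# Dual feasibility is preserved after the shift:
assert all(y[u] + z[v] <= cost[u][v] for u in range(3) for v in range(3))
```

The new tight edge appears at the entry achieving the minimum slack (here (W1, T3)), exactly as the adjustment is meant to do.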

Finding and Augmenting Along Shortest Paths

In the Hungarian algorithm, finding an augmenting path occurs within the equality subgraph, which comprises all edges where the reduced cost c_{ij} - p_i - q_j = 0, with p_i and q_j denoting the potentials (also called labels) for vertices i \in U and j \in V, respectively. This subgraph encodes feasible alternating paths relative to the current matching M, and the search begins from all free (unmatched) vertices in U. To locate the shortest augmenting path, defined by the minimal number of edges since all edges in the equality subgraph carry zero reduced cost, a breadth-first search (BFS) is employed, treating the subgraph as unweighted for length purposes. The BFS operates in layers: the initial layer includes all free U vertices, followed by V vertices reachable along unmatched edges, then U vertices reached backward along matched edges, and so forth, alternating directions to maintain the path's feasibility. This layered approach handles multiple free vertices simultaneously by initiating the search from all of them as multi-sources, ensuring the first reachable free V vertex yields a shortest path without prioritizing one source over others. Alternatively, for implementations emphasizing the weighted structure, a modified Dijkstra's algorithm can be applied directly on the residual graph with reduced costs as edge weights, where potentials serve as node labels to guarantee non-negativity (c_{ij} - p_i - q_j \geq 0 for forward edges, and the negated reduced cost for backward edges). In this setup, Dijkstra's algorithm orders vertices by cumulative distance from the sources (free U vertices) and identifies the minimal-cost augmenting path. Potential adjustments from prior steps ensure no negative cycles or weights disrupt the search. Upon discovering an augmenting path P from a free U vertex to a free V vertex, augmentation proceeds by symmetric difference: unmatched edges in P are added to M, and matched edges are removed, resulting in a new matching M \Delta P of size |M| + 1. This flip operation updates the matching without altering the total cost under the current potentials, as the path's reduced cost sums to zero.
The process repeats in the same equality subgraph until no more augmenting paths exist, at which point potentials are adjusted if necessary.

Progress and Termination Guarantees

The Hungarian algorithm progresses by iteratively identifying and augmenting along shortest augmenting paths in the equality subgraph defined by the current dual variables, with each such augmentation strictly increasing the cardinality of the matching M by one. In a balanced bipartite graph with n vertices per partition, this process performs at most n augmentations before attaining a perfect matching of size n. Multiple augmentations may occur in a single phase with fixed potentials until no more paths are found in the current equality subgraph. When no augmenting path exists in the equality subgraph for the current phase, the dual variables y_u for left vertices and z_v for right vertices are updated using the minimum slack \delta from the search tree to maintain feasibility of the reduced costs c_{uv} - y_u - z_v \geq 0 for all edges. This adjustment raises the dual objective \sum y_u + \sum z_v by a positive amount determined by \delta and creates new tight edges while preserving nonnegativity elsewhere. The algorithm terminates when all vertices are matched. At this point, M is a maximum matching by Berge's lemma, which asserts that a matching in a graph is maximum if and only if no augmenting path exists with respect to it. Moreover, the dual solution remains feasible and complementary slackness holds: reduced costs are zero for all edges in M, equating the primal and dual objectives and confirming M's optimality. Termination occurs after at most n augmentations, with the use of shortest paths bounding the work per phase. Each phase requires O(n^2) time to detect and trace augmenting paths in the equality subgraph using slack arrays. This bounded iteration count, combined with guaranteed progress per phase, ensures finite completion with an optimal assignment.

Time Complexity and Implementation

O(n^3) Implementation Details

The O(n³) implementation of the Hungarian algorithm operates on a balanced bipartite graph with n vertices on each side, using dual variables (potentials) to maintain feasibility and compute reduced costs for edges. The algorithm consists of up to n phases, each finding a shortest augmenting path in the residual graph with respect to reduced costs and augmenting the matching along it; each phase requires O(n²) time for path finding and updates, yielding an overall O(n³) time complexity. Key data structures include arrays for row potentials u and column potentials v (both of size n), matching arrays pairU and pairV (size n) to track assignments, and temporary arrays such as way (size n) to reconstruct paths, used (size n) to mark visited columns, and minv (size n) for slack values. The cost matrix c (n × n) stores original edge weights, while reduced costs are computed on the fly as c'_{ij} = c_{ij} - u_i - v_j to ensure non-negativity, allowing efficient shortest-path computations without explicit adjacency lists for dense graphs. Slack variables in minv track the minimum reduced cost from the current search tree to each column j in the right partition V, updated during the path search to identify the bottleneck delta for potential adjustments. This enables a specialized O(n²) shortest-path routine akin to Dijkstra's algorithm, implemented via iterative minimum selection over the slacks rather than a full priority queue, avoiding logarithmic factors. The main loop initializes feasible potentials (e.g., u_i = \min_j c_{ij}, v_j = 0) and an empty matching, then iterates until the matching size reaches n. In each phase, starting from a free row, the algorithm builds an alternating tree in the equality subgraph (edges with zero reduced cost), iteratively adjusting potentials by the minimum slack to expand the tree until an augmenting path to a free column is found, followed by augmentation.
Potential updates occur during each phase's search rather than after augmentation: whenever the alternating tree cannot grow, the potentials are shifted by the minimum slack to the unreached columns, ensuring the equality subgraph acquires a new tight edge and progress is made.
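The scheme described above is often realized in a compact array-based form; the following is a sketch in that style (1-indexed arrays with index 0 as a sentinel; `hungarian` is an illustrative name):

```python
def hungarian(cost):
    """O(n^3) Hungarian algorithm: returns (min_cost, row -> column map).
    Arrays are 1-indexed with position 0 used as a sentinel."""
    n = len(cost)
    INF = float("inf")
    u = [0] * (n + 1)              # row potentials
    v = [0] * (n + 1)              # column potentials
    p = [0] * (n + 1)              # p[j]: row matched to column j (0 = free)
    way = [0] * (n + 1)            # back-pointers for path reconstruction
    for i in range(1, n + 1):      # phases: rows are added one at a time
        p[0] = i
        j0 = 0
        minv = [INF] * (n + 1)     # minv[j]: slack from the tree to column j
        used = [False] * (n + 1)
        while True:                # Dijkstra-like search for a free column
            used[j0] = True
            i0, delta, j1 = p[j0], INF, -1
            for j in range(1, n + 1):
                if not used[j]:
                    cur = cost[i0 - 1][j - 1] - u[i0] - v[j]
                    if cur < minv[j]:
                        minv[j], way[j] = cur, j0
                    if minv[j] < delta:
                        delta, j1 = minv[j], j
            for j in range(n + 1): # dual update by the bottleneck slack
                if used[j]:
                    u[p[j]] += delta
                    v[j] -= delta
                else:
                    minv[j] -= delta
            j0 = j1
            if p[j0] == 0:
                break
        while j0:                  # augment along the reconstructed path
            j1 = way[j0]
            p[j0] = p[j1]
            j0 = j1
    match = {p[j] - 1: j - 1 for j in range(1, n + 1)}
    total = sum(cost[i][j] for i, j in match.items())
    return total, match

cost = [[25, 44, 36],
        [28, 41, 40],
        [23, 50, 35]]
total, match = hungarian(cost)
print(total)   # 100
print(match)   # row -> column map, e.g. {2: 0, 1: 1, 0: 2}
```

Because each phase inserts one new row into an already-optimal solution for the previous rows, this formulation is also the natural basis for the incremental variant discussed in the next subsection.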

Incremental Addition of Assignments

The incremental variant of the Hungarian algorithm addresses dynamic scenarios where the assignment problem evolves by adding one new job (or worker) at a time to an existing bipartite graph, allowing reuse of the prior optimal matching and dual variables to avoid recomputing the entire solution from scratch. This approach begins with a solved k × k assignment problem, which provides an optimal matching and feasible dual potentials (labels) for the vertices. When adding the (k+1)th row and column, representing the new job and its associated costs to existing workers, the algorithm initializes the dual potentials for the new vertices so that feasibility is preserved. The previous matching remains partially valid, as it saturates the first k vertices on both sides, leaving only the new vertices unsaturated. To update the matching, the algorithm focuses computation on the new elements by constructing the equality subgraph using the reused potentials and searching for a single augmenting path that incorporates the new job, typically via a Hungarian tree rooted at the unsaturated new vertex. If no augmenting path is immediately available, potentials are adjusted by computing the minimum slack (the smallest difference between costs and potential sums) over the relevant unmatched edges, which expands the equality subgraph and enables the path's discovery without discarding the prior solution. This adjustment preserves the optimality of the prior matching while integrating the new vertex, resulting in a cardinality increase of one. The process re-runs only the search involving the new row and column, leveraging the structure of the previous solution to limit exploration.
The time complexity of this incremental addition is O(n²) per update in the worst case, where n is the current size, since a single augmentation phase with slack maintenance suffices; over n sequential additions starting from a 1 × 1 problem, the total complexity amortizes to O(n³), matching the standard Hungarian algorithm while offering efficiency for partial updates. Earlier implementations may exhibit O(nW) behavior if using unscaled shortest-path methods with costs up to W, but polynomial-time techniques, such as those employing Dijkstra's algorithm with potentials, ensure a consistent O(n³) overall without dependence on cost magnitude. An improved variant further reduces average-case time to O(n) for certain configurations by optimizing dual initialization and augmenting-path detection, though it falls back to O(n²) in general. This incremental approach finds applications in online assignment scenarios, such as dynamic data association in radar tracking systems where targets appear sequentially, or interactive topic modeling where mappings update based on user feedback without full recomputation. By enabling efficient handling of evolving inputs, it supports decision-making in environments like scheduling or matching markets with arriving entities.

Theoretical Connections

Relation to Successive Shortest Paths

The assignment problem can be modeled as a minimum-cost flow problem in a bipartite flow network with unit capacities. A source vertex s is connected to each vertex in the left partition U by an edge of capacity 1 and cost 0; each vertex u \in U is connected to each vertex v \in V by an edge of capacity 1 and cost c_{uv}; and each vertex in the right partition V is connected to a sink vertex t by an edge of capacity 1 and cost 0. The goal is to compute a minimum-cost flow of value n = |U| = |V| from s to t. The successive shortest paths algorithm addresses this min-cost flow instance by beginning with a zero flow and repeatedly identifying the shortest s-t path in the residual graph, where edge costs are adjusted using node potentials to obtain non-negative reduced costs \bar{c}(i,j) = c(i,j) + \pi_i - \pi_j. Flow is augmented by 1 along this path, potentials are updated based on the path distances to preserve reduced-cost non-negativity, and the process repeats for n iterations until the maximum flow is achieved. The Hungarian algorithm is equivalent to the successive shortest paths algorithm when specialized to this unit-capacity bipartite network. The row and column potentials in the Hungarian method function as the node potentials \pi for the shortest-path computations; the Hungarian reduced costs \bar{c}_{uv} = c_{uv} - y_u - z_v match the flow network's reduced costs on forward and backward edges. Furthermore, the equality subgraph in the Hungarian algorithm, formed by edges with zero reduced cost, corresponds exactly to the subgraph of zero-reduced-cost residual edges used to find shortest augmenting paths. This equivalence highlights how the Hungarian algorithm streamlines the successive shortest paths framework by exploiting the bipartite structure and unit capacities, replacing general-purpose shortest-path computations such as Bellman-Ford with efficient augmenting-path searches within the equality subgraph, achieving O(n^3) time without the broader min-cost flow overhead.
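The equivalence can be checked on the worked example with a small, unoptimized successive-shortest-paths solver that uses Bellman-Ford on the residual network (function name and node numbering are illustrative, not a standard API):

```python
import collections

def assignment_via_min_cost_flow(cost):
    """Successive shortest paths on the unit-capacity network s -> U -> V -> t.
    A sketch using Bellman-Ford for the path search, not an optimized solver."""
    n = len(cost)
    s, t = 0, 2 * n + 1                  # nodes: s, U = 1..n, V = n+1..2n, t
    cap = collections.defaultdict(dict)  # residual capacities
    w = collections.defaultdict(dict)    # residual costs (reverse edge: -cost)
    def add_edge(a, b, c, cst):
        cap[a][b], w[a][b] = c, cst
        cap[b][a], w[b][a] = 0, -cst
    for i in range(n):
        add_edge(s, 1 + i, 1, 0)
        add_edge(n + 1 + i, t, 1, 0)
        for j in range(n):
            add_edge(1 + i, n + 1 + j, 1, cost[i][j])
    total = 0
    for _ in range(n):                   # n unit augmentations
        dist = {x: float("inf") for x in range(2 * n + 2)}
        prev = {}
        dist[s] = 0
        for _ in range(2 * n + 1):       # Bellman-Ford on the residual graph
            for a in list(cap):
                for b in cap[a]:
                    if cap[a][b] > 0 and dist[a] + w[a][b] < dist[b]:
                        dist[b] = dist[a] + w[a][b]
                        prev[b] = a
    # (augmenting along shortest paths keeps the residual graph free of
    #  negative cycles, so Bellman-Ford distances are well defined)
        total += dist[t]
        node = t
        while node != s:                 # push one unit of flow along the path
            a = prev[node]
            cap[a][node] -= 1
            cap[node][a] += 1
            node = a
    return total

print(assignment_via_min_cost_flow([[25, 44, 36],
                                    [28, 41, 40],
                                    [23, 50, 35]]))   # 100
```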

Linear Programming Duality Interpretation

The Hungarian algorithm for solving the assignment problem can be interpreted through the framework of linear programming (LP) duality, where the primal problem corresponds to finding a minimum-cost perfect matching in a bipartite graph, and the dual provides bounds via node potentials. The primal LP formulation is as follows: \begin{align*} \min &\quad \sum_{i=1}^n \sum_{j=1}^n c_{ij} x_{ij} \\ \text{s.t.} &\quad \sum_{j=1}^n x_{ij} = 1 \quad \forall i = 1, \dots, n, \\ &\quad \sum_{i=1}^n x_{ij} = 1 \quad \forall j = 1, \dots, n, \\ &\quad x_{ij} \geq 0 \quad \forall i,j, \end{align*} where c_{ij} is the cost of assigning row i to column j, and x_{ij} indicates the assignment (with x_{ij} \in \{0,1\} for integral solutions). This LP has an integral optimal solution because the constraint matrix (the incidence matrix of the complete bipartite graph K_{n,n}) is totally unimodular, ensuring that the polytope defined by the constraints has integer vertices. The associated dual LP is: \begin{align*} \max &\quad \sum_{i=1}^n y_i + \sum_{j=1}^n z_j \\ \text{s.t.} &\quad y_i + z_j \leq c_{ij} \quad \forall i,j = 1, \dots, n, \\ &\quad y_i, z_j \in \mathbb{R} \quad \forall i,j, \end{align*} where y_i and z_j represent unrestricted potentials (or labels) for rows and columns, respectively. By weak duality, any feasible dual solution provides a lower bound on the optimal primal cost, and any feasible primal solution an upper bound on the optimal dual objective. The Hungarian algorithm operates as a primal-dual method, maintaining a feasible primal solution (a partial matching, which is integral) and a feasible dual solution (feasible potentials y, z) throughout its execution. It iteratively augments the matching while adjusting the potentials to preserve dual feasibility, ensuring reduced costs c_{ij} - y_i - z_j \geq 0 for all edges. Upon termination with a perfect matching, strong duality holds with zero duality gap, as the primal and dual objectives equalize.
This optimality is certified by complementary slackness conditions: for every assigned edge (i,j) with x_{ij} = 1, the dual constraint is tight (y_i + z_j = c_{ij}), and for every dual variable with positive contribution, the corresponding primal constraint is tight. These conditions align the algorithm's potential adjustments with the economic interpretation of dual variables as shadow prices in assignment markets. This duality perspective traces back to foundational work by John von Neumann, whose 1953 analysis of a game equivalent to the optimal assignment problem, building on ideas from Theory of Games and Economic Behavior (co-authored with Oskar Morgenstern), applied similar dual formulations to assignment-like problems in economics, influencing the development of primal-dual algorithms for combinatorial optimization.
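On the worked example from earlier, the duals produced by the reductions (y from the row minima, z from the column minima of the row-reduced matrix) certify optimality numerically:

```python
cost = [[25, 44, 36],
        [28, 41, 40],
        [23, 50, 35]]
assignment = {0: 2, 1: 1, 2: 0}   # W1->T3, W2->T2, W3->T1 (cost 100)
y = [25, 28, 23]                  # row minima (row-reduction amounts)
z = [0, 13, 11]                   # column minima of the row-reduced matrix

# Dual feasibility: y_i + z_j <= c_ij on every edge
assert all(y[i] + z[j] <= cost[i][j] for i in range(3) for j in range(3))
# Complementary slackness: tight constraints on assigned edges
assert all(y[i] + z[assignment[i]] == cost[i][assignment[i]] for i in range(3))
# Strong duality: the dual objective equals the primal cost
print(sum(y) + sum(z))   # 100
```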

Matrix-Based Algorithm Description

Initial Reduction Steps

The initial reduction steps of the Hungarian algorithm prepare the cost matrix for the assignment problem, which involves assigning n workers to n tasks to minimize total cost, represented by an n × n matrix C where c_{ij} denotes the cost of assigning worker i to task j. These steps transform the original matrix into a reduced matrix with non-negative entries and zeros indicating potential optimal assignments, without altering the optimal assignment. In the first step, row reduction is performed by identifying the minimum entry in each row of the cost matrix and subtracting this minimum value from every element in that row. This operation ensures that at least one zero appears in each row, as the smallest entry becomes zero while all others remain non-negative. For example, consider a cost matrix:
| | Task 1 | Task 2 | Task 3 |
|---|---|---|---|
| Worker 1 | 5 | 9 | 2 |
| Worker 2 | 6 | 7 | 4 |
| Worker 3 | 8 | 3 | 1 |
Subtracting the row minima (2, 4, and 1, respectively) yields:
| | Task 1 | Task 2 | Task 3 |
|---|---|---|---|
| Worker 1 | 3 | 7 | 0 |
| Worker 2 | 2 | 3 | 0 |
| Worker 3 | 7 | 2 | 0 |
This step implicitly sets the initial dual variable u_i for each row i to the row minimum, contributing to a feasible dual solution in the linear programming formulation of the assignment problem. The second step, column reduction, applies a similar process to the row-reduced matrix by finding the minimum entry in each column and subtracting it from all elements in that column. This guarantees at least one zero in each column, further refining the matrix. Continuing the example, the column minima are 2, 2, and 0, leading to subtractions that produce:
| | Task 1 | Task 2 | Task 3 |
|---|---|---|---|
| Worker 1 | 1 | 5 | 0 |
| Worker 2 | 0 | 1 | 0 |
| Worker 3 | 5 | 0 | 0 |
Here, the column subtractions set the initial dual variables v_j for each column j to these minima. The resulting matrix has all entries non-negative, with zeros marking positions where the reduced cost equals zero, representing candidate assignments in the equality subgraph. This initialization establishes feasible dual variables satisfying u_i + v_j \leq c_{ij} for all i, j, with equality at the zeros, thus providing a starting point for the primal-dual optimization that maintains complementary slackness and progresses toward optimality. The total subtracted value, the sum of all row minima plus all column minima, equals the dual objective and serves as a lower bound on the minimum assignment cost.
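The two reduction steps for this example, with the induced duals and the resulting lower bound, can be sketched as follows (here the bound of 11 happens to be attained already by the zeros):

```python
cost = [[5, 9, 2],
        [6, 7, 4],
        [8, 3, 1]]
n = len(cost)
u = [min(row) for row in cost]                               # row minima [2, 4, 1]
rowred = [[cost[i][j] - u[i] for j in range(n)] for i in range(n)]
v = [min(rowred[i][j] for i in range(n)) for j in range(n)]  # column minima [2, 2, 0]
reduced = [[rowred[i][j] - v[j] for j in range(n)] for i in range(n)]
print(reduced)           # [[1, 5, 0], [0, 1, 0], [5, 0, 0]]
print(sum(u) + sum(v))   # 11, a lower bound on the optimal assignment cost
```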

Covering and Uncovered Zeros

In the Hungarian algorithm, following the initial reduction of the cost matrix to create zeros representing candidate assignments, the next phase covers these zeros with the minimum number of horizontal (row) or vertical (column) lines. This minimum line cover is the smallest set of rows and columns that collectively intersect every zero entry, and its size directly indicates the current maximum matching in the bipartite graph defined by the positions of the zeros. By König's theorem, the size of this minimum cover equals the size of the maximum matching, so the covering step efficiently assesses the completeness of the partial assignment without redundant computation. If the minimum number of lines equals the matrix dimension n, a perfect matching exists among the zeros, allowing the extraction of an optimal assignment by selecting one zero per row and per column such that no two share a row or column. This corresponds to a complete set of non-conflicting positions at zero reduced cost, yielding the optimal solution for the original problem. Identifying such an assignment typically involves greedily choosing zeros or using the structure of the matching to guide selection, ensuring feasibility and optimality. When the minimum number of lines is less than n, the current zeros support only a partial matching of size k < n, indicating that additional zeros must be created to complete the assignment. This iterative process increases the size of the maximum matching in the zero pattern until the cover requires n lines, at which point the full optimal assignment is obtained. The construction of the minimum cover itself relies on the existing partial matching: mark rows without assigned zeros, then iteratively mark columns having zeros in marked rows and rows whose assignments lie in marked columns, until stabilization; the final cover consists of the unmarked rows and the marked columns.
Underlying this covering mechanism is an implicit connection to the dual problem in linear programming, where the lines correspond to tight potentials (dual variables u_i for rows and v_j for columns) satisfying u_i + v_j \leq c_{ij} for all entries, with equality at the zeros. The minimum line cover thus embodies a basic dual solution, with each line representing a dual variable that "covers" the constraints at the zeros, ensuring the primal-dual optimality conditions hold once n lines are reached. This duality perspective, rooted in the algorithm's foundational design, guarantees that the covering step not only tests for completeness but also maintains dual feasibility throughout the iterations.
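The marking procedure described above can be sketched directly; `min_line_cover` is an illustrative helper operating on a set of zero positions and a partial matching. In this example, the zeros of the row-reduced worker/task matrix all lie in column T1, and the cover of size 1 matches the partial matching of size 1:

```python
def min_line_cover(n, zeros, assign):
    """König-style marking: zeros is a set of (row, col); assign maps row -> col.
    Returns (covered rows, covered columns)."""
    marked_rows = {i for i in range(n) if i not in assign}   # unassigned rows
    marked_cols = set()
    changed = True
    while changed:
        changed = False
        for (i, j) in zeros:                 # columns with zeros in marked rows
            if i in marked_rows and j not in marked_cols:
                marked_cols.add(j)
                changed = True
        for i, j in assign.items():          # rows matched into marked columns
            if j in marked_cols and i not in marked_rows:
                marked_rows.add(i)
                changed = True
    cover_rows = set(range(n)) - marked_rows # unmarked rows + marked columns
    return cover_rows, marked_cols

zeros = {(0, 0), (1, 0), (2, 0)}             # zeros of the row-reduced matrix
assign = {0: 0}                              # a partial matching of size 1
print(min_line_cover(3, zeros, assign))      # (set(), {0}): one line suffices
```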

Line Adjustment and Resolution

In the Hungarian algorithm, when the minimum number of lines required to cover all zeros in the reduced cost matrix is fewer than n, an adjustment step is performed to modify the matrix and introduce additional zeros, facilitating progress toward a complete . This step begins by identifying an uncovered zero at position (i, j) in the matrix, which indicates the insufficiency of the current covering. The algorithm then locates the minimum entry \delta among all elements not covered by any of the drawn lines (i.e., in the submatrix consisting of uncovered rows and uncovered columns). This \delta is positive, as all zeros are covered by the lines. The adjustment proceeds by subtracting \delta from every uncovered entry in the matrix—those positioned at the intersection of uncovered rows and uncovered columns—and adding \delta to every entry covered by exactly two lines, namely the intersections of covered rows and covered columns. Entries covered by exactly one line remain unchanged. This operation creates a new zero at the location of the original minimum entry \delta, while preserving all existing zeros: uncovered zeros do not exist by construction, single-covered zeros are neither subtracted from nor added to, and the overall reduced costs maintain their non-negativity. The effect is to expand the set of possible assignments without invalidating prior ones. Following the adjustment, the process returns to drawing the minimum number of lines to cover all zeros in the updated . This iterative refinement continues until exactly n lines suffice to cover all zeros, at which point a can be extracted. Each iteration guarantees progress by increasing the size of the maximum matching in the zero , as per the underlying combinatorial structure. In the graph-theoretic of , this line adjustment corresponds to a coordinated shift in the node potentials u_i for rows and v_j for columns. 
Specifically, the subtractions and additions amount to a uniform change of \delta in the potentials, raising them on uncovered rows and lowering them on covered columns (under the convention u_i + v_j \leq c_{ij}), which exposes new tight edges along alternating paths and tightens the dual solution, increasing the dual objective and narrowing the duality gap while preserving dual feasibility.
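The adjustment step described above can be sketched as a short routine. The matrix and line sets in the usage example are illustrative (zeros confined to one column, covered by a single vertical line), not taken from a particular published run:

```python
# Sketch of the delta-adjustment step: find the minimum uncovered entry,
# subtract it from all uncovered entries, add it to all doubly covered ones.
def adjust(matrix, covered_rows, covered_cols):
    n = len(matrix)
    delta = min(matrix[i][j]
                for i in range(n) if i not in covered_rows
                for j in range(n) if j not in covered_cols)
    for i in range(n):
        for j in range(n):
            if i not in covered_rows and j not in covered_cols:
                matrix[i][j] -= delta      # uncovered: subtract delta
            elif i in covered_rows and j in covered_cols:
                matrix[i][j] += delta      # doubly covered: add delta
            # singly covered entries are left unchanged
    return delta

# Illustrative data: all zeros lie in column 0, covered by one line (< n).
matrix = [[0, 2, 3],
          [0, 4, 5],
          [0, 6, 7]]
delta = adjust(matrix, covered_rows=set(), covered_cols={0})
# delta == 2; a new zero appears at (0, 1), and column 0 is unchanged
```

Note how the singly covered column of zeros survives intact while a new zero is created outside it, exactly the invariants the paragraph above describes.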

Final Assignment Extraction

Once the Hungarian algorithm has reached a state where exactly n lines are required to cover all the zeros in the fully reduced matrix, the optimal assignment can be extracted by identifying a set of n independent zeros, ensuring one zero per row and one per column, which corresponds to a permutation of assignments. This selection process guarantees a complete matching without conflicts, as the covering condition confirms the existence of such a set of non-overlapping zeros. The extraction proceeds by tracing through the zeros in the reduced matrix: start with an arbitrary zero, then select a zero in a different row and column, proceeding iteratively to mark positions while avoiding any row or column already used, backtracking if a dead end is reached. This tracing routine, often implemented as a systematic search for independent positions, ensures the marked zeros form a maximum matching of size n, directly yielding the assignment of rows (e.g., workers) to columns (e.g., tasks). In practice, this step can be computed efficiently using augmenting-path methods or direct permutation search on the bipartite graph of zeros. The total cost of the optimal assignment is computed by summing the entries of the original cost matrix at the selected positions; the row and column reductions applied during the algorithm shift the cost of every complete assignment by the same amount, so they do not affect which assignment is optimal. For instance, if the assigned pairs are (i_k, j_k) for k = 1 to n, the cost is \sum_{k=1}^n c_{i_k j_k}, where c denotes the original cost matrix. This assignment is verified as optimal because the sum of all reductions (the dual objective) equals the total cost of the selected assignment, establishing that no lower-cost assignment exists, per the duality between the primal assignment problem and the dual covering problem. The presence of n independent zeros attains the bound from König's theorem, confirming the solution's minimality.
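The extraction step can be sketched as an augmenting-path search on the bipartite graph of zeros. The matrices below are illustrative: the reduced matrix shown is what results from the hypothetical original after row/column reduction and one adjustment step, not data from a published example:

```python
# Sketch: extract n independent zeros from a fully reduced matrix via a
# simple augmenting-path search, then sum the matching entries of the
# original cost matrix. Matrices are illustrative data.
def extract_assignment(reduced):
    n = len(reduced)
    match_col = [-1] * n        # match_col[j] = row currently assigned to column j

    def try_row(i, seen):
        # Try to place row i on some zero, reassigning earlier rows if needed.
        for j in range(n):
            if reduced[i][j] == 0 and j not in seen:
                seen.add(j)
                if match_col[j] == -1 or try_row(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False

    for i in range(n):
        try_row(i, set())
    return [(match_col[j], j) for j in range(n)]

original = [[4, 1, 3],
            [2, 0, 5],
            [3, 2, 2]]
reduced  = [[1, 0, 1],          # fully reduced: zeros admit a perfect matching
            [0, 0, 4],
            [0, 1, 0]]
pairs = extract_assignment(reduced)
total = sum(original[i][j] for i, j in pairs)   # cost summed on the ORIGINAL matrix
```

Summing over the original matrix, as the text describes, is what makes the reductions irrelevant to the final tally.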
