Floyd–Warshall algorithm

The Floyd–Warshall algorithm is a dynamic programming algorithm that solves the all-pairs shortest path problem for a directed graph with n vertices, computing the shortest path distances between every pair of vertices in Θ(n^3) time, and it correctly handles graphs with negative edge weights provided no negative-weight cycles are present. Named after its two key developers, the algorithm draws from Stephen Warshall's 1962 work on computing the transitive closure of a directed graph via an efficient procedure on Boolean matrices, which determines reachability between all pairs of vertices. Shortly thereafter in 1962, Robert W. Floyd published an extension that adapts the approach to weighted graphs by replacing the Boolean operations with minimization and addition to find actual shortest path lengths rather than mere reachability. This combination of ideas established the algorithm as a foundational technique in graph theory and algorithm design, often presented as a prime example of dynamic programming where subproblems are solved incrementally by considering growing subsets of allowed intermediate vertices.

The algorithm operates by maintaining an n × n distance matrix D, initialized such that D_{i,j} holds the weight of the direct edge from i to j (or ∞ if no such edge exists) and D_{i,i} = 0 for all i. It then iterates over each vertex k from 1 to n as a potential intermediate point, updating the matrix with the recurrence D^{(k)}_{i,j} = min(D^{(k-1)}_{i,j}, D^{(k-1)}_{i,k} + D^{(k-1)}_{k,j}) for all pairs (i, j), using three nested loops over the vertices. This process ensures that after considering all intermediates up to k, D^{(k)}_{i,j} represents the shortest path from i to j using only vertices 1 through k as intermediates, yielding the complete solution in D^{(n)} upon termination. The space complexity is Θ(n^2) due to the matrix storage, and the algorithm can also detect negative cycles by checking whether any diagonal entry D^{(n)}_{i,i} is negative.

Beyond theoretical graph analysis, the Floyd–Warshall algorithm finds practical use in areas such as network routing to optimize data packet paths in communication systems, transportation planning for minimizing travel times across road networks, social network analysis to measure relationship strengths or influence propagation, and robotics for path planning in complex environments. A variant replaces minimization with logical OR and addition with logical AND to compute transitive closures, directly echoing Warshall's original focus on reachability.

Background and Motivation

Shortest Path Problems

In graph theory, the shortest path problem seeks to identify a path between two vertices in a weighted graph that minimizes the sum of the weights along its edges. This applies to both directed and undirected graphs, where edges represent connections with associated costs, such as distances or times, and the goal is to find the path with the least cumulative cost from a starting vertex to a destination vertex. Shortest path problems are categorized by the scope of computation required. The single-source variant computes the minimum-weight paths from one designated source vertex to all other vertices in the graph, which is useful when exploring routes originating from a fixed point. The single-destination variant reverses this by finding paths to a single target from all other vertices, often handled by applying the single-source approach to the graph with all edge directions reversed. The all-pairs variant extends this further by determining shortest paths between every pair of vertices, providing a complete distance matrix for the graph. A key challenge in shortest path problems arises with negative edge weights, which can lead to ill-defined solutions if the graph contains a negative-weight cycle reachable from the source, as traversing such a cycle indefinitely would reduce the weight without bound; algorithms thus typically assume the absence of negative cycles to ensure finite shortest paths exist. These problems motivate numerous real-world applications, including optimizing packet forwarding in internet routing protocols to minimize latency and determining efficient transportation routes in logistics networks to reduce fuel consumption or travel time.

All-Pairs Shortest Paths

The all-pairs shortest paths (APSP) problem involves computing the shortest path distances between every pair of vertices in a weighted directed graph. Formally, given a graph G = (V, E, w) with n = |V| vertices and a weight function w: E \to \mathbb{R}, the goal is to produce an n \times n distance matrix D where d_{ij} represents the minimum total weight of any path from vertex i to vertex j. This problem arises in numerous applications, such as network routing, transportation planning, and optimization tasks requiring global distance information. The problem allows the graph to contain negative edge weights, which permits modeling scenarios such as cost reductions, but the graph must not contain negative-weight cycles, as these would permit arbitrarily low path weights through repeated traversal, rendering shortest paths undefined for affected vertices. Graphs are typically represented using an adjacency matrix, where the entry at row i and column j stores the weight of the direct edge from i to j (or \infty if no edge exists), facilitating efficient matrix operations for the computations and naturally accommodating the O(n^2) space needed for the output distance matrix. Computing all-pairs distances directly addresses scenarios with multiple query pairs more efficiently than invoking a single-source shortest paths algorithm n times, particularly in dense graphs where the edge set approaches O(n^2) in size. For instance, repeating the Bellman–Ford algorithm—suitable for negative weights—yields O(n \cdot n \cdot n^2) = O(n^4) time for dense inputs, whereas specialized all-pairs methods achieve O(n^3), a factor-of-n improvement. Even with optimized single-source variants such as Fibonacci-heap Dijkstra for non-negative weights, the total time for dense graphs is O(n^3), underscoring the value of integrated all-pairs approaches for comprehensive queries. The O(n^3) time complexity of standard APSP solutions, such as dynamic programming over intermediate vertices, is generally acceptable for dense graphs with moderate n (e.g., up to a few hundred vertices), where the cubic scaling aligns with the input density and output size. However, for larger n, such as thousands of vertices, the approach encounters significant scaling challenges due to the rapid growth in time and the O(n^2) space requirement, often necessitating approximations, parallelism, or sparsity exploitation in practical implementations.

History

Origins

The Floyd–Warshall algorithm emerged from independent contributions in 1962 by American computer scientists Robert W. Floyd and Stephen Warshall, building on earlier advancements in graph theory and optimization. Floyd introduced a dynamic programming method specifically tailored for computing all-pairs shortest paths in weighted graphs, presenting it as a concise algorithmic procedure in his seminal work. This approach iteratively refines path distances by considering intermediate vertices, enabling efficient solutions to network problems without initially requiring explicit path reconstruction. Warshall, in parallel, developed a related technique focused on the transitive closure of directed graphs represented by Boolean matrices, which determines reachability between nodes in unweighted settings but can be adapted to handle edge weights through analogous operations. These 1962 publications marked a pivotal synthesis of dynamic programming principles, though an essentially identical algorithm for computing transitive closure had been proposed earlier by French mathematician Bernard Roy in 1959, whose work in graph theory laid foundational groundwork but received limited initial attention outside French-language circles. Roy's formulation emphasized algebraic structures on graphs, influencing the matrix-based computations central to the later versions. The algorithm's roots trace to mid-20th-century developments in operations research, including George Dantzig's simplex method for linear programming (introduced in 1947 and refined through the 1950s), which provided tools for optimizing network constraints and indirectly inspired path-finding techniques. Additionally, L. R. Ford Jr.'s contributions to network flow theory in the 1950s, particularly his 1956 collaboration with D. R. Fulkerson on maximal flows, advanced the study of directed graphs and shortest paths in transportation and logistics contexts, setting the stage for dynamic programming applications. Initially applied in operations research for solving complex network optimization problems, such as routing in transportation systems, the algorithm quickly found adoption in early computer science for network analysis tasks, including communication network design and optimization, where efficient all-pairs connectivity information was essential.

Naming Conventions

The Floyd–Warshall algorithm derives its standard name from the surnames of Robert W. Floyd and Stephen Warshall, who independently published the essential components of the method in 1962. Floyd introduced the algorithm specifically for computing all-pairs shortest paths in weighted graphs in his paper "Algorithm 97: Shortest Path," published in the Communications of the ACM. Warshall presented a closely related approach for finding the transitive closure of a directed graph using Boolean matrices in "A Theorem on Boolean Matrices," appearing in the Journal of the ACM. The hyphenated nomenclature reflects this dual attribution, a practice prevalent in computer science to recognize multiple independent inventors, as seen in algorithms like Bellman–Ford. In the immediate aftermath of their publications, the algorithm was often cited separately as Floyd's algorithm—emphasizing its application to shortest paths—or Warshall's algorithm, highlighting its use in transitive closure problems. In subsequent years, the combined "Floyd–Warshall" designation emerged in academic literature to unify references and honor both contributions equally, avoiding disputes over precedence despite an earlier, less noticed formulation by Bernard Roy in 1959 for analyzing transitivity and connectivity in directed graphs. Variations in naming persist in some older texts, where it is referred to as "Floyd's all-pairs shortest path algorithm" to underscore its optimization for weighted graphs, and the Roy–Floyd or Roy–Warshall forms occasionally appear to credit the 1959 precursor. There are no significant controversies surrounding the naming, but discussions of publication priority note that Roy's work, detailed in "Transitivité et connexité," C. R. Acad. Sci. 249, 216–218 (1959), laid foundational groundwork yet received limited initial attention outside French-language circles.

Algorithm

Description

The Floyd–Warshall algorithm employs dynamic programming to compute the shortest paths between all pairs of vertices in a directed graph with weighted edges that may include positive or negative weights, assuming no negative-weight cycles are present. The approach builds upon the principle of optimal substructure, where shortest paths are constructed incrementally by considering subsets of possible intermediate vertices. The algorithm initializes an n \times n distance matrix D^{(0)}, derived from the graph's adjacency matrix, where D^{(0)}_{ij} equals the weight of the direct edge from i to j if such an edge exists, 0 if i = j, and \infty (a sufficiently large value) otherwise; this initial matrix captures shortest paths using no intermediate vertices.[5] It then proceeds iteratively over each vertex k from 1 to n as a potential intermediate. For every pair of vertices i and j, the entry D^{(k)}_{ij} is updated to reflect the shortest path using only vertices 1 through k as intermediates, via the relaxation D^{(k)}_{ij} = \min\left(D^{(k-1)}_{ij}, D^{(k-1)}_{ik} + D^{(k-1)}_{kj}\right). This step ensures that after considering k, all paths in the matrix account for possible detours through the first k vertices. A fundamental property is that the algorithm's correctness holds regardless of the order in which vertices are taken as intermediates, as the dynamic programming updates propagate improvements across all relevant subpaths without dependency on specific vertex labels. Upon completion, the final matrix D^{(n)} provides the shortest path lengths between every pair of vertices if no negative cycles exist; negative cycles, if present, can be detected by checking whether any diagonal entry satisfies D^{(n)}_{ii} < 0.

Pseudocode

The following pseudocode implements the core Floyd–Warshall algorithm to compute the shortest path distances, assuming the distance matrix dist is an n × n array initialized as described in the previous section (with dist[i][j] set to the edge weight if an edge exists, 0 if i = j, and ∞ otherwise):
for k ← 1 to n
    for i ← 1 to n
        for j ← 1 to n
            dist[i][j] ← min(dist[i][j], dist[i][k] + dist[k][j])
This triple nested loop applies the dynamic programming recurrence in place to fill the distance matrix with all-pairs shortest path lengths. Extending the algorithm to recover the actual paths, by maintaining an auxiliary next matrix alongside the distance matrix, is covered in the Path Reconstruction section below.

Example

Setup

To illustrate the Floyd–Warshall algorithm, consider a small directed weighted graph G = (V, E) with vertex set V = \{A, B, C, D\} and edge set E consisting of the following weighted edges: A \to B with weight 3, A \to C with weight 5, B \to C with weight 2, B \to D with weight -4, and C \to D with weight 1. This includes a negative-weight edge but contains no cycles, ensuring the shortest paths are well-defined without complications from negative cycles. The graph can be represented using an adjacency matrix W, a 4×4 matrix where the entry w_{ij} is the weight of the direct edge from i to j if it exists, or \infty (a large positive value representing no direct connection) otherwise. The diagonal entries are set to 0, indicating zero cost for a vertex to reach itself, which carries over directly into the initial distance matrix for the algorithm.
      A    B    C    D
A     0    3    5    ∞
B     ∞    0    2   -4
C     ∞    ∞    0    1
D     ∞    ∞    ∞    0
The initial distance matrix D^{(0)} for the Floyd–Warshall algorithm is a copy of the adjacency matrix W, with all diagonal entries explicitly set to 0 to represent the trivial shortest path from a vertex to itself. This matrix serves as the starting point for computing all-pairs shortest paths.
      A    B    C    D
A     0    3    5    ∞
B     ∞    0    2   -4
C     ∞    ∞    0    1
D     ∞    ∞    ∞    0
The goal of applying the Floyd–Warshall algorithm to this setup is to compute the shortest path distances between every pair of vertices in V, verifying the final distance matrix against known paths, such as the path A \to B \to D with total weight -1.
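Before stepping through the iterations by hand, the setup can be checked programmatically. The following minimal Python sketch (the variable names are illustrative, not taken from any library) encodes the matrix above and runs the standard triple loop:
python
INF = float('inf')

# Vertices A, B, C, D mapped to indices 0..3
dist = [
    [0,   3,   5,   INF],  # A
    [INF, 0,   2,   -4 ],  # B
    [INF, INF, 0,   1  ],  # C
    [INF, INF, INF, 0  ],  # D
]

n = len(dist)
for k in range(n):          # intermediate vertex
    for i in range(n):      # source
        for j in range(n):  # destination
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]

print(dist[0][3])  # prints -1, the weight of the path A -> B -> D
Running this confirms the expected A-to-D distance of -1 before the manual walk-through below.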

Computation Steps

The Floyd–Warshall algorithm computes all-pairs shortest paths by systematically relaxing paths through each intermediate vertex k in sequence, updating the distance matrix D^(k), where D^(k)[i][j] represents the shortest path from vertex i to j using only the first k vertices as intermediates. For the example graph with vertices labeled A, B, C, D (indexed 1 to 4 for computation), the initial distance matrix D^(0), derived from the direct edges (A to B: 3, A to C: 5, B to C: 2, B to D: -4, C to D: 1) with infinity (∞) for absent edges and 0 on the diagonal, is as follows:
      A    B    C    D
A     0    3    5    ∞
B     ∞    0    2   -4
C     ∞    ∞    0    1
D     ∞    ∞    ∞    0
In the first iteration (k=1, vertex A), the algorithm updates every pair (i, j) by checking whether paths through A improve the distances: D[i][j] ← min(D[i][j], D[i][A] + D[A][j]). Since there are no incoming edges to A from other vertices, no distances are updated, and the matrix remains unchanged after this step. In the second iteration (k=2, vertex B), updates consider paths through B. For instance, the distance from A to D improves via A → B → D: min(∞, D[A][B] + D[B][D]) = min(∞, 3 + (-4)) = -1. The distance from A to C via B is min(5, 3 + 2) = 5 (no change). No other pairs see improvements (for example, there is no path from C to B or from D to A). The updated matrix D^(2) is:
      A    B    C    D
A     0    3    5   -1
B     ∞    0    2   -4
C     ∞    ∞    0    1
D     ∞    ∞    ∞    0
In the third iteration (k=3, vertex C), paths through C are checked. For A to D, the potential update is min(-1, D[A][C] + D[C][D]) = min(-1, 5 + 1) = -1 (no change). The path B → C → D yields min(-4, 2 + 1) = -4 (no change), and other pairs such as C to B or D to C remain ∞ because no relevant edges exist. Thus, the matrix D^(3) is identical to D^(2). In the final iteration (k=4, vertex D), paths through D are evaluated. For example, A to B via D gives min(3, D[A][D] + D[D][B]) = min(3, -1 + ∞) = 3 (no change). Since D has no outgoing edges, no further relaxations occur, such as for C to A. The matrix D^(4) is therefore the same as D^(3), yielding the all-pairs shortest paths:
      A    B    C    D
A     0    3    5   -1
B     ∞    0    2   -4
C     ∞    ∞    0    1
D     ∞    ∞    ∞    0
This final matrix shows, for example, that the shortest distance from A to D is -1, achieved via the path A → B → D.

Negative Cycles

Detection

The Floyd–Warshall algorithm provides a method to detect negative cycles in a directed graph with weighted edges by inspecting the resulting distance matrix after all iterations have completed. A negative cycle is present if, for any vertex i, the diagonal entry dist[i][i] < 0, indicating that there exists a path from i back to itself with total weight less than zero. This detection is reliable because the algorithm systematically considers all possible intermediate vertices, effectively computing the minimum-weight path from each vertex to every other vertex, including itself; a negative self-distance implies the influence of a negative-weight cycle reachable from and returning to i. The detection mechanism leverages the dynamic programming nature of the algorithm, where dist[i][i] is initialized to 0 (assuming no negative-weight self-loops) and updated only if a shorter path is found through intermediates. If no negative cycles exist, dist[i][i] remains 0, since every cycle then has non-negative weight. The presence of a negative cycle, however, causes relaxations that drive dist[i][i] negative, propagating the effect through the matrix. This approach works for both connected and disconnected graphs, as the check applies independently to each vertex. Upon detection of any dist[i][i] < 0, the algorithm should report the existence of negative cycles rather than return path distances, since shortest paths involving vertices reachable from the cycle are undefined. While the full algorithm must typically run to completion to ensure comprehensive detection across all vertices, some implementations optionally check the diagonal after each iteration over an intermediate vertex k to identify problems early, though this may not catch all cycles until completion. If negative cycles are found, further implications include the invalidity of the computed distances for affected pairs, as detailed in the next subsection.
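As a minimal illustration of this check (assuming a distance matrix d that the triple loop has already processed), the test reduces to scanning the diagonal:
python
def has_negative_cycle(d):
    """Return True if any diagonal entry of the processed distance matrix is negative."""
    return any(d[i][i] < 0 for i in range(len(d)))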

Implications

In graphs containing negative cycles, the shortest path distances between affected vertices are undefined, as traversing the cycle repeatedly can reduce the path weight without bound. This arises because a negative cycle allows unbounded reductions in path cost, rendering the all-pairs shortest path problem ill-posed on such graphs. When the Floyd–Warshall algorithm is applied to a graph with negative cycles, it produces incorrect distance values for pairs of vertices affected by the cycle, typically driving the reported distances to increasingly negative values. In the presence of negative cycles, computed distances can become exponentially large in magnitude (and negative), potentially causing arithmetic overflow in finite-precision implementations, as analyzed in theoretical bounds. Specifically, the algorithm's output matrix will exhibit negative entries on the diagonal for vertices involved in or reachable from the negative cycle, indicating the anomaly. In practice, upon detecting a negative cycle—often via the negative diagonal entries—users may discard the computed results, as they no longer represent valid shortest paths, or attempt to trace the cycle's vertices by examining the path from an affected vertex back to itself using the algorithm's successor or predecessor matrix. Standard implementations of the Floyd–Warshall algorithm assume input graphs are free of negative cycles to ensure correctness; if negative weights are present, preprocessing with an algorithm such as Bellman–Ford can verify this assumption before proceeding. Beyond graph theory, negative cycles detected by the algorithm have significant implications in applied domains. In financial networks modeled as graphs—where edges represent currency exchange rates—a negative cycle corresponds to an arbitrage opportunity, allowing risk-free profit through repeated trades. Similarly, in systems of difference constraints (e.g., for scheduling or resource allocation), a negative cycle indicates an inconsistency, implying that no feasible solution satisfies all constraints.
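To illustrate the arbitrage use case, one common modeling choice (an assumption here, not part of the algorithm itself) is to weight the edge from currency i to currency j by -log(rate(i, j)), so that a cycle whose rates multiply to more than 1 has negative total weight; the sketch below applies this transformation and reuses the diagonal check:
python
import math

def detect_arbitrage(rates):
    """rates[i][j]: units of currency j received per unit of currency i (0 means no quote).
    Returns True if some cycle of conversions multiplies to more than 1."""
    n = len(rates)
    INF = float('inf')
    # Weight each quote by -log(rate): a cycle with rate product > 1 has negative total weight.
    d = [[0.0 if i == j else (-math.log(rates[i][j]) if rates[i][j] > 0 else INF)
          for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return any(d[i][i] < 0 for i in range(n))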

Path Reconstruction

Approach

The Floyd–Warshall algorithm computes the shortest-path distances between all pairs of vertices but does not inherently provide the actual paths taken. To enable reconstruction of these paths, an auxiliary data structure known as the next matrix (or, in an alternative formulation, a predecessor matrix π) is maintained alongside the distance matrix during the algorithm's execution. This matrix, of size n × n where n is the number of vertices, stores for each pair (i, j) the immediate successor of i on the shortest path from i to j, or NIL if i = j or no path exists. The next matrix is initialized such that next[i][j] = j if there is a direct edge from i to j (making j the immediate successor), and next[i][j] = NIL (or a sentinel value) otherwise, including when i = j. During the core triple loop of the algorithm—iterating over intermediate vertices k from 1 to n—for each pair (i, j), if the relaxation test dist[i][j] > dist[i][k] + dist[k][j] holds and the distance is updated, the next matrix is modified accordingly by setting next[i][j] = next[i][k]. This update rule propagates the successor information from the subpath i to k onto the overall path i to j. Path reconstruction from a source s to a target t then proceeds by iteratively tracing the successors in the next matrix, building the sequence in forward order: initialize the path list with s and set current = s; while current ≠ t, set current to next[current][t]; if current = NIL, no path exists; otherwise, append the new current to the path and repeat until reaching t. This iterative traversal efficiently recovers the exact sequence of vertices without recomputing distances. For the case s = t, the path is simply [s]. Maintaining the next matrix incurs an additional space cost of O(n²), matching that of the primary distance matrix and thus not altering the overall asymptotic space bound of the algorithm.

Pseudocode

The Floyd–Warshall algorithm can be extended to track shortest paths by maintaining an auxiliary next matrix, which records the next vertex on the shortest path from each source to each destination. This matrix is initialized and updated alongside the distance matrix during the main computation.

Initialization of the next Matrix

The next matrix is an n \times n array where n is the number of vertices. It is initialized as follows:
  • For each pair of vertices i and j, if there is a direct edge from i to j, set next[i][j] = j.
  • Otherwise, set next[i][j] = NIL (indicating no direct path).
This setup captures the immediate successors for initial direct connections.

Update in the Main Algorithm

During the triple nested loop of the Floyd–Warshall algorithm (over intermediate vertices k, sources i, and destinations j), the algorithm tests whether the distance can be improved via k—that is, whether dist[i][j] > dist[i][k] + dist[k][j]—and, only if the path improves, updates the distance and simultaneously the next matrix:
if dist[i][j] > dist[i][k] + dist[k][j] then
    dist[i][j] ← dist[i][k] + dist[k][j]
    next[i][j] ← next[i][k]
end if
This ensures that \text{next} always points to the first vertex after i on the current shortest path to j. The update propagates the correct successor through the intermediate vertex k when a shorter path is found.

Path Reconstruction Pseudocode

Once the next matrix is fully computed, the shortest path from a source vertex s to a target vertex t can be reconstructed iteratively by following the successors stored in the matrix. The following pseudocode implements this extraction:
function extract_path(s, t):
    if next[s][t] = NIL and s ≠ t:
        return "No path exists"
    path ← [s]
    current ← s
    while current ≠ t:
        current ← next[current][t]
        if current = NIL:
            return "No path exists"
        append current to path
    return path
This procedure starts at s, repeatedly appends the next vertex toward t, and terminates upon reaching t. If at any point the required next entry is NIL, no path exists. The resulting path is a list of vertices from s to t in order. For the base case where s = t, the path is simply [s]. An equivalent recursive formulation is also possible for clarity in some implementations:
function extract_path(s, t):
    if s = t:
        return [s]
    if next[s][t] = NIL:
        return "No path exists"
    return [s] + extract_path(next[s][t], t)
This recursive version builds the path by prepending s and recursing on the successor until reaching t. Both approaches yield the same result and handle the absence of a path identically.
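A compact runnable version of the next-matrix bookkeeping and the iterative extraction above is sketched below in Python; the names floyd_warshall_with_paths and extract_path are illustrative rather than taken from any particular library, and None plays the role of NIL:
python
def floyd_warshall_with_paths(dist):
    """dist: n x n matrix with 0 on the diagonal and float('inf') for missing edges.
    Returns (d, next_), where next_[i][j] is the vertex after i on a shortest i -> j path (None if none)."""
    INF = float('inf')
    n = len(dist)
    d = [row[:] for row in dist]
    # next_[i][j] = j exactly when a direct edge i -> j exists.
    next_ = [[j if i != j and d[i][j] != INF else None for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
                    next_[i][j] = next_[i][k]
    return d, next_

def extract_path(next_, s, t):
    """Rebuild the vertex sequence from s to t, or return None if no path exists."""
    if s == t:
        return [s]
    if next_[s][t] is None:
        return None
    path = [s]
    while s != t:
        s = next_[s][t]
        if s is None:
            return None
        path.append(s)
    return path
For the four-vertex example from the Example section (A, B, C, D as indices 0 to 3), extract_path(next_, 0, 3) returns [0, 1, 3], i.e., the path A → B → D.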

Complexity

Time Complexity

The Floyd–Warshall algorithm exhibits a time complexity of O(n^3), where n is the number of vertices in the graph. This bound stems from the algorithm's implementation as three nested for-loops, each ranging over the n vertices, a structure that ensures all possible paths are considered through systematic dynamic programming updates. To derive this complexity, consider the pseudocode's dominant component: an outer loop over intermediate vertices k = 1 to n, a middle loop over source vertices i = 1 to n, and an inner loop over destination vertices j = 1 to n. For each triple (i, j, k), the algorithm executes a constant-time update to the distance matrix, computing \delta(i,j) \leftarrow \min(\delta(i,j), \delta(i,k) + \delta(k,j)), where \delta denotes the current shortest-path estimates. With exactly n^3 iterations and O(1) work per iteration (one addition and one minimum), the total time is \Theta(n^3). Prior to the main loops, initializing the n \times n distance matrix requires O(n^2) time, such as setting direct edge weights and \infty for non-edges, which is asymptotically negligible compared to O(n^3). Path reconstruction, which traces actual paths using a separate successor (or predecessor) matrix filled during the updates, requires additional O(n) time per queried pair, or up to O(n^3) total if all pairs are reconstructed; however, this phase is distinct from the core distance computation and often performed selectively after the algorithm finishes. The algorithm's time complexity remains \Theta(n^3) uniformly across best-case, average-case, and worst-case scenarios, as it unconditionally executes all n^3 updates regardless of edge density, weights, or connectivity patterns.

Space Complexity

The Floyd–Warshall algorithm utilizes a distance matrix of size n \times n, where n is the number of vertices, to store and update the shortest path distances between all pairs of vertices, resulting in O(n^2) space complexity. The input is typically represented as an adjacency matrix, which also requires O(n^2) space and is used to initialize the distance matrix with direct edge weights or \infty otherwise. When path reconstruction is required, an additional matrix of size n \times n is employed to track the next vertex on the shortest path for each pair, incurring another O(n^2) space overhead; the overall space complexity nevertheless remains O(n^2), since both matrices are of the same order. The algorithm supports in-place updates to the distance matrix, directly modifying entries during the dynamic programming iterations without needing extra storage for intermediate results, which keeps the space usage efficient. This in-place approach is valid even with negative edge weights, provided no negative cycles exist, as the updates rely on previously finalized subproblems for each intermediate vertex k. For implementations prioritizing input preservation or added safety, temporary auxiliary matrices can be used during updates, though this increases space usage by a constant factor without altering the asymptotic bound. Although effective for dense graphs, the algorithm's reliance on a full n \times n matrix imposes O(n^2) space regardless of the number of edges m, rendering it memory-inefficient for sparse graphs where m \ll n^2 and adjacency lists would suffice for other all-pairs methods. Despite this, the design assumes a matrix-based representation suitable for scenarios where n is modest or the graph is dense. The space lower bound is \Omega(n^2), as the algorithm must produce n^2 shortest path distances as output, necessitating at least that much storage for the results.

Applications and Extensions

Practical Uses

The Floyd–Warshall algorithm is widely applied in network routing to compute all-pairs shortest paths in communication and transportation networks, enabling efficient data packet forwarding. In transportation contexts, it optimizes flight paths by minimizing delays and propagation effects across airline networks modeled as weighted graphs, and supports road map route planning by identifying minimal-distance connections between multiple origins and destinations in urban infrastructures. In logistics, the algorithm facilitates distribution route optimization by determining pairwise transportation costs between distribution centers, allowing for streamlined supply chain planning in Industry 4.0 environments integrated with mapping tools. It also aids facility location problems, where all-pairs distances are precomputed to evaluate site selections that minimize total weighted distances to demand points in network-based settings. Bioinformatics leverages the algorithm for computing transitive similarities in sequence similarity networks derived from multiple sequence alignments, aiding in the visualization of functional trends across protein superfamilies. It is also used in protein structure modeling to estimate inter-residue distance maps via all-pairs shortest paths. Additionally, enhanced variants have been applied to protein–protein interaction networks to identify lung cancer-related genes. In game development, the algorithm enhances pathfinding in dense, static maps of strategy games, precomputing distances to enable rapid query responses for character movements without recomputing paths on the fly. Modern implementations extend its utility in geographic information systems (GIS) software for multi-objective route planning, incorporating environmental factors to generate optimized paths in street networks. For large-scale networks, GPU-accelerated variants address computational bottlenecks, achieving speedups on clusters for graphs with thousands of nodes in network analysis and optimization scenarios. The algorithm also finds use in robotics for path planning in complex environments.

Generalizations

The Floyd–Warshall algorithm can be applied to undirected graphs by representing them as directed graphs with bidirectional edges, resulting in a symmetric distance matrix where the shortest path from vertex i to j equals that from j to i. This symmetry allows for optimizations, such as computing only the upper triangular portion of the matrix to roughly halve the number of operations while maintaining the O(n^3) time complexity. To handle capacity constraints on edges, the algorithm can be modified to compute the maximum-capacity (widest) path between all pairs of vertices, where the capacity of a path is the minimum edge capacity along it and the objective is to maximize this bottleneck value. This variant replaces the minimization and addition operations with maximization and minimum operations, respectively, enabling all-pairs widest paths in O(n^3) time; such paths relate to maximum flow problems in networks, since the maximum flow between two vertices is at least the capacity of the widest path between them. For min-cost flow problems with capacities, the successive shortest path algorithm can incorporate Floyd–Warshall to find all-pairs shortest paths in the residual graph, ensuring capacity-respecting augmenting paths while minimizing total cost, though this is typically combined with vertex potentials for efficiency. Parallel variants extend the algorithm to distributed environments for large-scale graphs. Implementations using MapReduce distribute the triple-loop computations across multiple nodes, where the outer loop over intermediate vertices is partitioned and map-reduce phases handle matrix updates; one such adaptation achieves scalable performance on Hadoop clusters for graphs with millions of vertices, outperforming sequential versions by factors proportional to the number of reducers. These distributed approaches are particularly useful for applications involving very large networks. Quantum generalizations propose theoretical speedups for the all-pairs shortest path problem. A 2005 quantum algorithm adapts Floyd–Warshall by using Grover's search to accelerate the search for optimal intermediate vertices, achieving a reduction from classical O(n^3) to roughly O(n^{2.5}) in the query model, though it remains impractical due to high qubit requirements and error rates on near-term quantum hardware. The algorithm also integrates with Boolean matrix computations for transitive closure in unweighted graphs, where edge weights are treated as 1 (present) or 0 (absent) and the min-plus update is replaced by a logical check for path existence rather than path length, yielding a reachability matrix in O(n^3) time—the formulation originally given as Warshall's algorithm. For dynamic graphs with edge insertions or deletions, a basic handling strategy involves recomputing the entire distance matrix from scratch after each update, which is simple but inefficient at O(n^3) per change; this approach suits low-update-frequency scenarios but is superseded by specialized dynamic APSP algorithms for frequent modifications.
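As a sketch of how the semiring operations are swapped in two of these variants (illustrative code, not a standard library routine), the widest-path version replaces min/+ with max/min, and the transitive-closure version replaces them with OR/AND:
python
def widest_paths(cap):
    """cap[i][j]: capacity of edge i -> j (0 if absent). Returns all-pairs maximum bottleneck capacities."""
    n = len(cap)
    w = [row[:] for row in cap]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # Best bottleneck through k is the smaller of the two legs.
                w[i][j] = max(w[i][j], min(w[i][k], w[k][j]))
    return w

def transitive_closure(adj):
    """adj[i][j]: True if edge i -> j exists. Returns the reachability matrix (Warshall's algorithm)."""
    n = len(adj)
    r = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r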

Implementations

Standard Code

The Floyd–Warshall algorithm is commonly implemented in programming languages using a distance matrix initialized from the graph's adjacency information, where non-edges are set to a large sentinel value representing infinity and the diagonal (each vertex to itself) is set to zero. The core logic employs three nested loops over the vertices (typically with 0-based indexing), updating each pair's distance whenever a shorter path is found via an intermediate vertex. Input graphs are often provided as edge lists, which must be parsed into the matrix by setting direct edge weights and filling the remaining entries with infinity; this ensures compatibility with directed or undirected graphs. Implementations guard against overflow in languages with fixed-size integers by using a sentinel smaller than the maximum representable value (e.g., INT_MAX / 2) for infinity when negative weights are present. In Python, the algorithm can be implemented as a function that takes a 2D list representing the initial distance matrix (with float('inf') for absent edges) and returns the updated shortest-path matrix. Negative weights are supported, but the caller should verify that no negative cycles exist by checking diagonal elements after execution.
python
import copy

def floyd_warshall(dist):
    """
    Computes all-pairs shortest paths using Floyd-Warshall.
    
    Args:
    dist: n x n list of lists, initial distances (float('inf') for no edge, 0 on diagonal).
    
    Returns:
    Updated n x n distance matrix.
    
    Raises:
    ValueError: If negative cycle detected (diagonal < 0).
    """
    n = len(dist)
    # Create a mutable copy if needed
    d = copy.deepcopy(dist)
    
    # Triple loop: k, i, j
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] != float('inf') and d[k][j] != float('inf'):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
    
    # Check for negative cycles
    for i in range(n):
        if d[i][i] < 0:
            raise ValueError("Graph contains a negative-weight cycle")
    
    return d

# Example usage with edge list parsing
def create_distance_matrix(n, edges):
    """
    Parse edge list to distance matrix.
    
    Args:
    n: Number of vertices (0 to n-1).
    edges: List of tuples (u, v, w) where u, v are 0-based indices, w is weight.
    
    Returns:
    n x n distance matrix.
    """
    INF = float('inf')
    dist = [[INF] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for u, v, w in edges:
        # Negative weights are allowed; negative cycles are detected in floyd_warshall.
        dist[u][v] = w
    return dist

# Sample call
n = 4
edges = [(0, 1, 3), (1, 2, 5), (2, 3, -1), (0, 3, 10)]  # Includes negative weight
dist_matrix = create_distance_matrix(n, edges)
shortest_paths = floyd_warshall(dist_matrix)
This Python version uses deep copying to avoid modifying the input and includes basic error handling for negative cycles, which the algorithm detects but does not inherently prevent. A C++ implementation typically uses a 2D vector for the distance matrix, with a constant like INF (e.g., 1e9 for int, to avoid overflow with negatives) for infinity. The code assumes 0-based indexing and processes an adjacency matrix directly; for edge lists, a similar initialization loop is added. Negative weights require careful infinity selection to prevent overflow during addition.
cpp
#include <bits/stdc++.h>
using namespace std;

const int INF = 1e9;  // Sentinel for infinity (adjust the type and value for long long if needed)

void floyd_warshall(vector<vector<int>>& dist, int n) {
    // Triple loop: k, i, j
    for (int k = 0; k < n; ++k) {
        for (int i = 0; i < n; ++i) {
            for (int j = 0; j < n; ++j) {
                if (dist[i][k] < INF && dist[k][j] < INF) {
                    if (dist[i][k] + dist[k][j] < dist[i][j]) {
                        dist[i][j] = dist[i][k] + dist[k][j];
                    }
                }
            }
        }
    }
    
    // Check for negative cycles
    for (int i = 0; i < n; ++i) {
        if (dist[i][i] < 0) {
            // Handle error, e.g., throw or set flag
            cerr << "Negative cycle detected" << endl;
        }
    }
}

// Example initialization from edge list
vector<vector<int>> create_distance_matrix(int n, vector<tuple<int, int, int>> edges) {
    vector<vector<int>> dist(n, vector<int>(n, INF));
    for (int i = 0; i < n; ++i) {
        dist[i][i] = 0;
    }
    for (auto [u, v, w] : edges) {
        dist[u][v] = w;  // Overwrite if multiple edges
    }
    return dist;
}

int main() {
    int n = 4;
    vector<tuple<int, int, int>> edges = {{0, 1, 3}, {1, 2, 5}, {2, 3, -1}, {0, 3, 10}};
    auto dist = create_distance_matrix(n, edges);
    floyd_warshall(dist, n);
    // Output dist matrix
    return 0;
}
This C++ snippet uses <bits/stdc++.h> for convenience in educational contexts and bounds checks for infinity to support negative weights safely. In Java, the implementation leverages 2D arrays or lists with Double.POSITIVE_INFINITY for absent edges, assuming 0-based indexing. The code below uses arrays for efficiency and includes negative cycle detection; edge list parsing sets weights directly while handling potential negatives.
java
import java.util.Arrays;

public class FloydWarshall {
    private static final double INF = Double.POSITIVE_INFINITY;
    
    public static double[][] floydWarshall(double[][] dist) {
        int n = dist.length;
        double[][] d = new double[n][n];
        for (int i = 0; i < n; i++) {
            System.arraycopy(dist[i], 0, d[i], 0, n);
        }
        
        // Triple loop: k, i, j
        for (int k = 0; k < n; k++) {
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < n; j++) {
                    if (d[i][k] != INF && d[k][j] != INF) {
                        if (d[i][k] + d[k][j] < d[i][j]) {
                            d[i][j] = d[i][k] + d[k][j];
                        }
                    }
                }
            }
        }
        
        // Check for negative cycles
        for (int i = 0; i < n; i++) {
            if (d[i][i] < 0) {
                throw new IllegalArgumentException("Graph contains a negative-weight cycle");
            }
        }
        
        return d;
    }
    
    // Example initialization from edge list
    public static double[][] createDistanceMatrix(int n, int[][] edges) {
        double[][] dist = new double[n][n];
        for (double[] row : dist) {
            Arrays.fill(row, INF);
        }
        for (int i = 0; i < n; i++) {
            dist[i][i] = 0;
        }
        for (int[] edge : edges) {
            int u = edge[0], v = edge[1];
            double w = edge[2];
            dist[u][v] = w;
        }
        return dist;
    }
    
    public static void main(String[] args) {
        int n = 4;
        int[][] edges = {{0, 1, 3}, {1, 2, 5}, {2, 3, -1}, {0, 3, 10}};
        double[][] distMatrix = createDistanceMatrix(n, edges);
        double[][] shortestPaths = floydWarshall(distMatrix);
        // Print or process shortestPaths
    }
}
This Java version uses deep copying via System.arraycopy and double precision to accommodate negative weights without overflow issues.

Optimized Variants

One notable optimization of the Floyd–Warshall algorithm for unweighted graphs or Boolean semirings involves using bitsets to represent reachability, where bitwise operations replace arithmetic updates to compute the transitive closure. This approach leverages the processor's word-level parallelism, treating rows or columns of the adjacency matrix as bit vectors and performing AND-OR operations for path relaxations, resulting in a time complexity of O\left(\frac{n^3}{w}\right), where w is the machine word size (typically 32 or 64 bits). The technique is particularly effective for dense graphs in competitive programming and constraint satisfaction problems, offering significant practical speedups over the standard implementation for n \approx 1000. Parallelization efforts have focused on exploiting multi-core CPUs and GPUs to distribute the triple-loop structure. On multi-core systems, OpenMP directives can parallelize the loops over i and j within each iteration of the intermediate vertex k (the k iterations themselves must remain sequential), with careful scheduling to minimize load imbalance and race conditions during in-place updates; experimental results show speedups up to around 8x on 8-core processors for large graphs. For GPUs, CUDA implementations decompose the matrix into blocks and use shared memory to reduce global memory accesses, enabling massive thread parallelism; such implementations achieve substantial speedups over single-threaded CPU versions on NVIDIA architectures, for example up to 50x for n = 512 on older hardware such as Fermi. To address memory hierarchy constraints for large n, block-wise or tiled computations partition the distance matrix into smaller blocks that fit within cache levels (e.g., L1 or L2), processing updates in a wavefront manner to maximize data reuse and minimize cache misses. Such blocking can improve performance by 2-4x on modern processors for n > 1000, where the standard algorithm suffers from poor locality in the inner loops. Pruning techniques further enhance efficiency by skipping relaxation steps when dist[i][k] = \infty or dist[k][j] = \infty, avoiding unnecessary min operations in sparse or disconnected graphs; this simple conditional check can reduce computations by up to 50% in networks with many unreachable pairs, as verified in optimized implementations. Recent developments incorporate vectorized SIMD instructions, such as Intel's AVX-512, to process multiple distance updates simultaneously within the inner loops, using masked min-add operations on 512-bit vectors. Autotuned implementations combining blocking with SIMD achieve 3-5x speedups over baseline scalar code on AVX-enabled CPUs for dense graphs. Parallel variants for large-scale graphs, such as those built on distributed frameworks running on CPU clusters, can reduce runtime from days to hours for graphs with hundreds of thousands of vertices, facilitating computations in applications such as network analysis.
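A minimal Python sketch of the bitset idea described above (using arbitrary-precision integers as stand-ins for fixed-width machine words) packs each row of the reachability matrix into one integer so that a single bitwise OR updates up to w entries at once:
python
def transitive_closure_bitset(adj_rows):
    """adj_rows[i]: integer whose j-th bit is 1 when there is an edge i -> j.
    Returns a list of integers encoding the reachability matrix."""
    n = len(adj_rows)
    reach = list(adj_rows)
    for k in range(n):
        row_k = reach[k]
        k_bit = 1 << k
        for i in range(n):
            # If i reaches k, then i also reaches everything k reaches: one word-parallel OR.
            if reach[i] & k_bit:
                reach[i] |= row_k
    return reach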

Comparisons

With Single-Source Algorithms

The Floyd–Warshall algorithm addresses the all-pairs shortest path problem directly in O(n^3) time, where n is the number of vertices, whereas single-source algorithms such as Dijkstra's and Bellman–Ford can be adapted to the same purpose by executing them once per source vertex. For graphs with non-negative edge weights, repeating Dijkstra's algorithm n times achieves all-pairs shortest paths in O(n(E + n)\log n) time using a binary heap or O(n(E + n \log n)) with a Fibonacci heap, making it preferable to Floyd–Warshall for sparse graphs where E \ll n^2. In contrast, for graphs allowing negative edge weights (without negative cycles), repeating Bellman–Ford n times yields O(n^2 E) time, which performs worse than Floyd–Warshall's O(n^3) on dense graphs but may compete on very sparse ones. Floyd–Warshall is typically chosen over these repeated single-source approaches for dense graphs or scenarios requiring all-pairs distances upfront, as it avoids multiple invocations and inherently supports negative weights without graph reweighting. A key trade-off is Floyd–Warshall's straightforward handling of negative weights compared with the modifications Dijkstra's algorithm would require, though its cubic time complexity restricts practicality to graphs with at most a few thousand vertices on standard hardware.

With Other All-Pairs Methods

The Floyd–Warshall algorithm, with its O(n^3) time complexity, is often compared to Johnson's algorithm for all-pairs shortest paths (APSP) in directed graphs with possible negative weights but no negative cycles. Johnson's algorithm combines a single Bellman–Ford run to reweight edges non-negatively, followed by n invocations of Dijkstra's algorithm using Fibonacci heaps, achieving O(nm + n^2 \log n) time, where m is the number of edges. This makes Johnson's algorithm superior for sparse graphs where m \ll n^2, since the dependency on m avoids the cubic overhead of Floyd–Warshall, though it is more complex to implement. For graphs whose weights are already non-negative, repeated application of Dijkstra's algorithm from each source vertex provides another APSP alternative, with time complexity O(n(m + n \log n)) using Fibonacci heaps. This approach is efficient for sparse non-negative graphs, similar to Johnson's but without the initial Bellman–Ford phase, and outperforms Floyd–Warshall when m is small relative to n^2; however, it cannot handle negative weights directly, limiting its applicability compared to Floyd–Warshall. Algebraic methods relate APSP to fast matrix multiplication and yield theoretically faster runtimes than Floyd–Warshall's O(n^3). Fredman's 1976 decision-tree argument and later refinements give mildly subcubic algorithms for general real weights, while for graphs with small integer weights, approaches based on fast matrix multiplication, such as Zwick's bridging-sets algorithm, achieve runtimes approaching O(n^\omega), where \omega \approx 2.37 is the current matrix multiplication exponent. These methods operate over algebraic structures such as the min-plus semiring and are largely impractical due to high constant factors. Floyd–Warshall excels in scenarios prioritizing simplicity and correctness over asymptotic speed, particularly for dense graphs (m \approx n^2) where its O(n^3) time matches the scale of the input, and for graphs with negative weights, as it considers all intermediate vertices directly without reweighting. Its in-place matrix updates also make it space-efficient at O(n^2), ideal for moderate-sized problems. For very large graphs, approximate APSP methods such as graph spanners offer trade-offs, constructing sparse subgraphs that (1+\epsilon)-approximate distances in near-linear time, though they sacrifice exactness unlike Floyd–Warshall. Empirical benchmarks on modern hardware demonstrate Floyd–Warshall's practicality for graphs up to roughly n = 1000, with optimized CPU implementations completing in seconds and GPU-accelerated versions achieving sub-second runtimes for dense random graphs, remaining competitive before memory and parallelism limits favor distributed alternatives.
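The reweighting step at the heart of Johnson's algorithm can be stated compactly. With a potential h(v) computed for every vertex v (the Bellman–Ford distances from an auxiliary source joined to all vertices by zero-weight edges), each edge is reweighted as

w'(u, v) = w(u, v) + h(u) - h(v) \ge 0,

and for any path p from s to t,

w'(p) = w(p) + h(s) - h(t).

Every s-to-t path is therefore shifted by the same constant h(s) - h(t), so the reweighting preserves which paths are shortest while making all edge weights non-negative for the subsequent Dijkstra runs.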

References

  1. [1]
    Floyd–Warshall Algorithm as Memoization - CS@Cornell
    The Floyd–Warshall algorithm is a classic application of dynamic programming. It finds the shortest path between all pairs of nodes in a directed graph in O ( ...
  2. [2]
    FloydWarshall (Algorithms 4/e)
    The FloydWarshall class represents a data type for solving the all-pairs shortest paths problem in edge-weighted digraphs with no negative cycles.
  3. [3]
    A Theorem on Boolean Matrices | Journal of the ACM
    A Theorem on Boolean Matrices. Author: Stephen Warshall, Computer Associates.
  4. [4]
    Algorithm 97: Shortest path | Communications of the ACM
    Algorithm 97: Shortest path. Author: Robert W. Floyd, Armour Research Foundation, Chicago, IL.
  5. [5]
    [PDF] Lecture 15: The Floyd-Warshall Algorithm
    Step 1: The Floyd-Warshall Decomposition. Definition: The vertices are called the intermediate vertices of the path. Let be the length of the shortest ...
  6. [6]
    CS494 Lecture Notes - The Floyd-Warshall Algorithm - UTK-EECS
    You are going to manage a matrix of shortest paths. Call it SP, and SP[i][j], at the end of the algorithm, will contain the shortest path from node i to node j.
  7. [7]
    [PDF] Floyd-Warshall Algorithm Pseudocode
    Floyd-Warshall Algorithm Pseudocode. Floyd-Warshall(W) n = W.rows. D(0) = W for k = 1 to n let D(k) = (d. (k) ij ) be a new n × n matrix for i = 1 to n for j ...
  9. [9]
    [PDF] floyd-warshall - algorithm
    An algorithm for finding the shortest path between all pair of nodes in a directed graph. path between source and destination nodes, Floyd-Warshall takes it a ...
  10. [10]
    [PDF] Lecture 11 All-Pairs Shortest Paths and the Floyd-Warshall Algorithm
    The algorithm can be adapted for use in a number of related applications as well. Transitive Closure: (This was the application Warshall was interested in).
  12. [12]
    [PDF] Chapter 16 Shortest Paths
    Given a weighted graph G = (V,E,w) and a source vertex s, the single-source shortest path (SSSP) problem is to find a shortest weighted path from s to every ...
  13. [13]
    [PDF] Computing the Shortest Path: A Search Meets Graph Theory
    In this paper we study one of the most common variants of the problem, where the goal is to find a point-to-point shortest path in a weighted, directed graph. ...
  14. [14]
    Dijkstra's Shortest Path Algorithm
    The single-source shortest path problem, in which one needs to find shortest paths from a source vertex v to all other vertices in the graph. Interestingly, ...
  15. [15]
    [PDF] All-Pairs Shortest Paths
    All-pairs shortest paths finds the shortest path from every source to every destination, computing the length and second-to-last vertex for each pair.
  16. [16]
    [PDF] Shortest Paths
    vertex s to every other vertex in the graph. This problem is usually solved by finding a shortest path tree rooted at s that contains all the desired shortest.
  17. [17]
    [PDF] CS161 Problem Set #6
    a) (5 points) A negative cycle is a cycle where the sum of the edge weights is negative. Why can't we compute shortest paths in a graph with negative cycles?
  18. [18]
    Shortest Paths in Directed Graphs
    The shortest path problem has many applications. For example, it is used to determine the best way to forward packets in the internet, and by mapping ...
  19. [19]
    [PDF] DP: All Pairs Shortest Paths, The Floyd-Warshall Algorithm
    It is thus natural to ask whether there is a simple algorithm that solves the all pairs shortest paths; that is, the shortest path between any two arbitrary ver ...
  20. [20]
    ICS 311 #19: All-Pairs Shortest Paths
    Apr 9, 2025 · The all-pairs shortest paths problem finds the shortest path and weight for every pair of vertices in a graph, extending single-source shortest ...
  21. [21]
    Floyd-Warshall - finding all shortest paths - CP-Algorithms
    The key idea of the algorithm is to partition the process of finding the shortest path between any two vertices to several incremental phases.
  22. [22]
    THE LIFE AND TIMES OF THE FATHER OF LINEAR PROGRAMMING
    By the end of that summer, George had developed the simplex method for solving such problems. Also, in June of 1947, the Air Force established a major task ...
  23. [23]
    [PDF] maximal flow through a network - lr ford, jr. and dr fulkerson
    MAXIMAL FLOW THROUGH A NETWORK. L. R. FORD, JR. AND D. R. FULKERSON. Introduction. The problem discussed in this paper was formulated by T. Harris as follows ...
  24. [24]
    [PDF] Chapter 5: Floyd's Algorithm
    The purpose of this chapter is to use a relatively easy problem as a means of introducing point-to-point communication among processes.
  25. [25]
    The Pioneering Work of Bernard Roy in Graph Theory - SpringerLink
    A systolic array algorithm for the algebraic path problem (shortest paths; matrix inversion). Computing 34:191–219. Article Google Scholar. Roy, B. (1958) ...
  26. [26]
    [PDF] CS 473: Algorithms① - University of Illinois Urbana-Champaign
    Aug 31, 2023 · Floyd-Warshall Algorithm: Finding the Paths. A natural question is how to compute the paths in addition to the distances. The idea is as ...
  27. [27]
    [PDF] Lecture 12
    What if there are negative cycles? • Just like Bellman-Ford, Floyd-Warshall can detect negative cycles: • Negative cycle ⇔ ∃ v s.t. there is a path from v to v ...
  28. [28]
    [PDF] CSE 5311 Homework 5 Solution
    How can we use the output of the Floyd-Warshall algorithm to detect the pres- ence of a negative-weight cycle? Answer. Here are two ways to detect negative- ...
  29. [29]
    [PDF] The Floyd-Warshall Algorithm on Graphs with Negative Cycles
    Jan 21, 2010 · The Floyd-Warshall algorithm is a simple and widely used algorithm to compute shortest paths between all pairs of vertices in an edge ...
  30. [30]
    What is the significance of negative weight edges in a graph?
    Sep 10, 2013 · Hence, whenever a negative cycle is present, the minimum weight is not defined or is negative infinity, thus Floyd-Warshall cannot work in such ...
  31. [31]
    [PDF] Floyd-Warshall Algorithm (All Pairs Shortest Path Problem)
    We can use a variant of Floyd-Warshall's algorithm for shortest paths to determine the transitive closure of a graph G as follows: FloydWarshall(G). 1) G0 = G.
  32. [32]
    [PDF] CS577: Intro to Algorithms Shortest paths revisited - cs.wisc.edu
    The Floyd-Warshall algorithm can also be used to detect negative cycles – namely by determining whether A[i, i, n] is negative for any node i. Moreover, it can ...Missing: implications | Show results with:implications
  33. [33]
    [PDF] Floyd-Warshall Algorithm For Solving the All-Pairs Shortest Path ...
    Where are the final answers? • How does it handle negative cycles? • Reconstruction is similar to other dynamic programming problems. # Table building.Missing: implications | Show results with:implications
  34. [34]
    [PDF] Shortest Paths with Negative Lengths and DP
    Apr 6, 2021 · Negative cycle detection can be done with one Bellman-Ford invocation. ... Floyd-Warshall Algorithm: Finding the. Paths. Question: Can we find ...
  35. [35]
    [PDF] Applications of SP.pptx - csail
    Theorem. If the constraint graph contains a negative-weight cycle, then the system of differences is unsatisfiable. Theorem. Suppose no negative-weight cycle ...
  36. [36]
    [PDF] Shortest Path Algorithms - Stanford University
    Jun 29, 2015 · Floyd-Warshall Algorithm. 3. Page 4. Floyd-Warshall Algorithm ... ▷ Time complexity depends on the implementation: – Can be O(n2 + m) ...
  37. [37]
    None
    **Summary:**
  38. [38]
    [PDF] Analysis of Algorithms Prerequisites Review 8 ( More Shortest Paths ...
    Running Time = Θ nn3. Space Complexity = Θ nn2. But DD kk depends only on DD kk−1 . Page 62. APSP: Floyd-Warshall's Algorithm. FLOYD-WARSHALL-IN-PLACE ...<|control11|><|separator|>
  39. [39]
    Floyd Warshall Algorithm - GeeksforGeeks
    Jul 23, 2025 · Time Complexity: O(V3), where V is the number of vertices in the graph and we run three nested loops each of size V. Auxiliary Space: O(1) ...Time and Space Complexity of... · Floyd Warshall · Johnson's algorithm for All...
  40. [40]
  41. [41]
    (PDF) Floyd-Warshall Algorithm for Web-Based Route Optimization ...
    Aug 10, 2025 · Transportation network system analysis using Ford-Fulkerson algorithm in Medan city. November 2022 · AIP Conference Proceedings.Missing: flight | Show results with:flight
  42. [42]
    Distribution Route Optimization Using Floyd-Warshall Weighted ...
    In this chapter, a sustainable supply chain optimization tool for determining the shortest routes between all pairs of locations in a path network, ...Missing: facility | Show results with:facility
  43. [43]
    [PDF] The FastMap Pipeline for Facility Location Problems
    Ex- isting APSP algorithms, such as the Floyd-Warshall algorithm, have a poor time complexity, making APSP computations a bottleneck for deploying the heuristic.
  44. [44]
    [PDF] Solving Facility Location Problems via FastMap and Locality ...
    The ILP solver and the PAM algorithms require the pre- computation of the all-pairs shortest-path distances, which can be done via the Floyd-Warshall algorithm.
  45. [45]
    Using Sequence Similarity Networks for Visualization of ...
    We have examined the application of sequence similarity networks for visualizing functional trends across protein superfamilies from the context of sequence ...<|separator|>
  46. [46]
    Hybridized distance- and contact-based hierarchical structure ...
    Predicting the folded and functional 3-dimensional structure of a protein molecule from its amino acid sequence is of central importance to structural biology.
  47. [47]
    [PDF] Pathfinding Architecture Optimizations - Game AI Pro
    All game developers understand that A* is the pathfinding search algorithm of choice, ... The full Roy–Floyd–Warshall data results in very fast pathfinding ...
  48. [48]
    GIS-Based Multi-Objective Routing Approach for Street ... - MDPI
    Jul 21, 2023 · We fed the route variables into a modified Floyd–Warshall algorithm to generate multiple routes. We entered the environmental variables into ...1. Introduction · 1.2. Literature Review · 2.3. Routing Framework
  49. [49]
    [PDF] The Floyd-Warshall Algorithm Re-implemented Using 3D-Tensors ...
    May 19, 2023 · The run-times of the GPU accelerated FW algorithm and R-. Kleene on these heuristically generated graphs are evaluated against each other and to ...
  50. [50]
    [PDF] Scalable All-pairs Shortest Paths for Huge Graphs on Multi-GPU ...
    ABSTRACT. We present an optimized Floyd-Warshall (Floyd-Warshall) algo- rithm that computes the All-pairs shortest path (Apsp) for GPU accelerated clusters.
  51. [51]
    Floyd–Warshall algorithm on undirected graph
    Jun 2, 2014 · Every undirected graph can be represented as directed graph by replacing every edge (i,j) with 2 edges (i,j);(j,i).Can the loops be in any order in the Floyd-Warshall algorithm?Reconstruct graph of N edges from a matrix of shortest pair ...More results from cs.stackexchange.com
  52. [52]
    Optimise Floyd-Warshall for symmetric adjacency matrix
    Jan 10, 2010 · Is there an optimisation that lowers the constant factor of the runtime of Floyd-Warshall, if you are guaranteed to have a symmetric adjacency matrix?Floyd-Warshall algorithm implementation with list of listsWhat is the difference between Floyd-Warshall and matrix ...More results from stackoverflow.com
  53. [53]
    Minimum-cost flow - Algorithms for Competitive Programming
    Feb 24, 2024 · This is called the minimum-cost maximum-flow problem. Both these problems can be solved effectively with the algorithm of successive shortest paths.
  54. [54]
    [PDF] Comparison of MapReduce, MPI and OpenMP for All-Pairs Shortest ...
    This paper presents a comparative works of MapReduce, MPI, and OpenMP for the All-Pairs-Shortest-Path problem. We implemented. Floyd-Warshall algorithm using ...
  55. [55]
    Improved Computing Performance for Floyd-Warshall Algorithm in ...
    Oct 29, 2025 · The idea of this algorithm is to multi Workers to work in parallel by Floyd-Warshall algorithm. There is one Master assigning Map and Reduce.
  56. [56]
  57. [57]
    New Quantum Algorithm Solves Continuous Optimization With ...
    Jul 7, 2025 · A research team in China has developed a quantum search algorithm that extends Grover's quadratic speedup to continuous problems.Missing: Floyd- | Show results with:Floyd-
  58. [58]
    Transitive Closure - SAS Help Center
    The transitive closure of a graph can help efficiently answer questions about reachability. ... Floyd-Warshall algorithm (Cormen, Leiserson, and Rivest 1990).
  59. [59]
    Dynamic all pairs shortest path edge removal
    Dec 16, 2019 · I have already calculated all pairs shortest path with Floyd–Warshall algorithm. Now I want to recalculate APSP with an edge removed. I'm not ...
  60. [60]
    [PDF] Floyd Warshall Algorithm
    The Floyd Warshall algorithm solves the all pairs shortest path ... Here is C++ code. int V[N][N]; int back[N][N]; int main(). { for(int i = 0; i ...
  61. [61]
    Shortest Paths - Algorithms, 4th Edition
    Jan 10, 2025 · It solves the single-source problem in linear time. It handles negative edge weights. It solves related problems, such as finding longest paths.
  62. [62]
    [PDF] Floyd-Warshall's Algorithm - CS 225
    Floyd-Warshall's Algorithm is an alterative to Dijkstra in the presence of negative-weight edges (not negative weight cycles). FloydWarshall(G):. Let d be a adj ...
  63. [63]
    FloydWarshall.java - Algorithms, 4th Edition
    ... implementation uses the Floyd-Warshall algorithm. * The constructor takes &Theta;(<em>V</em><sup>3</sup>) time, * where <em>V</em> is the number of vertices ...
  64. [64]
    Top 10 optimizations 2017- (collectors edition) - Codeforces
    OPTIMIZATION OF FLOYD VARŠAL ALGORITHM TO RUN IN N^2: simply instead of going from 1 to N in third loop we willuse bitset which will visit the remaining nodes ...
  65. [65]
    How can we optimize Floyd Warshall algorithm running time using ...
    Feb 13, 2018 · How can we optimize Floyd Warshall algorithm running time using bitset from O(n^3) to O(n^2)? ... For example, this could be an optimization ...How to run Floyd Warshall's Algorithm for a big road network - QuoraWhy is the order of the loops in Floyd-Warshall algorithm important ...More results from www.quora.com
  66. [66]
    [PDF] Parallelizing the Floyd-Warshall Algorithm on Modern Multicore ...
    Abstract—The well known Floyd-Warshall (FW) algorithm solves the all-pairs shortest path problem on directed graphs. In this work, we.
  67. [67]
    [PDF] A Multi-Stage CUDA Kernel for Floyd-Warshall - arXiv
    Feb 25, 2010 · This optimization will be useful when thread occupancy or the number of blocks as- signed to a multiprocessor is limited by shared memory and.
  68. [68]
    [PDF] Optimizing graph algorithms for improved cache performance
    Sep 1, 2004 · Abstract—In this paper, we develop algorithmic optimizations to improve the cache performance of four fundamental graph algorithms.
  69. [69]
    An Optimized Floyd Algorithm for the Shortest Path Problem
    Aug 6, 2025 · This paper processes the original time series using a coarse-grained method and employs symbolic representation combined with the sliding ...
  70. [70]
    [PDF] OPTIMIZING GENERALIZED FLOYD WARSHALL WITH PROGRAM ...
    The generalized Floyd-Warshall algorithm solves a range of related problems, one of which is the famous all-pairs shortest path problem.Missing: bitset | Show results with:bitset
  71. [71]
    [PDF] Solving All-Pairs Shortest-Paths Problem in Large Graphs Using ...
    Aug 7, 2019 · In our second approach, we adopt the textbook parallel Floyd-. Warshall algorithm based on 2D block decomposition [8]. In this approach, ...
  72. [72]
    How Floyd Warshall algorithm works for N=500 within 1 second?
    Sep 7, 2021 · I found the problem could be solved using Floyd Warshall Algorithm and the time complexity for the algorithm is O(n^3). I am confused because ...Floyd-Warshall algorithm not finding length of shortest paths correctlyFloyd Warshall algorithm with maximum steps allowedMore results from stackoverflow.comMissing: practical | Show results with:practical