Graph (abstract data type)
In computer science, a graph is an abstract data type that models pairwise relationships between objects as a collection of vertices (nodes) connected by edges (links).[1] This structure captures connectivity without specifying an underlying implementation, allowing it to represent diverse relational data.[2] Graphs can be undirected, where edges imply bidirectional connections, or directed (digraphs), where edges have a specific orientation from one vertex to another.[1] Edges may also be weighted, assigning numerical values to represent costs, distances, or capacities, or unweighted for simple connectivity.[2]
The Graph ADT typically provides core operations for construction and manipulation, including creating an empty graph, adding or removing vertices and edges, checking vertex existence, and retrieving adjacent vertices or all vertices.[2] Additional operations often support querying edge weights, listing neighbors, and performing traversals such as depth-first search (DFS) or breadth-first search (BFS) to explore the structure.[1] Representations like adjacency lists or matrices enable efficient implementation of these operations, with choices depending on graph density and required queries.[3]
Graphs are fundamental in algorithm design and find applications across domains, including modeling social networks for friend connections, transportation systems for routes, compilers for data flow analysis and register allocation, and biological networks for protein interactions.[3] They enable solving problems like shortest paths, connectivity, and cycle detection, which underpin real-world systems in networking, optimization, and artificial intelligence.[3]
Fundamentals
Definition and Components
In computer science and mathematics, a graph is an abstract data type formally defined as a pair G = (V, E), where V is a finite set of vertices (also called nodes) and E is a finite set of edges, with each edge in E consisting of a pair of vertices from V. In the simple case, edges connect distinct vertices and self-loops are disallowed; more general definitions may permit them.[4] This structure models pairwise relationships between discrete entities without imposing any geometric or ordered arrangement on the elements.
Vertices serve as the fundamental units of the graph, representing entities such as points, objects, or agents in a system. They possess no inherent structure or properties beyond unique identifiers, which are often labels like integers or strings for practical distinction.[5] Edges, in turn, denote connections or associations between these vertices: in undirected graphs, an edge is an unordered pair \{u, v\} indicating a symmetric relation, whereas in directed graphs it is an ordered pair (u, v) to capture directionality, such as from u to v.[4]
The size of a graph is characterized by the cardinalities of its two sets: let n = |V| be the number of vertices and m = |E| the number of edges. For simple undirected graphs—those without self-loops or multiple edges between the same pair of vertices—the edge count satisfies 0 \leq m \leq \frac{n(n-1)}{2}. This definition rests on prerequisite mathematical concepts like sets and binary relations, where the edge set E effectively represents a relation on the vertex set V; for instance, in modeling a social network, vertices could denote individuals and edges their friendships.[4]
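The pair definition and the edge bound can be checked directly; the following Python sketch (vertex labels and variable names are illustrative) represents G = (V, E) with built-in sets:

```python
# Illustrative sketch of G = (V, E) using Python sets; frozenset makes
# each undirected edge an unordered pair, matching the {u, v} notation.
V = {1, 2, 3, 4}
E = {frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})}

n, m = len(V), len(E)
max_edges = n * (n - 1) // 2  # edge bound for a simple undirected graph
assert 0 <= m <= max_edges
print(n, m, max_edges)  # 4 3 6
```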
Graph Variants
Graphs can be classified into various variants based on the nature of their edges and vertices, extending the basic undirected simple graph structure to model diverse relational systems.
Undirected graphs feature edges that connect two vertices symmetrically, such that an edge between vertices u and v implies the same connection in both directions, often denoted as unordered pairs \{u, v\}. In contrast, directed graphs, also known as digraphs, have edges as directed arcs from one vertex to another, represented as ordered pairs (u, v), allowing asymmetric connections where a link from u to v does not necessarily imply the reverse. For example, undirected graphs model bidirectional road networks where travel is possible in either direction between cities, while directed graphs represent one-way web hyperlinks from one page to another. The concept of directed graphs was discussed in the seminal 1936 textbook on graph theory by Dénes König.[6]
Weighted graphs assign a numerical value, or weight, to each edge via a weight function w: E \to \mathbb{R}, typically representing quantities like distances, costs, or capacities. Unweighted graphs are a special case where all edges have an implicit weight of 1, simplifying analyses in scenarios without varying edge attributes.
Simple graphs prohibit self-loops (edges from a vertex to itself) and multiple edges between the same pair of vertices, ensuring a unique connection per pair. Multigraphs relax the second restriction by permitting multiple edges between the same vertices, with a multiplicity recorded for each pair, useful for modeling parallel pathways or repeated relations. Pseudographs relax both restrictions, allowing self-loops in addition to multiple edges.
Hypergraphs generalize graphs by allowing edges, called hyperedges, to connect any number of vertices rather than exactly two, such as E = \{\{u, v, w\}\}, enabling the representation of multi-way interactions beyond pairwise connections. This variant was formally introduced by Claude Berge in his 1973 work on combinatorial structures.[7]
Operations
Insertion and Modification Operations
The insertion and modification operations of the graph abstract data type (ADT) enable the dynamic construction and alteration of graph structures by adding or removing vertices and edges, while preserving the graph's integrity and variant-specific properties. These operations are defined at an abstract level, independent of any specific implementation, and typically include preconditions to ensure validity, such as the existence of vertices or absence of duplicates in simple graphs. In an ideal ADT, operations like adding a vertex are designed to execute in constant time, O(1), though actual performance depends on the underlying representation.[8]
The addVertex(v) operation inserts a new vertex v into the graph, increasing the number of vertices |V| by 1 without affecting the edge set E. It requires that v does not already exist in the graph as a precondition; if violated, an exception such as IllegalArgumentException is thrown. This operation maintains the graph's structure by simply extending the vertex set, with no immediate impact on connectivity or degrees.
Conversely, the removeVertex(v) operation deletes vertex v along with all incident edges connected to it, decreasing |V| by 1 and |E| by the degree of v. It presupposes that v exists in the graph and handles cascading deletions by removing all edges adjacent to v, which may affect the degrees of remaining vertices. If v is absent, an exception is raised to prevent invalid modifications. This ensures the graph remains consistent post-removal, though it may disconnect components if v was a bridge.
The addEdge(u, v) operation appends an edge between existing vertices u and v, increasing |E| by 1, or addEdge(u, v, w) in weighted variants to include a weight w. For simple graphs, it checks for duplicates or self-loops (u ≠ v and no prior edge), throwing an exception if these constraints are breached or if u or v is nonexistent. In directed graphs, the edge is oriented from u to v; in undirected, it is bidirectional.
The removeEdge(u, v) operation eliminates the edge between u and v, reducing |E| by 1 while leaving the vertices intact. It requires that u and v exist and the edge is present, with directionality specified for directed graphs; violations trigger an exception. For undirected graphs, it removes the bidirectional connection, and in multigraphs, it may target a specific instance if multiples exist.
Across all operations, the ADT enforces variant properties, such as prohibiting self-loops or parallel edges in simple graphs, and provides error handling for invalid inputs like nonexistent vertices to uphold data integrity. These mechanisms ensure that modifications align with the graph's abstract specification, supporting reliable use in algorithms.[8]
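As a concrete illustration of these operations and their preconditions, the sketch below implements a simple undirected graph in Python. The method names mirror the operations described above, but the class itself is hypothetical, not a standard library API:

```python
class Graph:
    """Illustrative simple undirected graph ADT backed by a dict of
    neighbor sets; enforces the variant rules described in the text."""

    def __init__(self):
        self._adj = {}  # vertex -> set of adjacent vertices

    def addVertex(self, v):
        if v in self._adj:                     # precondition: no duplicates
            raise ValueError(f"vertex {v} already exists")
        self._adj[v] = set()

    def removeVertex(self, v):
        if v not in self._adj:                 # precondition: v must exist
            raise ValueError(f"vertex {v} does not exist")
        for u in self._adj.pop(v):             # cascade: drop incident edges
            self._adj[u].discard(v)

    def addEdge(self, u, v):
        if u == v:
            raise ValueError("self-loops are forbidden in a simple graph")
        if u not in self._adj or v not in self._adj:
            raise ValueError("both endpoints must exist")
        if v in self._adj[u]:
            raise ValueError("parallel edges are forbidden in a simple graph")
        self._adj[u].add(v)                    # undirected: store both ways
        self._adj[v].add(u)

    def removeEdge(self, u, v):
        if v not in self._adj.get(u, set()):   # precondition: edge exists
            raise ValueError("edge does not exist")
        self._adj[u].discard(v)
        self._adj[v].discard(u)


g = Graph()
for v in (1, 2, 3):
    g.addVertex(v)
g.addEdge(1, 2)
g.addEdge(2, 3)
g.removeVertex(2)   # also removes edges {1,2} and {2,3}
```

Note how removeVertex performs the cascading deletion described above: the vertex's neighbor set is popped, then the vertex is discarded from each former neighbor's set.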
Inspection and Query Operations
Inspection and query operations form the core of read-only access to a graph's structure, enabling examination of vertices, edges, and their relationships without any modifications. These operations are fundamental to the graph abstract data type (ADT), supporting tasks such as verifying connectivity, assessing node centrality, and iterating over components for higher-level analysis. They assume the graph has been constructed through prior insertion operations but do not depend on or invoke them.
A primary query is the adjacency check, implemented as the operation isAdjacent(u, v), which tests for the presence of an edge between vertices u and v. In an undirected graph, this returns true if \{u, v\} \in E; in a directed graph, if (u, v) \in E. For weighted graphs, the operation may extend to return the associated weight rather than a boolean value, facilitating efficient edge property retrieval. This check is crucial for local relationship verification and is typically optimized to run in constant time in adjacency-matrix representations.[9]
Degree queries provide measures of a vertex's connectivity. The operation degree(v) returns the number of edges incident to vertex v, computed as |\{u : \{u, v\} \in E\}| in undirected graphs. Directed graphs distinguish incoming and outgoing connections with inDegree(v) and outDegree(v), returning |\{(u, v) \in E\}| and |\{(v, u) \in E\}|, respectively. These queries quantify local density and are foundational for algorithms assessing network properties like centrality.[9]
Existence checks ensure valid elements within the graph. The containsVertex(v) operation returns true if v \in V, verifying membership in the vertex set. Similarly, containsEdge(u, v) returns true if an edge exists between u and v, aligning with the adjacency check but focused solely on presence. These boolean operations support error handling and precondition validation in graph processing.[10]
Iterator-based operations enable traversal of related elements. The getNeighbors(v) method yields an iterator over all vertices adjacent to v, listing neighbors without exposing the underlying representation. Complementarily, getEdges() provides an iterator over the entire edge set E, allowing comprehensive enumeration. These iterators promote abstract access, decoupling clients from storage details while supporting efficient sequential queries.[9]
Size queries offer global metrics on the graph's scale. The numVertices() operation returns |V|, the total number of vertices, while numEdges() returns |E|, the total number of edges. In directed graphs, |E| counts ordered pairs, potentially doubling undirected equivalents. These counts are inexpensive to maintain and provide essential context for complexity analysis and resource allocation.[9]
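These read-only queries can be sketched over a plain adjacency structure; the function names below follow the text, while the dict-of-sets layout is an illustrative assumption:

```python
# Read-only query operations over an adjacency dict (vertex -> set of
# neighbors) for the undirected path graph 1-2-3; names are illustrative.
adj = {1: {2}, 2: {1, 3}, 3: {2}}

def is_adjacent(g, u, v):
    return v in g.get(u, set())

def degree(g, v):
    return len(g[v])

def contains_vertex(g, v):
    return v in g

def get_neighbors(g, v):
    return iter(g[v])          # iterator hides the underlying representation

def num_vertices(g):
    return len(g)

def num_edges(g):
    # each undirected edge is counted once from each endpoint
    return sum(len(nbrs) for nbrs in g.values()) // 2

print(is_adjacent(adj, 1, 2), degree(adj, 2), num_edges(adj))  # True 2 2
```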
Representations
Adjacency-Based Representations
Adjacency-based representations store graphs by maintaining collections of adjacent vertices for each vertex, enabling efficient access to neighbors. These methods are particularly suited for sparse graphs, where the number of edges |E| is much smaller than |V|^2, with |V| denoting the number of vertices.[11]
The adjacency list is a fundamental adjacency-based structure, consisting of an array where each index i corresponds to vertex i and points to a list (such as a linked list or dynamic array) of its neighboring vertices.[12] This representation uses O(|V| + |E|) space, as it allocates space for all vertices plus one entry per edge endpoint.[13] Adding an edge between vertices u and v takes O(1) time by appending v to the list of u (and vice versa for undirected graphs).[14] Checking if two vertices are adjacent requires scanning the list of one vertex, yielding O(\deg(v)) time in the worst case, where \deg(v) is the degree of vertex v.[15]
An adjacency set variant replaces the lists with hash sets (or other set data structures) for each vertex, allowing average-case O(1) time for adjacency checks via hash lookups.[16] This improves query efficiency over plain lists, especially for frequent neighbor checks, but incurs higher space overhead due to hash table constants and may complicate ordered neighbor iteration.[17]
A simpler, related representation is the edge list, which stores the graph as a single list of edge pairs (e.g., (u, v) for each edge). This uses O(|E|) space and is straightforward for edge-centric operations but performs poorly for adjacency queries, requiring O(|E|) time to scan for a specific pair.[15]
These representations excel in memory efficiency for sparse graphs, avoiding the O(|V|^2) space of denser alternatives, and facilitate natural iteration over neighbors for algorithms like traversal.[11] For instance, operations such as retrieving the neighbors of a vertex can be realized in linear time relative to the degree.[12]
Consider an undirected graph with vertices V = \{1, 2, 3\} and edges E = \{\{1,2\}, \{2,3\}\}. Its adjacency list would be:
- 1: [2]
- 2: [1, 3]
- 3: [2]
This captures all connections compactly, with each edge appearing twice.[14]
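Building that adjacency list from the edge set takes a single pass over the edges; a minimal Python sketch:

```python
# Construct the adjacency list for V = {1, 2, 3}, E = {{1,2}, {2,3}};
# plain Python, for illustration only.
V = [1, 2, 3]
E = [(1, 2), (2, 3)]

adj = {v: [] for v in V}
for u, v in E:
    adj[u].append(v)  # each undirected edge is stored twice,
    adj[v].append(u)  # once in the list of each endpoint

print(adj)  # {1: [2], 2: [1, 3], 3: [2]}
```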
Matrix-Based Representations
Matrix-based representations of graphs utilize two-dimensional arrays, or matrices, to encode the connections between vertices, making them particularly suitable for dense graphs where the number of edges is comparable to the number of possible pairs of vertices.[19] These representations allow for constant-time access to adjacency information but incur higher space overhead compared to list-based methods, especially for sparse graphs.[20]
The adjacency matrix is the most common matrix-based representation, defined as an n \times n matrix A for a graph with n vertices, where the entry A_{ij} is 1 (or true) if there is an edge from vertex i to vertex j, and 0 (or false) otherwise; for undirected graphs, the matrix is symmetric since edges are bidirectional.[19] In weighted graphs, the matrix entries store the edge weights instead of booleans, enabling representation of graphs with varying connection costs. The space complexity of the adjacency matrix is O(n^2), independent of the number of edges, which is advantageous for dense graphs but inefficient for sparse ones.[20]
Common operations on graphs represented by adjacency matrices exhibit efficient time complexities for queries but can be costly for modifications. Checking adjacency between two vertices takes O(1) time by direct matrix lookup, and adding or removing an edge requires O(1) updates to the corresponding entry.[20] However, removing a vertex demands O(n^2) time to shift rows and columns, as the matrix must be resized and adjusted to maintain the square structure.[12]
Another matrix-based representation is the incidence matrix, an n \times m matrix B for a graph with n vertices and m edges, where rows correspond to vertices and columns to edges; B_{ij} = 1 if vertex i is incident to edge j, and 0 otherwise.[21] For directed graphs, the entry is typically 1 for the target vertex, -1 for the source, and 0 elsewhere, capturing edge orientation.[22] The space complexity is O(|V| \times |E|), which scales with the graph's density and is often larger than the adjacency matrix for dense graphs.[21]
For example, consider an undirected unweighted graph with vertices V = \{1, 2\} and edges E = \{\{1,2\}\}. Its adjacency matrix is:
\begin{bmatrix}
0 & 1 \\
1 & 0
\end{bmatrix}
where the off-diagonal 1s indicate the bidirectional edge, and diagonal 0s reflect no self-loops.[19]
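The same two-vertex example can be built as a nested list in Python; this is an illustrative sketch in which 0-based indices stand for vertices 1 and 2:

```python
# Adjacency matrix for V = {1, 2}, E = {{1,2}}, stored as a nested list;
# index 0 represents vertex 1 and index 1 represents vertex 2.
n = 2
A = [[0] * n for _ in range(n)]

def add_edge(matrix, i, j):
    matrix[i][j] = matrix[j][i] = 1  # undirected: set both symmetric entries

add_edge(A, 0, 1)
print(A)  # [[0, 1], [1, 0]]
```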
Specialized Representations
Compressed Representations
Compressed representations optimize the storage of sparse graphs by encoding only the non-zero entries of the adjacency matrix, making them particularly suitable for graphs where the number of edges |E| is much smaller than |V|^2, with |V| denoting the number of vertices. These formats extend matrix-based representations by eliminating the need to store zeros, thereby reducing memory usage while maintaining efficient access for common operations like traversal. The primary trade-off involves increased complexity in random access and modifications compared to dense formats.[23]
The Compressed Sparse Row (CSR) format is a widely adopted structure for this purpose, utilizing three arrays to represent the graph: a values array storing the edge weights (or 1 for unweighted graphs), a column indices array recording the target vertex for each non-zero entry, and a row pointers array indicating the starting index in the values and column arrays for each row's non-zeros. The row pointers array has length |V| + 1, with the last entry marking the end of the data. This arrangement allows sequential access to a vertex's neighbors in O(degree) time, where degree is the out-degree of the vertex. The overall space complexity is O(|V| + |E|). Inserting an edge into a static CSR structure typically requires rebuilding the arrays at O(|E|) cost, though amortized O(1) insertions are possible in dynamic variants with occasional rebuilds. CSR originated in scientific computing for sparse matrices and has been adapted for graph processing due to its cache efficiency and compatibility with algorithms like breadth-first search.[23][24]
The Compressed Sparse Column (CSC) format mirrors CSR but organizes data in column-major order, storing row indices instead of column indices and using column pointers. It serves as the transpose of a CSR representation, facilitating efficient column-wise operations such as accessing incoming edges in directed graphs. Like CSR, its space complexity is O(|V| + |E|), and it supports fast sequential access to columns but incurs similar costs for modifications in static implementations. CSC is valuable in scenarios requiring transpose graph traversals, such as computing PageRank iterations.[23][25]
The Coordinate List (COO) format offers a simpler alternative, representing the graph as three parallel arrays or a list of triples: row indices, column indices, and corresponding values for each edge. It requires O(|E|) space and is straightforward to build incrementally, making it ideal for initial graph construction from edge lists. However, queries like retrieving all neighbors of a vertex generally take O(|E|) time without additional indexing, though sorting the lists by row can reduce this to O(degree + log |E|) with binary search. COO lacks the compressed row or column structure of CSR or CSC, resulting in potentially higher memory overhead for unsorted data but greater flexibility for conversions to other formats.[25]
These formats provide significant advantages for sparse graphs, achieving space savings of 10x or more relative to dense adjacency matrices when |E| << |V|^2, as the dense approach demands O(|V|^2) storage regardless of sparsity. For instance, in a graph with |V| = 1000 and |E| = 5000, CSR or CSC uses roughly 6000 entries versus 1,000,000 in a dense matrix, yielding a 167x reduction. The trade-offs include slower random access—often O(log |V|) for sorted structures versus O(1) in dense matrices—and higher overhead for dynamic updates, prioritizing query efficiency over frequent modifications in many applications. As an illustrative example, consider a 3x3 sparse adjacency matrix with non-zeros at (0,1) and (1,2), both with weight 1; the CSR representation consists of values = [1, 1], column_indices = [1, 2], and row_pointers = [0, 1, 2, 2], where row_pointers[0] = 0 starts row 0, row_pointers[1] = 1 starts row 1 (ending row 0), row_pointers[2] = 2 starts row 2 (ending row 1), and row_pointers[3] = 2 ends the data.[24][25]
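The CSR arrays from that 3x3 example, and the O(degree) neighbor scan they support, can be sketched in a few lines of Python (array names follow the text; the helper function is illustrative):

```python
# CSR arrays for the 3x3 matrix with non-zeros at (0,1) and (1,2).
values = [1, 1]
column_indices = [1, 2]
row_pointers = [0, 1, 2, 2]  # length n + 1; last entry ends the data

def neighbors(row):
    """Yield (column, value) pairs of one row in O(degree) time."""
    start, end = row_pointers[row], row_pointers[row + 1]
    for k in range(start, end):
        yield column_indices[k], values[k]

print(list(neighbors(0)), list(neighbors(2)))  # [(1, 1)] []
```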
Parallel and Distributed Representations
In shared memory models, adjacency lists are adapted for multithreaded environments by incorporating thread-safe mechanisms, such as fine-grained locks or lock-free data structures, to enable concurrent reads and writes without deadlocks.[28] For instance, lock-free transactional adjacency lists based on multi-dimensional lists (MDLists) allow atomic updates to neighbor sets, effectively handling race conditions during operations like adding an edge by ensuring linearizable consistency without blocking threads.[29] Concurrent hash maps can also serve as containers for neighbor lists, providing amortized constant-time insertions and lookups while mitigating contention through techniques like epoch-based reclamation.[30]
Distributed memory models partition the graph across multiple nodes to facilitate scalability, often using algorithms like METIS, which employs multilevel coarsening and recursive bisection to achieve balanced vertex cuts with minimal edge connectivity across partitions.[31] Edge cuts or vertex cuts are preferred for irregular graphs, where the former minimizes data movement by assigning endpoints to different nodes, while the latter replicates vertices to reduce inter-node communication.[32] Message-passing interfaces such as MPI enable coordination, with each node maintaining a local subgraph and exchanging boundary data via point-to-point or collective operations, resulting in per-node space complexity of O(|E|/p) for p processors.[33]
Hybrid approaches combine shared memory parallelism within nodes using NUMA-aware allocations—where data placement respects non-uniform memory access latencies by pinning threads to local sockets—with distributed partitioning across clusters for larger-scale processing.[34] Frameworks like Polymer optimize graph traversal by precomputing NUMA-friendly layouts and integrating MPI for inter-node communication, allowing seamless scaling from multicore machines to distributed systems.[35]
Key challenges in these representations include load balancing to ensure even computational distribution, as power-law degree distributions can lead to hotspots; this is addressed by minimizing communication volume through techniques like ghost vertices, which replicate boundary vertices on adjacent nodes to locally resolve cross-partition edges.[36] For example, in social network graphs, sharding by user communities—grouping densely connected subgraphs—reduces inter-partition traffic while maintaining balance.[37]
Performance gains in parallel and distributed representations are bounded by Amdahl's law, where serial fractions such as global synchronization limit overall speedup, but well-partitioned graphs on p processors can achieve near-linear scaling up to O(p) by parallelizing vertex-centric computations locally.[28]
Applications
Graph Traversal Algorithms
Graph traversal algorithms are fundamental methods for systematically visiting vertices in a graph, enabling exploration of its structure using basic operations like checking adjacency. These algorithms form the basis for identifying connected components, constructing spanning trees, and detecting cycles, without optimizing for specific paths or distances. They operate in linear time relative to the graph's size, making them efficient for most practical graphs.
Breadth-first search (BFS) is a queue-based algorithm that traverses the graph in level order, starting from a designated source vertex and exploring all neighbors before moving to the next level. It uses a queue to manage the order of visitation and a visited array to avoid revisiting vertices, ensuring each vertex and edge is processed exactly once, resulting in a time complexity of O(|V| + |E|), where |V| is the number of vertices and |E| is the number of edges. BFS naturally discovers shortest paths in unweighted graphs, as the level-by-level expansion guarantees the minimal number of edges to each reachable vertex, and it can track parents to reconstruct these paths.
The pseudocode for BFS starting from a source vertex s is as follows:
BFS(G, s):
    Create a queue Q
    Create a visited array or set, initialized to false
    Enqueue s into Q
    Mark s as visited
    While Q is not empty:
        Dequeue u from Q
        For each neighbor v of u (using getNeighbors(u)):
            If v is not visited:
                Enqueue v into Q
                Mark v as visited
                Set parent[v] = u  // Optional, for path reconstruction
This implementation relies on inspection operations to retrieve neighbors during the loop. As a byproduct, BFS constructs a breadth-first spanning tree of the connected component containing s, with tree edges corresponding to parent links.
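The pseudocode translates directly to Python; this sketch assumes the graph is given as an adjacency dict and returns the parent links of the breadth-first tree:

```python
from collections import deque

def bfs(adj, s):
    """BFS over an adjacency dict (vertex -> iterable of neighbors).
    Returns the parent links of the breadth-first tree rooted at s."""
    parent = {s: None}        # membership in parent doubles as "visited"
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:      # the getNeighbors(u) step of the pseudocode
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent

adj = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
print(bfs(adj, 1))  # {1: None, 2: 1, 3: 1, 4: 2}
```

Following parent links from any reached vertex back to s reconstructs a shortest path in edge count, as described above.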
Depth-first search (DFS) is a stack-based or recursive algorithm that explores as far as possible along each branch before backtracking, prioritizing depth over breadth. Like BFS, it uses a visited array and processes each vertex and edge once, achieving O(|V| + |E|) time complexity. DFS assigns discovery and finish times to vertices during traversal, which can detect cycles via back edges to ancestors and enable topological sorting in directed acyclic graphs by ordering vertices by decreasing finish times.
The recursive pseudocode for DFS on the entire graph is:
DFS(G):
    Create a visited array, initialized to false
    For each vertex u in G:
        If u is not visited:
            DFS-Visit(G, u)

DFS-Visit(G, u):
    Mark u as visited
    Assign discovery time to u
    For each neighbor v of u:
        If v is not visited:
            Set parent[v] = u
            DFS-Visit(G, v)
        Else if v ≠ parent[u] and v is gray (discovered but not finished):
            Report cycle (back edge)
    Assign finish time to u
This recursive form simulates a stack, with the main loop handling multiple connected components by initiating searches from unvisited vertices. DFS also yields a depth-first spanning forest, where each tree represents a connected component, and its edges classify the graph's structure into tree, back, forward, and cross edges.
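A runnable Python counterpart of the recursive pseudocode, recording discovery and finish times over an adjacency dict (an illustrative sketch that omits the edge-classification bookkeeping):

```python
def dfs(adj):
    """DFS over an adjacency dict (vertex -> list of neighbors).
    Returns (discovery_time, finish_time) dicts, as in the pseudocode."""
    visited, clock = set(), [0]
    disc, fin = {}, {}

    def visit(u):
        visited.add(u)
        disc[u] = clock[0]; clock[0] += 1   # discovery time
        for v in adj[u]:
            if v not in visited:
                visit(v)
        fin[u] = clock[0]; clock[0] += 1    # finish time

    for u in adj:                            # handles multiple components
        if u not in visited:
            visit(u)
    return disc, fin

adj = {1: [2], 2: [1, 3], 3: [2], 4: []}
disc, fin = dfs(adj)
print(disc)  # {1: 0, 2: 1, 3: 2, 4: 6}
```

Vertex 4 forms its own component, so the outer loop starts a second search there, illustrating how DFS yields a depth-first forest.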
Both BFS and DFS can identify connected components in an undirected graph by running the algorithm from each unvisited vertex, partitioning the graph into disjoint subgraphs. The spanning trees produced serve as minimal connected subgraphs without cycles, useful for further analysis. A variant, bidirectional search, extends BFS by simultaneously expanding from source and target vertices until the searches meet, reducing the effective search space for path discovery in large graphs.
Pathfinding and Optimization Algorithms
Pathfinding algorithms in graphs aim to identify the shortest or optimal paths between vertices, often considering edge weights that represent costs such as distance or time. These algorithms extend basic graph traversal by incorporating optimization criteria, making them essential for problems where efficiency and minimal cost are paramount. Unlike unweighted traversals like breadth-first search, which can be viewed as a special case for uniform costs, pathfinding methods handle varying weights to compute precise routes.[38]
Dijkstra's algorithm, introduced by Edsger W. Dijkstra in 1959, computes the shortest paths from a single source vertex to all other vertices in a graph with non-negative edge weights.[38] It employs a greedy strategy using a priority queue to always expand the least-cost path first, relaxing edges by updating distances if a shorter path is found. With a binary heap implementation, the time complexity is O((|V| + |E|) \log |V|), where |V| is the number of vertices and |E| is the number of edges, making it efficient for sparse graphs. This approach ensures optimality by never revisiting settled vertices once their shortest distance is finalized.
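A compact Python sketch of Dijkstra's algorithm using the standard-library heapq module as the priority queue; the adjacency-dict input format is an assumption for illustration:

```python
import heapq

def dijkstra(adj, s):
    """Single-source shortest paths over a weighted adjacency dict
    (vertex -> list of (neighbor, weight) pairs); weights must be
    non-negative for the greedy expansion to be correct."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                       # stale entry; u already settled
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd               # relaxation step
                heapq.heappush(pq, (nd, v))
    return dist

adj = {'a': [('b', 4), ('c', 1)], 'b': [('d', 1)],
       'c': [('b', 2)], 'd': []}
print(dijkstra(adj, 'a'))  # {'a': 0, 'b': 3, 'c': 1, 'd': 4}
```

Lazy deletion (skipping stale queue entries) is used here instead of a decrease-key operation, a common choice when the priority queue is a binary heap.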
A* search, developed by Peter E. Hart, Nils J. Nilsson, and Bertram Raphael in 1968, extends Dijkstra's algorithm with a heuristic function to guide the search more efficiently toward the goal.[39] The evaluation function is defined as f(n) = g(n) + h(n), where g(n) is the exact cost from the start to node n, and h(n) is an admissible heuristic estimate of the cost from n to the goal, ensuring h(n) \leq true cost to maintain optimality. For grid-based environments, such as urban road maps, the Manhattan distance serves as a common admissible heuristic, calculating the sum of horizontal and vertical distances. A* reduces the search space compared to Dijkstra by prioritizing promising paths, achieving completeness and optimality under admissible heuristics.
The Bellman-Ford algorithm, originating from Richard Bellman's work in 1958, addresses graphs with negative edge weights by iteratively relaxing all edges.[40] It performs |V| - 1 relaxation passes over all edges, updating distances if a shorter path is discovered, followed by an additional pass to detect negative-weight cycles that would render shortest paths undefined. The time complexity is O(|V| \cdot |E|), suitable for graphs with few edges but less efficient for dense ones due to the linear iterations. This method's ability to handle negatives makes it foundational for dynamic programming approaches in routing.
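The relaxation passes and the negative-cycle check can be sketched as follows; the edge-triple input format is an illustrative assumption:

```python
def bellman_ford(vertices, edges, s):
    """Bellman-Ford sketch: edges is a list of (u, v, w) triples.
    Returns (dist, has_negative_cycle)."""
    INF = float("inf")
    dist = {v: INF for v in vertices}
    dist[s] = 0
    for _ in range(len(vertices) - 1):   # |V| - 1 relaxation passes
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one extra pass: any further improvement implies a negative cycle
    has_neg = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_neg

edges = [('a', 'b', 4), ('a', 'c', 2), ('c', 'b', -3)]
dist, neg = bellman_ford(['a', 'b', 'c'], edges, 'a')
print(dist, neg)  # {'a': 0, 'b': -1, 'c': 2} False
```

The negative edge (c, b, -3) shortens the path to b below the direct cost of 4, something Dijkstra's greedy strategy cannot handle.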
For all-pairs shortest paths, the Floyd-Warshall algorithm, proposed by Robert W. Floyd in 1962, uses dynamic programming to compute distances between every pair of vertices in O(|V|^3) time.[41] It initializes a distance matrix with direct edge weights (or infinity if absent) and iteratively considers each intermediate vertex k, updating the matrix via:
D_{ij} \leftarrow \min(D_{ij}, D_{ik} + D_{kj})
for all i, j. This cubic complexity suits moderately sized graphs and detects negative cycles by checking diagonal elements post-computation.
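The triple loop over intermediate vertices k can be sketched directly; the weights-dict input format below is an illustrative assumption:

```python
def floyd_warshall(n, weights):
    """All-pairs shortest paths over vertices 0..n-1;
    weights maps (i, j) -> edge weight for existing edges."""
    INF = float("inf")
    # initialize with direct edge weights, infinity if absent, 0 on diagonal
    D = [[0 if i == j else weights.get((i, j), INF) for j in range(n)]
         for i in range(n)]
    for k in range(n):                     # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

D = floyd_warshall(3, {(0, 1): 5, (1, 2): 2, (0, 2): 9})
print(D[0][2])  # 7  (0 -> 1 -> 2 beats the direct edge of weight 9)
```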
These algorithms find widespread application in network routing, where graphs model connections with weights denoting latency or capacity. In GPS navigation systems, road networks are represented as weighted directed graphs, with edge weights based on travel time or distance, and algorithms like Dijkstra or A* compute optimal routes to guide users efficiently.[42] For instance, real-time traffic adjustments integrate dynamic weights to reroute vehicles, enhancing reliability in urban environments.[43]