Planarity testing
Planarity testing is the algorithmic problem in graph theory of determining whether a given undirected graph admits a planar embedding, meaning its vertices can be represented as points in the plane and its edges as non-intersecting curves connecting those points.[1]
The concept of planarity originates from foundational results in topology and graph theory, with Kazimierz Kuratowski's 1930 theorem providing a complete characterization: a graph is planar if and only if it contains no subgraph that is a subdivision of the complete graph K_5 or the complete bipartite graph K_{3,3}.[1] Early efforts to algorithmically test planarity relied on constructive methods, such as the Auslander-Parter algorithm from 1961, which builds a planar embedding incrementally by adding cycles and achieves a time complexity of O(n^3) for a graph with n vertices.[1]
Breakthroughs in the late 1960s and 1970s led to linear-time algorithms, marking a significant advancement in computational efficiency. The Lempel-Even-Cederbaum (LEC) algorithm, introduced in 1967, uses vertex addition based on st-numberings; originally quadratic, it was later refined to O(n) time using PQ-tree data structures.[1] Independently, in 1974 Hopcroft and Tarjan developed the first explicitly linear-time planarity test, using path addition and depth-first search (DFS) techniques to detect embedding conflicts.[2]
Subsequent refinements have produced simpler and more implementable linear-time algorithms, often employing advanced data structures. The Shih-Hsu algorithm from 1999 utilizes PC-trees—generalizations of PQ-trees—to represent partial embeddings and perform vertex addition in O(n) time, noted for its conceptual simplicity.[3] Building on this, the Boyer-Myrvold algorithm of 2004 introduces edge addition with flexible embedding flips via DFS, achieving O(n) time while providing a robust framework for practical implementations in graph drawing libraries.[4] These methods not only test planarity but also construct a combinatorial embedding, essential for applications in VLSI design, network visualization, and geographic information systems.[1]
Fundamentals
Definition of Planar Graphs
A graph in graph theory is a mathematical structure consisting of a set of vertices connected by edges, where an undirected simple graph has no loops or multiple edges between the same pair of vertices.[5] Vertices represent points or nodes, edges represent connections between them, and adjacency indicates the presence of an edge linking two vertices.[6]
A planar graph is an undirected simple graph that can be embedded in the Euclidean plane such that no two edges intersect except possibly at shared vertices.[7] Such an embedding assigns positions to vertices and paths (straight lines or curves) to edges without crossings, preserving the graph's adjacency relations.[8]
Embeddings can be geometric, where edges are straight-line segments between vertex points, or more general, allowing curved paths as long as they do not cross.[9] Combinatorial embeddings, in contrast, abstractly specify the cyclic order of edges around each vertex without reference to specific positions or shapes in the plane.[10]
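For concreteness, a rotation system can be written down directly; the following hand-constructed example (not produced by any particular library) encodes a planar embedding of K_4:

```python
# A combinatorial embedding of K_4, written as a rotation system: each
# vertex maps to the cyclic (here, clockwise) order of its neighbors.
# In the corresponding drawing, vertex 3 sits inside the triangle 0-1-2.
rotation_system = {
    0: [1, 3, 2],
    1: [2, 3, 0],
    2: [0, 3, 1],
    3: [0, 1, 2],
}
```

Tracing faces from this data, by repeatedly stepping from a half-edge (u, v) to the half-edge (v, w) where w follows u in the cyclic order at v, yields four triangular faces, consistent with Euler's formula for K_4.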
Basic examples of planar graphs include trees, which have no cycles and thus embed without crossings, and cycles C_n, which form simple closed loops.[7] The complete graph K_4, with four vertices all pairwise connected, is planar and can be drawn as a tetrahedron projection.[11] However, K_5, the complete graph on five vertices, is non-planar and cannot be embedded without edge crossings.[11] For connected planar graphs, Euler's formula relates vertices v, edges e, and faces f via v - e + f = 2.[7]
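These examples can be verified mechanically; a minimal sketch using NetworkX's check_planarity function (an implementation discussed later in this article):

```python
import networkx as nx

# Trees and cycles are planar, as is K_4; K_5 is not.
examples = {
    "tree (path on 10 vertices)": nx.path_graph(10),
    "cycle C_6": nx.cycle_graph(6),
    "K_4": nx.complete_graph(4),
    "K_5": nx.complete_graph(5),
}
for name, G in examples.items():
    is_planar, _ = nx.check_planarity(G)
    print(f"{name}: planar = {is_planar}")
# Prints True for the first three graphs and False for K_5.
```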
Importance and Applications
Planarity testing is essential in computational graph theory because it determines whether a graph admits a planar embedding, which avoids edge crossings and simplifies subsequent tasks such as visualization, routing, and optimization in systems where intersections increase complexity or cost.[1] Efficient algorithms for this test enable the construction of such embeddings, facilitating clearer representations and more effective problem-solving in sparse graph structures that bound the number of edges relative to vertices.[1]
Key applications span multiple domains, including very-large-scale integration (VLSI) circuit design, where testing ensures wire layouts minimize crossings to reduce signal interference and fabrication challenges.[12] In printed circuit board (PCB) layouts and network topology mapping, planarity testing supports non-crossing connections that enhance reliability and ease of assembly.[13] Geographic information systems (GIS) leverage it for map drawing, representing regions and borders without overlaps to maintain spatial accuracy and readability.[14] Additionally, in software visualization, such as UML class diagrams, planarity testing aids in generating hierarchical drawings that improve comprehension of system architectures.[15]
The broader impact of planarity testing extends to the study of minor-closed graph classes, as planarity is characterized by the absence of specific forbidden minors (K_5 and K_{3,3}), providing a foundational framework for analyzing related properties like bounded genus or treewidth.[1] It serves as a building block for more complex problems, such as crossing number minimization, where non-planar graphs are approximated by removing or rerouting edges to approach planarity while optimizing edge intersections.[1] Historically, interest in planarity testing surged in the 1960s, driven by electrical engineering needs in circuit layout and by early computational algorithms such as that of Lempel, Even, and Cederbaum in 1967.[1]
Theoretical Foundations
Planarity Criteria
Planarity criteria provide fundamental mathematical conditions that distinguish planar graphs from non-planar ones, offering quantitative constraints based on the structural properties of vertices, edges, and faces in a planar embedding. These criteria, derived primarily from topological invariants, serve as necessary conditions for planarity and underpin many testing algorithms by allowing quick rejection of obviously non-planar graphs.[16]
A cornerstone of these criteria is Euler's formula, which relates the number of vertices V, edges E, and faces F (including the outer infinite face) in a connected plane graph. For any connected plane graph, V - E + F = 2. This holds for any embedding of the graph in the plane without edge crossings. The formula extends to disconnected plane graphs: if the graph has c connected components, then V - E + F = 1 + c, accounting for the additional components by treating each as a separate "island" in the plane, with the outer face shared.[16]
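The formula can be checked concretely by counting the faces of a computed embedding. The following sketch assumes NetworkX's PlanarEmbedding interface, in which each undirected edge is stored as two directed half-edges and traverse_face walks one face boundary:

```python
import networkx as nx

def count_faces(embedding):
    """Count faces of a connected planar embedding: each face is one
    orbit of directed half-edges under face traversal."""
    seen = set()
    faces = 0
    for half_edge in embedding.edges():  # yields both (u, v) and (v, u)
        if half_edge not in seen:
            embedding.traverse_face(*half_edge, mark_half_edges=seen)
            faces += 1
    return faces

G = nx.complete_graph(4)                 # V = 4, E = 6
_, emb = nx.check_planarity(G)
F = count_faces(emb)
print(G.number_of_nodes() - G.number_of_edges() + F)   # 2, per Euler
```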
To derive edge bounds from Euler's formula, consider the handshaking lemma for faces: the sum of the degrees of the faces (where the degree of a face is the number of edge incidences along its boundary, with a bridge counted twice on its single face) equals 2E, since each edge contributes exactly two incidences. In a connected simple plane graph with V ≥ 3, every face has degree at least 3, so 2E ≥ 3F, or F ≤ 2E/3. Substituting F = 2 - V + E from Euler's formula gives 2 - V + E ≤ 2E/3, which rearranges to E ≤ 3V - 6. This bound is tight for maximal planar graphs (triangulations), where all faces are triangles.[16]
For bipartite planar graphs, the bound tightens: a bipartite graph has no odd cycles, so every face has even degree, hence degree at least 4. The face-degree inequality becomes 2E ≥ 4F, or F ≤ E/2. Substituting into Euler's formula yields 2 - V + E ≤ E/2, so E ≤ 2V - 4 for connected bipartite plane graphs with V ≥ 3. This reflects the sparser structure enforced by bipartiteness.[16]
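These two bounds yield quick necessary conditions that reject many non-planar graphs immediately; a minimal sketch (note that passing the bounds does not certify planarity):

```python
def passes_edge_bound(v, e, bipartite=False):
    """Necessary planarity condition for simple connected graphs with
    v >= 3: E <= 3V - 6 in general, E <= 2V - 4 if bipartite.
    Passing the bound does not imply planarity."""
    return e <= (2 * v - 4 if bipartite else 3 * v - 6)

print(passes_edge_bound(5, 10))                 # K_5: 10 > 9 -> False
print(passes_edge_bound(6, 9, bipartite=True))  # K_{3,3}: 9 > 8 -> False
print(passes_edge_bound(4, 6))                  # K_4: 6 <= 6 -> True
```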
Whitney's theorem provides a qualitative criterion related to embeddings: every 3-connected planar graph has a unique plane embedding up to reflection and choice of the outer face, implying that planarity testing for such graphs also determines the embedding uniquely. This uniqueness strengthens the theoretical foundation for criteria by linking structural connectivity to embedding properties.
Forbidden Subgraphs and Characterizations
A key structural characterization of planar graphs is provided by Kuratowski's theorem, which states that a finite undirected graph is planar if and only if it contains no subgraph that is a subdivision of K_5 (the complete graph on five vertices) or K_{3,3} (the complete bipartite graph on two parts of three vertices each).[17] This theorem, published in 1930, reduces planarity testing to detecting these specific forbidden configurations.[17]
A subdivision of a graph H is formed by replacing zero or more edges of H with internally vertex-disjoint paths of length at least two, thereby inserting degree-two vertices along those edges while preserving the degrees of H's original vertices and its connectivity.[18] Thus, the forbidden subgraphs in Kuratowski's theorem are not necessarily isomorphic copies of K_5 or K_{3,3}, but rather graphs topologically equivalent to them via such edge replacements.
An equivalent formulation, known as Wagner's theorem, characterizes planar graphs using the minor relation instead of subdivisions.[19] It states that a finite undirected graph is planar if and only if it contains neither K_5 nor K_{3,3} as a minor.[19] A graph M is a minor of a graph G if M can be obtained from G by a sequence of operations consisting of edge deletions, vertex deletions, and edge contractions (where contracting an edge merges its endpoints into a single vertex, removing any loops or parallel edges created).[18] Wagner's 1937 result shows that the two characterizations coincide, as subdivisions correspond to a restricted form of minors without contractions.[19]
The necessity part of these theorems, namely that graphs containing subdivisions or minors of K_5 or K_{3,3} are non-planar, follows from the edge bounds derived from Euler's formula.[18] For K_5, with 5 vertices and 10 edges, Euler's formula v - e + f = 2 (where f is the number of faces) combined with the face-handshaking inequality 2e ≥ 3f implies e ≤ 3v - 6 = 9, contradicting its 10 edges.[18] Similarly, K_{3,3} with 6 vertices and 9 edges violates the analogous bound for bipartite planar graphs, e ≤ 2v - 4 = 8.[18] Because subdividing edges does not affect planarity and planar graphs are closed under taking minors, any graph containing such a subdivision or minor is itself non-planar. The sufficiency, that absence of such forbidden structures implies planarity, is established constructively: an embedding is built incrementally, using the graph's connectivity properties to resolve potential crossings.[18]
These characterizations highlight that planarity defines a minor-closed family of graphs, meaning if a graph is planar, so are all its minors. The Robertson–Seymour theorem extends this by proving that every minor-closed family of finite graphs has a finite set of forbidden minors, confirming that {K_5, K_{3,3}} fully describes planarity without additional exclusions.
Algorithms
Incremental Construction Methods
Incremental construction methods for planarity testing build a planar embedding of a graph by successively adding structural elements—such as paths, vertices, or edges—while maintaining the planarity of the partial embedding at each step. These approaches originated in the mid-20th century and laid the groundwork for efficient planarity algorithms, focusing on sequential embedding to both test planarity and produce a combinatorial representation of the embedding if the graph is planar. By attempting to embed incrementally and detecting impossibilities through crossing attempts or boundary violations, these methods determine non-planarity when an addition cannot be performed without violations.
The path addition method, introduced by Hopcroft and Tarjan in 1974, constructs the embedding by adding paths between vertices on the boundaries of existing faces, ensuring no crossings are introduced. It begins with an initial cycle forming the outer face and uses depth-first search to decompose the remaining graph into paths that are embedded sequentially. Each path is routed along face boundaries or by splitting an existing face, preserving the planar structure. If a path's endpoints lie on faces positioned so that no non-crossing routing exists, planarity is violated. This method achieves linear time complexity, O(n), where n is the number of vertices.
A high-level pseudocode outline for the path addition process is as follows:
Initialize the embedding with a cycle as the initial face.
While there are unprocessed edges forming paths:
    Identify the endpoints u and v of the next path from the DFS ordering.
    Locate the faces incident to u and v in the current embedding.
    If u and v lie on the same face boundary:
        Embed the path inside that face, subdividing it.
    Else if they lie on adjacent faces:
        Merge or split faces to route the path without crossings.
    Update face boundaries and vertex incidences.
    If routing causes a crossing or an impossible split, report non-planar.
This sequential addition ensures the embedding evolves correctly, with each step verifiable in constant amortized time across all operations.
The vertex addition method, proposed by Lempel, Even, and Cederbaum in 1967, incrementally inserts vertices into an existing planar embedding, placing each new vertex inside a face and connecting it to boundary vertices, thereby splitting the face according to the vertex's degree. It relies on a specific vertex ordering, such as an st-numbering, to ensure that connections respect the current face structure and degree constraints, avoiding crossings by confining new edges to the split face. The original formulation runs in O(n²) time, but subsequent optimizations using data structures like PQ-trees reduced it to O(n).[20]
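The defining property of an st-numbering, that s is numbered first, t last, and every other vertex has both a lower- and a higher-numbered neighbor, can be checked directly; the following verifier is a hypothetical illustration, not part of the cited algorithms:

```python
def is_st_numbering(adj, order, s, t):
    """Check the defining property of an st-numbering: s comes first,
    t comes last, and every intermediate vertex has at least one
    lower-numbered and one higher-numbered neighbor."""
    pos = {v: i for i, v in enumerate(order)}
    if order[0] != s or order[-1] != t:
        return False
    return all(
        any(pos[w] < pos[v] for w in adj[v]) and
        any(pos[w] > pos[v] for w in adj[v])
        for v in order[1:-1]
    )

# The 4-cycle 0-1-2-3-0 admits the st-numbering 0, 1, 2, 3 for s=0, t=3.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(is_st_numbering(adj, [0, 1, 2, 3], s=0, t=3))   # True
```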
In the edge addition method, edges are added between non-adjacent vertices on the same face boundary, with planarity checked by verifying that the new edge does not cross existing ones within the face cycle. The process starts with a sparse initial embedding and builds up by inserting edges one at a time, updating face cycles to reflect the new divisions. This detects non-planarity if an edge would require crossing or if endpoints are not co-facial. An efficient linear-time implementation was developed by Boyer and Myrvold in 2004, emphasizing edge-by-edge updates for simplicity and speed.
The construction sequence method encompasses these techniques by viewing the graph as built through a series of planar-preserving operations (adding paths, vertices, or edges in a predefined order) and tests planarity by simulating the embedding process step by step. If all additions succeed without violations, the graph is planar, and the final embedding is obtained; otherwise, non-planarity is confirmed at the failure point. In their optimized forms, these incremental methods operate in O(n) time, providing both testing and embedding capabilities essential for early computational graph theory.[20]
DFS-Based Methods
DFS-based methods for planarity testing leverage depth-first search (DFS) to explore the graph structure, construct a spanning tree, and analyze edge relationships to determine embeddability in the plane while producing a combinatorial embedding if planar. These approaches build on earlier incremental construction techniques by performing a global DFS traversal to classify edges and compute structural properties like lowpoints, enabling efficient detection of non-planarity through forbidden subdivisions such as K_{3,3} or K_5.[1]
The seminal Hopcroft-Tarjan algorithm, developed in 1974, achieves linear-time planarity testing by using DFS to build a spanning tree and classify edges into tree edges and back edges, the latter called fronds in the authors' terminology.[21] During the DFS traversal, vertices receive depth-first numbers (dfn), and lowpoints are computed to identify the lowest reachable ancestor via back edges from a vertex or its descendants, facilitating the ordering of edges around vertices.[22] The algorithm embeds the graph by testing left-right planarity conditions on branches of the DFS tree, interleaving fronds and subtrees while checking for conflicts that indicate non-planar configurations, such as improperly nested or crossing back edges forming subdivisions of K_{3,3} or K_5.[21] If no such conflicts arise, it outputs a combinatorial embedding represented as a rotation system, specifying the cyclic order of edges incident to each vertex.[1]
A key component in both Hopcroft-Tarjan and subsequent methods is the lowpoint computation, which guides edge placement and non-planarity detection. The lowpoint low[v] of a vertex v is defined as the minimum of its DFS number dfn[v], the lowpoints of its children in the DFS tree, and the DFS numbers of the targets of back edges leaving v (back edges from descendants are accounted for through the children's lowpoints).[22] This can be computed recursively during the DFS post-order traversal, as shown in the following pseudocode:
function DFS(v):
    dfn[v] = low[v] = counter++
    for each neighbor w of v:
        if dfn[w] is undefined:          // tree edge
            parent[w] = v
            DFS(w)
            low[v] = min(low[v], low[w])
        else if w != parent[v]:          // back edge
            low[v] = min(low[v], dfn[w])
This structure allows interleaving of edges around vertices based on lowpoint values to build the embedding without crossings.[1]
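A runnable rendering of this pseudocode, as a Python sketch (production linear-time testers typically use an iterative DFS to avoid recursion-depth limits on large inputs):

```python
def compute_lowpoints(adj, root=0):
    """DFS computing dfn and lowpoint values for a connected undirected
    graph given as an adjacency mapping; mirrors the pseudocode above."""
    dfn, low, parent = {}, {}, {root: None}
    counter = 0

    def dfs(v):
        nonlocal counter
        dfn[v] = low[v] = counter
        counter += 1
        for w in adj[v]:
            if w not in dfn:                 # tree edge: recurse, then
                parent[w] = v                # pull up the child's lowpoint
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w != parent[v]:             # back edge to an ancestor
                low[v] = min(low[v], dfn[w])

    dfs(root)
    return dfn, low

# On the 4-cycle 0-1-2-3-0, the back edge (3, 0) lets every vertex
# reach the root, so all lowpoints are 0.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(compute_lowpoints(adj))
```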
The Boyer-Myrvold algorithm, introduced in 2004, refines the DFS-based approach with a simplified embedding strategy that processes biconnected components separately to avoid numbering conflicts and streamline edge interleaving.[4] It performs DFS numbering and lowpoint computation similarly to Hopcroft-Tarjan but uses a data structure for biconnected components (bicomps) that supports constant-time flips of embeddings, enabling efficient handling of edge additions within components.[1] Non-planarity is detected by attempting to embed paths for back edges; failure to find a conflict-free path reveals a K_{3,3} or K_5 subdivision.[4] Like its predecessor, it produces a rotation system as output if the graph is planar, achieving O(n) time overall through optimized traversal and local adjustments.[1]
These DFS-based methods offer advantages in simplicity relative to earlier techniques and high efficiency, with implementations demonstrating practical performance on large graphs due to their linear time bounds and straightforward recursive structure.[4]
Dynamic and Parallel Algorithms
Dynamic algorithms for planarity testing enable efficient handling of graph modifications, such as insertions and deletions of vertices and edges, while maintaining the planarity status and a corresponding embedding. Early contributions in the late 1980s introduced on-line planarity testing for incremental updates in streaming graphs, supporting edge additions one at a time with planarity checks at each step.[23] These methods use O(n) space and achieve O(log n) amortized time per edge insertion or test, alongside O(log n) worst-case time for vertex insertions.[23] By the 1990s, fully dynamic variants extended this to deletions, with applications in maintaining planar representations under arbitrary sequences of operations.
Such algorithms rely on dynamic embeddings that preserve rotation systems, which encode the cyclic ordering of edges incident to each vertex to represent the embedding combinatorially.[1] Key techniques involve incremental updates through local re-embeddings, where changes propagate only to affected substructures using auxiliary data like SPQR-trees to perform flips—reversals of embedding orientations—in affected biconnected or triconnected components without rebuilding the entire embedding.[24] Modern fully dynamic approaches preprocess the initial graph in O(n) time and handle subsequent edge insertions or deletions in amortized O(log³ n) time, detecting non-planarity in O(1) time via compatibility checks on pending edges.[24]
Examples include the 1990s on-line testing frameworks for streaming scenarios, which process edges sequentially while ensuring the graph remains planar.[1] A 2024 algorithm for dynamic single-source upward planar digraphs supports edge insertions and deletions in O(log² n) worst-case time while maintaining an upward combinatorial embedding, with queries in O(log² n) worst-case time.[25] Limitations persist, as worst-case updates can demand O(n) time for extensive re-embeddings in highly connected components.[1]
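As a point of contrast with these specialized data structures, the on-line interface can be illustrated by a naive baseline that simply re-runs a static test after every insertion; this costs O(n) per update rather than the polylogarithmic amortized bounds above, and uses NetworkX only for the static test:

```python
import networkx as nx

# Naive on-line planarity maintenance: re-test after each insertion and
# reject updates that break planarity. Illustrates the interface only;
# the dynamic algorithms cited above avoid re-running the full test.
G = nx.Graph()
edges = [(0, 1), (1, 2), (2, 0), (0, 3), (1, 3), (2, 3),
         (0, 4), (1, 4), (2, 4), (3, 4)]          # building up K_5
for u, v in edges:
    G.add_edge(u, v)
    is_planar, _ = nx.check_planarity(G)
    if not is_planar:
        print(f"insertion of ({u}, {v}) breaks planarity; rejected")
        G.remove_edge(u, v)
```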
Parallel algorithms accelerate planarity testing by distributing computations across processors, building on sequential methods like Boyer-Myrvold for concurrent execution. A 2024 thesis presents parallel variants of Boyer-Myrvold for multicore systems using a layer-based approach, achieving O(n) total work and O(n) span in the work-span model, with practical speedups of 2.4–2.7x on planar graphs using 16 cores.[26] These use parallel DFS to build spanning trees and resolve embedding conflicts via coordinated merging of sub-embeddings.[26]
Central techniques include parallel lowpoint computation (assigning the smallest discovery time reachable from a subtree) via pointer doubling to simulate upward traversals in logarithmic rounds, enabling concurrent identification of back edges and cycles.[26] Conflict resolution proceeds by parallelizing walk-up and walk-down phases across biconnected components, flipping orientations as needed to embed edges without crossings.[26] However, not all of these algorithms are work-efficient: some variants incur O(n log n) total work due to synchronization overheads, and worst-case parallel time can degrade to O(n) in dense graphs with heavy contention.[26]
Complexity and Implementations
Time and Space Complexity
The development of planarity testing algorithms began with polynomial-time methods in the early 1960s, such as the O(V^3) algorithm by Auslander and Parter in 1961 and the O(VE) approach by Lempel, Even, and Cederbaum in 1967, which amounts to O(V^2) since a planar candidate has at most 3V - 6 edges.[1] These were followed by improvements in the late 1960s and early 1970s, but the breakthrough came in 1974 with the Hopcroft-Tarjan algorithm, which achieved the first linear-time complexity of O(V) using a path-addition scheme based on depth-first search.[21] Subsequent refinements, including those by Booth and Lueker in 1976 using PQ-trees and Boyer and Myrvold in 2004 with a simplified DFS-based method, have maintained this O(V) upper bound, establishing it as the best-known time complexity for exact planarity testing.
Space complexity for planarity testing algorithms is typically O(V): adjacency lists require O(V + E) storage, which is O(V) once the input is restricted to the planar edge bound E ≤ 3V - 6, and embedding representations like rotation systems add another O(V).[1] DFS-based methods, such as Hopcroft-Tarjan and its variants, utilize O(V) stack space for recursion or iteration during traversal and conflict detection.[21]
A fundamental lower bound for exact planarity testing is Ω(V), arising from the need to read the entire graph input in the standard model, as the adjacency representation requires at least linear size.[27] No sublinear-time algorithms exist for exact testing due to this input constraint, though sublinear-time approximations are possible in the property testing framework, where the goal is to distinguish planar graphs from those far from planarity with high probability by querying a small fraction of edges.[28]
Runtime can be influenced by preprocessing steps, such as decomposing the graph into biconnected components via DFS, which adds O(V) time but enables modular testing on each component.[1]
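Because a graph is planar precisely when each of its biconnected components is planar, the decomposition modularizes the test; a sketch using NetworkX's biconnected_components:

```python
import networkx as nx

def planar_by_biconnected_components(G):
    """A graph is planar iff every biconnected component is planar."""
    return all(nx.check_planarity(G.subgraph(nodes))[0]
               for nodes in nx.biconnected_components(G))

# K_4 and K_5 sharing a single cut vertex: the decomposition isolates
# the non-planar K_5 block.
G = nx.complete_graph(4)                                     # on 0..3
G.add_edges_from(nx.relabel_nodes(nx.complete_graph(5),
                                  lambda v: v + 3).edges())  # K_5 on 3..7
print(planar_by_biconnected_components(G))                   # False
```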
| Algorithm Type | Time Complexity | Space Complexity |
|---|---|---|
| Incremental (e.g., vertex-addition) | O(V) | O(V) |
| DFS-based (e.g., Hopcroft-Tarjan) | O(V) | O(V) |
| Dynamic (e.g., on-line insertions) | O(V + U log V) | O(V) |
This table summarizes key categories, where U denotes the number of updates in dynamic settings; all achieve optimal linear bounds for static cases.[23][1]
Practical Implementations and Software
The Boost Graph Library (BGL), a C++ template library for graph algorithms, provides an implementation of the Boyer-Myrvold planarity testing and embedding algorithm, which supports undirected graphs and outputs a combinatorial embedding if the graph is planar.[29] The Open Graph Drawing Framework (OGDF), another C++ library focused on graph drawing, includes implementations of the Booth-Lueker algorithm (using PQ-trees) and Boyer-Myrvold, along with extensions for constrained planarity testing such as cluster-planarity.[30][31] In Python, the NetworkX library offers planarity testing via check_planarity, which is based on the left-right planarity test and returns a planar embedding for planar graphs or, on request, a Kuratowski subgraph counterexample for non-planar ones.[32]
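A minimal usage sketch of the NetworkX interface just described:

```python
import networkx as nx

# Planar input: the second return value is a PlanarEmbedding; get_data()
# exposes the rotation system (clockwise neighbor order per vertex).
ok, embedding = nx.check_planarity(nx.complete_graph(4))
print(ok, embedding.get_data())

# Non-planar input: with counterexample=True, a Kuratowski subgraph
# (a subdivision of K_5 or K_{3,3}) is returned instead.
ok, kuratowski = nx.check_planarity(nx.complete_graph(5),
                                    counterexample=True)
print(ok, sorted(kuratowski.edges()))
```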
Empirical benchmarks on large planar graphs with up to 1,000,000 vertices demonstrate that modern implementations achieve near-linear runtimes in practice, with experimental variants like Haeupler-Tarjan outperforming Boyer-Myrvold by 70-80% on connected and biconnected graphs of density up to 3n-6 edges.[33] A ported Boyer-Myrvold implementation in OGDF runs 35% faster than the library's native version, making it suitable for graphs exceeding 10^5 vertices, though embedding computation adds overhead compared to testing alone.[33] Evaluations from a 2024 thesis confirm that parallel variants of Boyer-Myrvold, implemented in C++ using ParlayLib, scale effectively on multi-core processors, achieving 2.4-2.7x speedups on 16 cores for graphs with 500,000 to 1.3 million vertices and up to 22x on non-planar instances due to early termination.[34]
Practical challenges include efficiently handling disconnected graphs, which algorithms address by testing each connected component separately before combining embeddings, and dense graphs near the planar density bound, where memory for adjacency structures becomes a bottleneck despite O(n + m) time complexity.[1] Output formats often include combinatorial embeddings that can be converted to visual representations, such as SVG for planar drawings in OGDF, facilitating integration with visualization tools for exporting non-crossing layouts.[35]
Recent developments feature experimental parallel C++ implementations tailored for high-performance computing environments, enabling planarity testing on massive graphs up to 90 million vertices with multi-core speedups.[34] OGDF's modular design also supports extensions for advanced applications, such as level-planar layouts in hierarchical graphs.[35]