
Convex hull algorithms

Convex hull algorithms are computational procedures in the field of computational geometry designed to compute the convex hull of a finite set of points in Euclidean space, defined as the smallest convex set containing all the points, which in two dimensions forms the boundary of a convex polygon enclosing the points. These algorithms are fundamental for solving problems in areas such as pattern recognition, collision detection, and geographic information systems, where identifying the minimal enclosing convex shape is essential. The convex hull problem was among the first studied in computational geometry, with early algorithms focusing on efficiency in two dimensions before extensions to higher dimensions.

Notable algorithms include the gift-wrapping algorithm (also known as Jarvis's march), introduced in 1973, which iteratively selects the next hull vertex by finding the point that makes the smallest angle with the previous edge, achieving O(nh) time complexity where n is the number of points and h is the number of hull vertices, making it output-sensitive but potentially slow for large n. In contrast, the Graham scan, proposed in 1972, sorts the points by polar angle around a reference point and then performs a linear scan to build the hull, running in O(n log n) time due to the sorting step; this worst-case bound was proven optimal by Yao in 1981 using a quadratic decision tree model. Other influential approaches include Andrew's monotone chain algorithm, a variant that sorts points by x-coordinate and constructs upper and lower hulls in O(n log n) time, and divide-and-conquer methods like that of Preparata and Hong, which recursively merge hulls of subsets within the same complexity.

For higher dimensions, algorithms such as the beneath-beyond method and Quickhull extend these ideas to 3D and beyond, though the combinatorial complexity of the hull increases significantly; in 3D, randomized incremental constructions achieve O(n log n) expected time. Modern variants, including output-sensitive and parallel implementations, address practical challenges like degeneracy handling and large datasets, underscoring the enduring importance of convex hull computation across theoretical and applied contexts.

Fundamentals

Definition and Properties

The convex hull of a finite set S of points in \mathbb{R}^d is defined as the smallest convex set containing S. Equivalently, it is the intersection of all convex sets that contain S, or the set of all convex combinations of points from S, where a convex combination is a linear combination with nonnegative coefficients summing to 1. By construction, the convex hull inherits the property of convexity: for any two points in the hull, the line segment connecting them lies entirely within the hull. The boundary of the convex hull in two dimensions forms a convex polygon, while in higher dimensions it forms a convex polytope. The vertices of this polytope, known as extreme points, are the points in S that cannot be expressed as convex combinations of other points in S. Supporting hyperplanes play a key role in characterizing the hull: a hyperplane is supporting if the entire hull lies on one side of it and it intersects the hull in at least one point. Carathéodory's theorem states that any point in the convex hull of S can be represented as a convex combination of at most d+1 points from S. Visually, in two dimensions, the convex hull can be imagined as the shape formed by stretching a rubber band around the points in S, snapping tight to the outermost points. In three dimensions, for a set of four non-coplanar points, the convex hull is a tetrahedron, the simplest convex polytope, enclosing the points as its vertices. The convex hull coincides with the convex closure of S, the minimal convex set containing it. In contrast, alpha shapes provide a parameterized generalization, allowing for concavities in the boundary when the parameter \alpha is finite, reducing to the convex hull as \alpha approaches infinity.
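Carathéodory's theorem in the plane (d = 2) says hull membership can always be certified by a convex combination of at most three points. A minimal Python sketch, assuming points as (x, y) tuples in general position (nonzero determinant); the function name is illustrative:
def in_triangle(a, b, c, p):
    # Solve p - a = lb*(b - a) + lc*(c - a) for barycentric coordinates;
    # p = la*a + lb*b + lc*c is a convex combination iff la, lb, lc >= 0.
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    lb = ((p[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (p[1] - a[1])) / det
    lc = ((b[0] - a[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - a[1])) / det
    la = 1.0 - lb - lc
    return min(la, lb, lc) >= 0

# Example: (1, 1) is a convex combination of the three corners below.
print(in_triangle((0, 0), (4, 0), (0, 4), (1, 1)))  # True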

Complexity Measures and Lower Bounds

In the analysis of convex hull algorithms, the primary input size is denoted by n, the number of points in the input set, while the output size is captured by h, the number of vertices (or equivalently, edges) on the resulting convex hull. This distinction is crucial because h can vary significantly—from O(1) in degenerate cases where all points are collinear to O(n) when all points are in convex position—allowing for output-sensitive measures that reflect the intrinsic complexity of the problem instance.

Time complexity for convex hull computation is often expressed in worst-case terms as O(n \log n), which arises from the need to sort or order points angularly or radially around a reference point. Output-sensitive bounds, such as O(n \log h), provide tighter guarantees when h \ll n, emphasizing efficiency for sparse hulls. These notations distinguish between deterministic worst-case performance and expected-time analyses, where randomized algorithms may achieve the same bounds with high probability while avoiding adversarial inputs. A fundamental lower bound for comparison-based convex hull algorithms in the plane is \Omega(n \log n), established in the algebraic decision tree model by reducing the problem to sorting: given n numbers to sort, map them to points (x_i, x_i^2) on a parabola; every mapped point is a hull vertex, and the hull reveals the sorted order with O(n) additional work. This bound holds even for merely identifying hull vertices, without constructing edges. For output-sensitive variants, an \Omega(n \log h) lower bound applies in the same algebraic decision tree model, derived from generalizations of element uniqueness problems and verified via algebraic techniques, confirming the optimality of algorithms achieving this runtime.

Space complexity for standard convex hull algorithms is typically O(n), as the input requires \Theta(n) storage and auxiliary structures like sorted lists or stacks use at most linear additional space. Time-space tradeoffs exist, but optimal implementations balance both at O(n) without exceeding this threshold. Degenerate cases, such as sets with collinear or coplanar points, are handled without inflating asymptotic complexity by adjusting predicates in orientation routines—for instance, using non-strict inequalities (\geq) in cross-product tests to exclude interior collinear points from the hull while preserving O(1) per-test time. This ensures that algorithms like Graham scan maintain their O(n \log n) bound even under degeneracies, treating collinear segments as hull boundaries only at their endpoints.
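The sorting reduction above can be made concrete. A minimal sketch using SciPy's Qhull binding (scipy.spatial.ConvexHull, whose 2-D hull vertices are reported in counterclockwise order); the helper name is illustrative:
import numpy as np
from scipy.spatial import ConvexHull

def sort_via_hull(nums):
    # Lift each number x to (x, x^2) on a parabola: every lifted point is a
    # hull vertex, and walking the hull counterclockwise from the leftmost
    # vertex visits the points in increasing x, i.e., in sorted order.
    pts = np.array([(x, x * x) for x in set(nums)], dtype=float)
    order = list(ConvexHull(pts).vertices)       # CCW order of hull vertices
    i = order.index(int(np.argmin(pts[:, 0])))   # rotate to the leftmost point
    return [pts[j, 0] for j in order[i:] + order[:i]]

print(sort_via_hull([3.0, 1.0, 4.0, 1.5, 9.0]))  # [1.0, 1.5, 3.0, 4.0, 9.0]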

Planar Algorithms

Incremental and Wrapping Methods

The incremental and wrapping methods represent some of the earliest and most intuitive approaches to computing the convex hull of a set of points in the plane, relying on local checks for convexity rather than global ordering. These algorithms build the hull either by iteratively "wrapping" around the points starting from an extreme point or by adding points one by one while maintaining a current hull approximation. While simple to understand and implement, they typically exhibit quadratic or output-dependent time complexities, making them suitable for small to moderate-sized inputs or when the number of hull vertices is small.

The gift wrapping algorithm, also known as Jarvis's march, begins by identifying an initial hull point, such as the leftmost point in the set, and then iteratively selects the next hull vertex by finding the point that forms the smallest counterclockwise angle with the current edge. This process continues around the boundary until returning to the starting point, effectively "wrapping" a rubber band around the points. The time complexity is O(nh), where n is the number of input points and h is the number of vertices on the convex hull, as each hull edge requires scanning up to n points to find the next vertex. To determine the next point, the algorithm uses an angular sweep, often implemented via the orientation test based on the cross product of vectors. For three points p1 = (x1, y1), p2 = (x2, y2), and p3 = (x3, y3), the orientation is computed as the sign of the expression x1(y2 - y3) + x2(y3 - y1) + x3(y1 - y2); a positive value indicates a left turn (counterclockwise), which guides the selection of the next hull point. A runnable Python version of the core loop, with points stored as (x, y) tuples in a list P of distinct points, is as follows:
def orientation(p1, p2, p3):
    # Sign of the cross product (p2 - p1) x (p3 - p1); equals
    # x1(y2 - y3) + x2(y3 - y1) + x3(y1 - y2). Positive = left turn.
    return (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])

def jarvis_march(P):
    n = len(P)
    start = min(range(n), key=lambda k: P[k])  # leftmost point (lowest x, then y)
    hull, i = [], start
    while True:
        hull.append(P[i])
        nxt = (i + 1) % n                      # provisional next vertex
        for j in range(n):
            # If P[nxt] lies left of the ray P[i] -> P[j], then P[j] is more
            # clockwise and becomes the new candidate.
            if j != i and orientation(P[i], P[j], P[nxt]) > 0:
                nxt = j
        i = nxt
        if i == start:                         # wrapped around to the start
            return hull
This method was introduced by R. A. Jarvis in 1973 as a straightforward way to identify the hull without requiring sorting. In contrast, the brute-force incremental algorithm first sorts the points by increasing x-coordinate (and by y-coordinate for ties) to ensure a natural order for addition, then constructs the hull by adding points one at a time while checking and restoring convexity. Naively, for each new point, the algorithm verifies its position relative to all existing hull edges, potentially backtracking to remove non-convex vertices, leading to O(n^3) time in the worst case due to repeated triple checks. An optimized version maintains the current hull as a stack and only checks the last few edges for convexity violations upon insertion, popping vertices until the turn is a left turn; since each point is pushed once and popped at most once, the scan itself takes O(n) time and the O(n \log n) sort dominates the total running time. The same cross-product test is used to check turns at potential violation points: \text{orientation}(p_1, p_2, p_3) = x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2) > 0 for a left turn confirming convexity. This approach assumes points are added in the sorted order to avoid crossing edges. Both methods offer advantages in simplicity and ease of implementation, requiring only basic geometric primitives like the orientation test, and they naturally handle degenerate cases (e.g., collinear points) through symbolic perturbation or careful sign handling in the cross product. For instance, perturbations can slightly adjust coordinates to resolve ties without altering the hull topology. However, gift wrapping's O(nh) worst case and the naive incremental variants exceed the Ω(n log n) lower bound for convex hull computation, limiting practicality for large n.

Sorting-Based Methods

Sorting-based methods for computing the convex hull of a planar point set achieve O(n log n) worst-case time primarily through an initial sorting step, followed by linear-time scans to construct the hull. These algorithms balance preprocessing costs with efficient hull identification, leveraging sorted order to ensure monotonic progress and avoid redundant checks. Unlike incremental approaches that may require quadratic time in the worst case, sorting introduces the logarithmic factor necessary to match known lower bounds for the problem.

The Graham scan, introduced by Ronald Graham in 1972, exemplifies this paradigm. The algorithm begins by selecting the point with the lowest y-coordinate (breaking ties by x-coordinate) as the pivot p_0. All other points are then sorted by increasing polar angle relative to p_0, with ties resolved by increasing distance from p_0 so that points are ordered from nearest to farthest. To compare polar angles without trigonometric functions like atan2, the sorting uses the cross product of vectors formed relative to p_0: for points p_i and p_j, compute (p_i - p_0) × (p_j - p_0); a positive value indicates p_i precedes p_j in counterclockwise order, zero signifies collinearity (resolved by distance), and a negative value reverses the order. This orientation test, equivalent to checking left turns, enables angular sorting in O(n log n) time using standard comparison-based sorts. Following sorting, the scan phase builds the hull using a stack initialized with p_0 and the next two points. For each subsequent point p_k, while the last three points on the stack form a non-left turn (cross product ≤ 0, indicating non-convexity or collinearity), pop the middle point of the triple; then push p_k. For collinear points, the ≤ 0 condition excludes intermediate points, retaining only the farthest to form the strict hull; to include all collinear boundary points, pop only on strict right turns (< 0). The scan completes in O(n) time, yielding the hull vertices in counterclockwise order, so the overall complexity is O(n log n), dominated by sorting.

Andrew's monotone chain algorithm, published in 1979, refines the sorting-based approach by using Cartesian coordinates for simplicity. Points are sorted lexicographically by increasing x-coordinate, then by increasing y-coordinate for ties, in O(n log n) time. The hull is constructed in two linear passes: first the lower hull from left to right, then the upper hull from right to left. A stack maintains candidate hull points; for the lower hull, start with the first two points, and for each subsequent point p, while the stack has at least two points and the turn from the last two stack points to p is not a left turn (cross product ≤ 0), pop the top of the stack, then push p. The upper hull is built the same way over the reversed order, and the full hull combines both chains, dropping each chain's final point to avoid duplicating the endpoints. This method handles collinear points by the ≤ 0 test, excluding intermediates, and relies on the same orientation primitive as Graham scan for turn checks.
Python implementation of the monotone chain (lower and upper hulls):
def cross(o, a, b):
    # z-component of (a - o) x (b - o); positive for a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))              # sort by x, then y; drop duplicates
    if len(pts) <= 2:
        return pts
    lower = []
    for p in pts:                          # lower hull, left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):                # upper hull, right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]         # drop duplicated endpoints
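For comparison with the monotone chain, the scan phase of Graham's algorithm can be sketched directly. A minimal Python version, reusing cross() from the listing above; for brevity it sorts with atan2 rather than the cross-product comparator described earlier, and it assumes distinct points:
from math import atan2

def graham_scan(points):
    p0 = min(points, key=lambda p: (p[1], p[0]))   # lowest y, then lowest x
    def key(p):                                    # polar angle, then distance
        return (atan2(p[1] - p0[1], p[0] - p0[0]),
                (p[0] - p0[0]) ** 2 + (p[1] - p0[1]) ** 2)
    hull = [p0]
    for p in sorted((p for p in points if p != p0), key=key):
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()                             # discard non-left turns
        hull.append(p)
    return hull                                    # CCW, starting at the pivot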
Both scans run in O(n log n) worst-case time regardless of the output size. Quickhull, developed by Barber, Dobkin, and Huhdanpaa in 1996, is a divide-and-conquer relative of these methods, analogous to quicksort, emphasizing practical efficiency. It begins by finding extreme points (minimum and maximum x-coordinates) to form an initial edge. Points are partitioned into those above and below this edge. For each side, the point farthest from the edge (maximizing distance) is selected as a new hull vertex, forming two sub-edges. Recursively, unprocessed points are assigned to outside sets for these sub-edges via orientation tests, trimming interior points by checking visibility against new edges. The trimming procedure merges adjacent non-convex edges post-recursion, using a centroid to verify convexity and report wide facets for potential restarts. Collinear points are handled by selecting the farthest during distance maximization, excluding others unless explicitly required. Expected time is O(n log n) under random input assumptions, due to balanced partitioning akin to quicksort, though the worst case is O(n^2) for degenerate configurations; in practice, it outperforms deterministic methods on large datasets.
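A minimal recursive sketch of the planar case, reusing cross() from the monotone chain listing; it assumes distinct points and omits the facet-merging machinery of the full algorithm:
def quickhull(points):
    def left_of(a, b, pts):
        # points strictly left of the directed line a -> b
        return [p for p in pts if cross(a, b, p) > 0]
    def expand(a, b, pts):
        # hull vertices between a and b on the left side of a -> b
        if not pts:
            return []
        far = max(pts, key=lambda p: cross(a, b, p))   # farthest from line ab
        return (expand(a, far, left_of(a, far, pts)) + [far]
                + expand(far, b, left_of(far, b, pts)))
    a, b = min(points), max(points)                    # extreme-x endpoints
    return ([a] + expand(a, b, left_of(a, b, points))  # hull in clockwise order
            + [b] + expand(b, a, left_of(b, a, points)))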

Output-Sensitive and Optimal Algorithms

Output-sensitive algorithms for computing the convex hull of a set of n points in the plane aim to achieve running times that depend on both the input size n and the output size h, the number of vertices on the hull, particularly when h \ll n. These methods improve upon the standard O(n \log n) bound by leveraging the typically small value of h in practical scenarios, such as scattered point sets where most points are interior. The O(n \log h) time complexity represents a theoretical optimum under common computational models, as it matches established lower bounds for identifying hull vertices.

An early foundation for such approaches is the divide-and-conquer algorithm of Preparata and Hong, which achieves O(n \log n) time in the worst case but lays the groundwork for output-sensitive extensions through its recursive structure. The algorithm divides the point set into two roughly equal subsets, typically by a vertical line at the median x-coordinate; recursively computes the convex hulls of each subset; and merges them by identifying upper and lower bridges—tangent lines connecting the two hulls—using binary search on the hull boundaries in O(\log n) per bridge, or linear scans in O(n). This merge step constructs the final hull by chaining the relevant portions of the sub-hulls, and the balanced recursion tree yields the logarithmic factor. While the original formulation is not output-sensitive, its paradigm of efficient merging via bridges has been adapted in later works to incorporate h-dependence, such as by pruning during recursion based on estimated hull sizes.

The Kirkpatrick–Seidel algorithm, introduced in 1986, realizes the first worst-case O(n \log h) bound using a divide-and-conquer strategy called "marriage-before-conquest," which performs the merge step before recursing. To build the upper hull, it splits the points at the median x-coordinate and computes the bridge (the upper-hull edge crossing the dividing vertical line) directly in O(n) time via prune-and-search over pairs of points, a technique related to linear-time linear programming. All points lying beneath the bridge are discarded before the algorithm recurses on the two sides, so interior points are eliminated early; since the recursion reaches depth O(\log h) and each level performs O(n) work on the surviving points, the total running time is O(n \log h). This approach is especially efficient when h is small.

Building on these ideas, Chan's algorithm from 1996 provides a simpler O(n \log h) method that combines elements of the Jarvis march (gift-wrapping) and Graham scan without relying on randomization. It proceeds in phases by guessing an upper bound h^* on h, starting from a small value and squaring the guess each phase (e.g., h^* = 2^{2^t}) until a phase succeeds, with the total cost dominated by the final phase, where h^* \leq h^2.
In each phase, the algorithm first partitions the n points into k = \lceil n / h^* \rceil groups of size at most h^* and computes the convex hull of each group using Graham's scan in O(h^* \log h^*) time per group, for a total of O(n \log h^*). It then performs a Jarvis march on these k mini-hulls: each wrapping step finds the next hull vertex by computing a tangent to each mini-hull via binary search in O(\log h^*) time and selecting the candidate with the extremal turn angle, so each step costs O(k \log h^*). The march is aborted after h^* steps; if the hull has not closed by then, the guess was too small and the next phase begins with the squared guess, while a successful phase completes in O(h k \log h^*) = O(n \log h^*) time. Summing over phases yields O(n \log h), as earlier failed guesses contribute geometrically smaller terms.

These algorithms are theoretically optimal because any convex hull computation requires \Omega(n \log h) time in models like the algebraic decision tree, as proven by reducing from the problem of sorting h elements or identifying extremal points among n, where fewer comparisons cannot distinguish all hull configurations when h \ll n. For instance, constructing a hull of size h necessitates at least \log \binom{n}{h} decisions in the worst case, leading to the \Omega(n \log h) bound when h is sublinear. Kirkpatrick and Seidel established this matching lower bound, confirming that their algorithm and successors like Chan's achieve the tightest possible performance for output-sensitive planar convex hull computation.
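A compact sketch of the phase structure in Python, reusing convex_hull() and cross() from the monotone chain listing; the function name is illustrative. For brevity the wrapping step scans all mini-hull vertices linearly instead of performing the O(\log h^*) tangent binary searches, which keeps the phase logic visible but not the exact per-phase bound, and it assumes distinct points in general position:
def chan_hull(points):
    n = len(points)
    t = 1
    while True:
        m = min(2 ** (2 ** t), n)                  # current guess h* for h
        minis = [convex_hull(points[i:i + m]) for i in range(0, n, m)]
        cand = [p for h in minis for p in h]       # mini-hull vertices only
        hull = [min(cand)]                         # leftmost point is on the hull
        for _ in range(m):                         # at most h* wrapping steps
            q = next(p for p in cand if p != hull[-1])
            for r in cand:                         # gift-wrap over candidates
                if r != hull[-1] and cross(hull[-1], q, r) < 0:
                    q = r
            if q == hull[0]:
                return hull                        # hull closed: guess succeeded
            hull.append(q)
        t += 1                                     # guess too small: square it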

Heuristics for Large Datasets

Heuristics for large datasets in convex hull computation often involve preprocessing steps to eliminate interior points or approximate the hull, trading worst-case guarantees for practical speed on massive inputs with many interior points. These methods are particularly useful when the number of points n is very large, as in geographic information systems or point-cloud processing, where even theoretically efficient exact algorithms may be too slow in practice. By reducing the effective input size or using randomization, these heuristics achieve near-linear behavior in practice, enabling computation on datasets with millions of points.

The Akl–Toussaint heuristic, introduced in 1978, serves as a simple preprocessing technique to discard points unlikely to lie on the convex hull boundary. It begins by identifying four extreme points: those with minimum and maximum x-coordinates and minimum and maximum y-coordinates. These points form a convex quadrilateral, and any point strictly inside this quadrilateral is eliminated, as it cannot contribute to the hull. This elimination requires O(n) time using four orientation tests per point. In practice, for uniformly distributed points, the heuristic reduces the candidate set to O(√n) points on average, after which a standard O(m log m) hull algorithm (with m ≪ n) can be applied, yielding overall expected time close to O(n). The method is exact in its preservation of the hull but heuristic in its effectiveness, and it performs best when points are not clustered near the boundary.

Randomized incremental construction offers another effective approach for exact convex hulls on large datasets, adding points in random order and maintaining the current hull with auxiliary structures. The algorithm starts with a small initial hull and, for each new point, uses a conflict list or graph to identify the hull edges visible from that point, then updates the hull by removing the visible portion and adding new edges. In 2D, finding the tangent edges takes O(log h) expected time per insertion, where h is the current hull size, leading to an overall expected runtime of O(n log n). This approach, popularized in low-dimensional settings, benefits from backward analysis showing that randomization avoids worst-case degeneracies, making it robust for large n without preprocessing.

Approximation algorithms trade exactness for even greater speed, especially in streaming or high-volume settings, by computing an ε-approximate hull or using coresets to reduce the input size. A coreset is a small subset of k points (k = O(1/ε) in 2D for width approximations) whose convex hull ε-approximates the original in Hausdorff distance, allowing hull computation in O(k log k) time after O(n) preprocessing. For streaming data, where points arrive sequentially under limited memory, algorithms maintain an approximate hull by merging small buffers and discarding points inside approximate layers, achieving O((1/ε) polylog n) space and update time. These methods are crucial for large-scale applications, ensuring scalability while bounding the approximation error.

In practical settings like GIS, these heuristics handle noise and massive datasets by combining preprocessing with robust variants; for instance, the Akl–Toussaint step filters interior points before incremental updates, enabling real-time hulls for spatial queries on terabyte-scale point clouds.
Such adaptations ensure reliability in noisy environments, where exact methods might amplify outliers, and have been integrated into geospatial libraries such as PostGIS and GDAL for large-scale geographic computations.
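A minimal sketch of the Akl–Toussaint filter in Python, reusing cross() and convex_hull() from the monotone chain listing; points on the quadrilateral's boundary are conservatively kept:
def akl_toussaint_filter(points):
    # Quadrilateral of the four axis-extreme points; any point strictly
    # inside it cannot be a hull vertex and is discarded.
    extremes = [min(points), max(points),
                min(points, key=lambda p: p[1]),
                max(points, key=lambda p: p[1])]
    quad = convex_hull(extremes)           # order the (up to 4) extremes CCW
    def strictly_inside(p):
        return all(cross(quad[i], quad[(i + 1) % len(quad)], p) > 0
                   for i in range(len(quad)))
    return [p for p in points if not strictly_inside(p)]

# Typical use: pts = akl_toussaint_filter(pts); hull = convex_hull(pts)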

Higher-Dimensional Algorithms

Generalizations of Planar Techniques

The planar techniques for computing convex hulls extend to higher dimensions by adapting concepts like wrapping, sorting, and incremental addition to handle hyperplanes and polytopes, but these generalizations face increased computational demands due to the growth in hull structure complexity. In three dimensions, the convex hull forms a convex polyhedron, which can be represented using triangular faces for algorithmic simplicity, allowing planar methods to be applied locally to each facet during construction.

The gift wrapping algorithm, first proposed for the plane, generalizes to d dimensions by starting with an initial (d-1)-dimensional facet and iteratively finding the next facet through supporting hyperplanes. Specifically, from a current ridge (a (d-2)-face), the algorithm identifies the point that minimizes the dihedral angle, or equivalently solves for the extreme point in the direction obtained by rotating about the ridge, effectively "wrapping" the hull facet by facet until closure. This requires scanning all remaining points for each new facet, leading to a time complexity of O(nf), where n is the number of points and f is the number of facets on the hull. Since f can reach Θ(n^{⌊d/2⌋}) in the worst case, the overall complexity is O(n^{⌊d/2⌋ + 1}). The method was introduced as the first general algorithm for d-dimensional convex polytopes.

A direct generalization of the Graham scan to higher dimensions—sorting points by their direction from an interior reference point and then building the hull by stacking in that order—fails without significant modification, because angular sorting lacks a natural total order in d > 2, leading to inconsistencies in hull construction and missed facets. Modifications such as Seidel's shelling method address this by using a linear ordering of facets to incrementally build the hull while ensuring boundary consistency, achieving output-sensitive time O(f (log n + log f)) where f is the output size, though the worst case remains tied to the hull's combinatorial size.

Incremental construction methods, common in planar algorithms, extend to 3D by maintaining a current polyhedral hull and adding points one at a time, using orientation tests to identify and remove the facets visible from the new point before connecting it to the horizon, the cycle of ridges bounding the visible region. Each addition involves O(n) work in the worst case for conflict detection and retriangulation, yielding an overall complexity of O(n²) for n points. This approach highlights the shift from edge-based updates in 2D to facet-based updates in higher dimensions.

Key challenges in these generalizations include the output size, which can be as large as Θ(n^{⌊d/2⌋}) facets for the convex hull of n points in d dimensions, necessitating output-sensitive algorithms to avoid inefficiency when the hull is large (e.g., quadratic in 4D). Facet enumeration becomes particularly demanding, as verifying supporting hyperplanes requires linear-programming-like tests per candidate, amplifying the combinatorial explosion beyond the planar case.

Beneath-Beyond and Dual Methods

The beneath-beyond method is an incremental technique for constructing the convex hull of a point set in higher dimensions, starting from an initial simplex formed by d+1 affinely independent points and adding the remaining points one by one in arbitrary order. For each new point p, the algorithm determines whether p lies beneath the current hull (i.e., in its interior or on its boundary) or beyond it (i.e., outside). If beneath, no update is needed; if beyond, the algorithm identifies the facets visible from p—those whose supporting hyperplanes have p strictly on the outer side—and deletes them, then forms new facets by connecting p to the horizon ridges bounding the visible region. This process maintains the complete facial lattice of the hull and handles non-simplicial updates by propagating changes through adjacent faces. The method was first described in an unpublished manuscript by M. Kallay in 1981 and achieves a worst-case complexity of O(n^{\lfloor d/2 \rfloor + 1}), which specializes to O(n^2) in 3D due to the need to scan up to O(n) facets per insertion in the worst case.

The beneath-beyond paradigm originated conceptually in the 1970s through studies of polytope constructions, with algorithmic refinements emerging in the early 1980s to address computational efficiency for point sets. Clarkson and Shor's 1989 work on randomized incremental construction provided a variant that uses random insertion order to achieve expected O(n \log n + n^{\lfloor d/2 \rfloor}) time, improving practical performance by avoiding worst-case degeneracies, though the deterministic version remains O(n^2) in 3D. In 3D specifically, the orientation predicate that classifies a point relative to a facet—above, below, or coplanar—is computed via the sign of a scalar triple product, equivalent to the determinant of the 4×4 matrix of homogeneous coordinates of the facet's three vertices and the test point, generalizing the 2D cross-product test for robust geometric decisions. This determinant-based test extends naturally to higher dimensions, where the orientation of a point with respect to a (d-1)-flat is the sign of the determinant of the (d+1) × (d+1) matrix of homogeneous coordinates, enabling efficient visibility checks without explicit hyperplane equations.

Quickhull adapts the beneath-beyond framework for practical 3D computation by incorporating selection of extreme points and concavity tests to prune interior regions efficiently. It begins with initial facets from extreme points along coordinate axes, then recursively processes the furthest point from each facet's supporting plane, partitioning the remaining points into "inside," "outside," and "coplanar" sets based on signed distances; concavity is addressed post-recursion by merging non-convex edges using a test against the centrum, the average of adjacent vertices. This typically yields O(n \log n) time in 3D, and expected linear time O(n) for uniformly distributed random points in a convex domain, as the recursion depth and partition sizes remain bounded with high probability. Developed by C.B. Barber, D.P. Dobkin, and H.T. Huhdanpaa in 1996, Quickhull's efficiency stems from minimizing full scans via linked facet structures inherited from beneath-beyond.
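The 3-d orientation predicate just described admits a direct implementation as a scalar triple product. A minimal sketch in Python, exact for integer coordinates; with floating-point inputs, robust-predicate techniques are needed in practice:
def orient3d(a, b, c, d):
    # Sign of the scalar triple product (b - a) . ((c - a) x (d - a)):
    # > 0 if d lies on the positive side of the plane through a, b, c,
    # < 0 on the negative side, and 0 if the four points are coplanar.
    ab = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    ac = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    ad = (d[0] - a[0], d[1] - a[1], d[2] - a[2])
    return (ab[0] * (ac[1] * ad[2] - ac[2] * ad[1])
          - ab[1] * (ac[0] * ad[2] - ac[2] * ad[0])
          + ab[2] * (ac[0] * ad[1] - ac[1] * ad[0]))

print(orient3d((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))  # 1 > 0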
Dual-space methods transform the primal convex hull problem into an equivalent envelope computation under point-hyperplane duality: each point p = (p_1, \dots, p_d) maps to the hyperplane h_p: x_d = p_1 x_1 + \dots + p_{d-1} x_{d-1} - p_d, and under this map the lower envelope of the hyperplanes corresponds to the upper convex hull of the points (and the upper envelope to the lower hull). The vertices of the envelope are intersection points of d hyperplanes, dual to the facets of the primal hull, and are obtained by solving small linear systems from the arrangement; edges and higher faces follow analogously. This duality, formalized in computational geometry in the early 1980s, facilitates output-sensitive algorithms by exploiting envelope properties, such as the O(n^{\lfloor d/2 \rfloor}) combinatorial complexity of the envelope of n hyperplanes. It also links directly to the Delaunay triangulation via the paraboloid lifting transform: lifting each point p \in \mathbb{R}^d to (p, \lVert p \rVert^2) \in \mathbb{R}^{d+1} makes the Delaunay triangulation of the original points appear as the lower convex hull of the lifted points.
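A minimal illustration of the lifting connection in the planar case, using SciPy's Qhull binding; the function name is illustrative:
import numpy as np
from scipy.spatial import ConvexHull

def delaunay_via_lifting(pts2d):
    # Lift each (x, y) to (x, y, x^2 + y^2) on the paraboloid; the
    # downward-facing facets of the 3-d hull project to Delaunay triangles.
    pts = np.asarray(pts2d, dtype=float)
    lifted = np.c_[pts, (pts ** 2).sum(axis=1)]
    hull = ConvexHull(lifted)
    # A facet faces downward iff the z-component of its outward normal is < 0.
    return hull.simplices[hull.equations[:, 2] < 0]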

Optimal Algorithms in Fixed Dimensions

In fixed dimensions d \geq 3, optimal convex hull algorithms achieve worst-case time O(n \log n + n^{\lfloor d/2 \rfloor}), matching lower bounds derived from the maximum output size and comparison-based sorting requirements. These algorithms represent the theoretical pinnacle for deterministic and randomized constructions, though their practical adoption is limited by structural complexity. Bernard Chazelle's 1993 algorithm computes the convex hull of n points in \mathbb{R}^d (with d fixed) in deterministic O(n \log n + n^{\lfloor d/2 \rfloor}) time, using a hierarchical decomposition into simplicial partitions together with conflict graphs to efficiently resolve point-facet incidences. The approach begins with a coarse partitioning of the point set, recursively refining it while maintaining a structure that tracks potential hull contributors, ensuring balanced computation across levels. This method is asymptotically optimal and uses prior techniques like beneath-beyond as subroutines for local hull updates.

Complementing Chazelle's deterministic solution, the randomized incremental construction of Kenneth Clarkson and Peter Shor achieves expected O(n \log n + n^{\lfloor d/2 \rfloor}) time in fixed d through backward analysis, which evaluates the probability of structural change at each step of a random insertion order. By incrementally adding points and using random sampling to bound the expected number of conflicts, the expected cost per insertion remains small, yielding overall optimality in expectation. This approach avoids worst-case pitfalls by relying on permutation randomness rather than assumptions about the input.

The lower bound of \Omega(n \log n + n^{\lfloor d/2 \rfloor}) stems from two sources: the convex hull can have \Theta(n^{\lfloor d/2 \rfloor}) facets in the worst case, necessitating at least linear time in that quantity for output, and an \Omega(n \log n) term follows from reductions to sorting or element uniqueness. These bounds, tight for both even and odd d, confirm the optimality of Chazelle's and Clarkson–Shor's algorithms.

Despite their elegance, implementing these optimal algorithms faces significant challenges, including high constant factors from recursive decompositions and the need for exact arithmetic in geometric predicates (e.g., orientation tests) to ensure robustness against floating-point errors. Chazelle's method, in particular, involves intricate structures like multilevel conflict graphs, rendering it primarily of theoretical interest with no widespread practical implementations. Post-2000 developments have brought no major breakthroughs for general fixed d, but refinements for d = 3 include optimal in-place algorithms that compute the hull in O(n \log n) expected time without auxiliary space, leveraging randomized incremental techniques with careful memory management.

Special Cases and Variants

Online and Dynamic Maintenance

Online convex hull algorithms process points arriving sequentially, updating the convex hull after each insertion while maintaining efficiency. A seminal approach achieves amortized O(log n) time per insertion, utilizing layered structures to manage hull boundaries and support rapid tangent finding and point integration. This method builds on static output-sensitive techniques by incrementally refining the hull structure, ensuring asymptotic optimality for sequential inputs.

Fully dynamic maintenance extends this to support both insertions and deletions, preserving the hull under arbitrary updates. Chan's structure employs a hierarchy of balanced search structures over the hull edges, achieving O(log^{1+ε} n) amortized time per update for any fixed ε > 0, while supporting queries such as extreme points or tangents in O(log n) time. A later improvement by Brodal and Jacob achieves O(log n) amortized time per update, using techniques including shallow cuttings and careful amortized rebuilding to handle rebalancing efficiently. These data structures rely on amortized analysis to bound rebuilding costs across sequences of operations.

In streaming settings, where points arrive in a single pass and space must be minimized, algorithms compute approximate convex hulls known as ε-hulls. These maintain a small subset of points whose hull approximates the true hull within a factor of (1+ε) in width, using O((1/√ε) log(1/ε)) space in 2D and a constant number of passes, with update time polylogarithmic in the stream length. Such approximations are sufficient for many applications, as exact maintenance often requires prohibitive space.

These techniques find applications in tracking moving points, as in kinetic data structures for collision detection, and in database systems for efficient range queries leveraging hull approximations. Historically, online and dynamic methods emerged in the early 1980s with O(log² n) per-operation structures such as that of Overmars and van Leeuwen, while advancements in the 2000s refined the bounds toward logarithmic through more sophisticated balancing mechanisms.
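An insertion-only sketch of the idea in Python, maintaining just the upper hull in an x-sorted list (the lower hull is symmetric); names are illustrative. A Python list insert costs O(n) per operation, so the logarithmic bounds cited above require a balanced search tree, but the locate-then-repair pattern is the same:
import bisect

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

class OnlineUpperHull:
    def __init__(self):
        self.h = []                        # upper-hull vertices by increasing x
    def insert(self, p):
        h = self.h
        i = bisect.bisect_left(h, p)       # locate p's slot in x-order
        if 0 < i < len(h) and cross(h[i - 1], h[i], p) <= 0:
            return                         # p is on or below the hull: no change
        h.insert(i, p)
        while i >= 2 and cross(h[i - 2], h[i - 1], p) >= 0:
            del h[i - 1]; i -= 1           # predecessor is no longer extreme
        while i + 2 < len(h) and cross(p, h[i + 1], h[i + 2]) >= 0:
            del h[i + 1]                   # successor is no longer extreme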

Convex Hulls of Simple Polygons

Computing the convex hull of a simple polygon, a non-self-intersecting closed chain of line segments, benefits from the ordered structure of its vertices along the boundary, enabling algorithms that run in linear time O(n), where n is the number of vertices. Unlike the general case for unordered point sets, which requires O(n log n) time in the worst case, the boundary ordering allows traversal-based methods to identify hull vertices without sorting. These algorithms typically traverse the boundary once, maintaining candidate hull points while eliminating interior or non-extreme vertices through local convexity tests, such as orientation checks that distinguish left from right turns.

A fundamental approach divides the polygon boundary at the leftmost and rightmost vertices into an upper chain and a lower chain, then forms the convex hull by computing the convex upper chain and convex lower chain separately and connecting them at the endpoints. For each chain, a stack-based scan processes vertices sequentially: starting with the first two vertices on the stack, each subsequent vertex is tested for a left turn against the top two stack vertices; on a right turn, the top vertex is popped until convexity is restored, giving amortized constant time per vertex. This interplay of the sequential input pointer and the stack top builds each convex chain in O(n) total time, yielding the full hull upon combination; care is needed because a simple polygon's chains need not be x-monotone, a pitfall that invalidated several early proposed algorithms.

The vertices on these chains are precisely those where the boundary maintains left turns (for counterclockwise orientation), skipping reflex chains—sequences of vertices whose internal angles exceed 180 degrees and which form pockets. Bridges, diagonals connecting non-consecutive hull vertices, span these pockets, effectively replacing reflex chains with straight-line hull edges. Identifying bridges occurs implicitly during the stack maintenance, as popped vertices delineate the start and end of skipped pockets. The resulting hull thus comprises a subset of the original edges plus these bridges, enclosing the entire polygon.

Melkman's algorithm provides an elegant variant for constructing the convex hull of a simple polyline (applicable to closed polygons by treating the boundary as a polyline), achieving linear time using a double-ended queue (deque) to maintain the two tangent chains of the current hull. As vertices are added sequentially, the deque stores hull candidates, with insertions and deletions at both ends preserving the lower and upper tangents from the new vertex to the existing hull; local orientation tests ensure only extreme points remain, avoiding full rebuilds. This deque structure handles online addition efficiently, making it suitable for streaming boundary data, and outputs the hull upon completion.

Historically, prior to linear-time methods, computing the convex hull of a simple polygon's vertices treated them as an unordered set, requiring O(n log n) algorithms like Graham scan or Jarvis march; the first correct O(n) algorithm was introduced by McCallum and Avis in 1979, using two stacks for boundary scanning and a region-based classification of vertices to backtrack past non-hull points. Melkman's 1987 work advanced this to an online setting. These developments exploit the polygon's structure for efficiency.
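A sketch of Melkman's algorithm in Python, with points as tuples and the hull stored in a deque that keeps the same vertex at both ends; the cross() helper matches the earlier listings, and the input is assumed to be a simple polyline with distinct vertices and no three consecutive collinear points:
from collections import deque

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def melkman(polyline):
    p = list(polyline)
    if len(p) < 3:
        return p
    # Seed the deque with the first triangle, oriented counterclockwise.
    d = deque([p[2], p[0], p[1], p[2]] if cross(p[0], p[1], p[2]) > 0
              else [p[2], p[1], p[0], p[2]])
    for v in p[3:]:
        # Simplicity of the polyline means v is outside the current hull
        # only if it sees the edge at one of the two deque ends.
        if cross(d[-2], d[-1], v) > 0 and cross(d[0], d[1], v) > 0:
            continue                       # v is inside the hull: skip it
        while cross(d[-2], d[-1], v) <= 0:
            d.pop()                        # restore convexity at the back
        d.append(v)
        while cross(d[0], d[1], v) <= 0:
            d.popleft()                    # restore convexity at the front
        d.appendleft(v)
    return list(d)[:-1]                    # drop the duplicated end vertex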
While the primary focus is hull computation, the identified bridges and reflex pockets facilitate decomposition of the polygon into convex components; for instance, each pocket can be triangulated or trapezoidalized in linear time using visibility from the bridge endpoints, yielding a convex partition overall.

Parallel and Randomized Approaches

Parallel approaches to convex hull computation leverage multi-processor architectures to reduce runtime, often achieving logarithmic time complexities in the PRAM model. A notable example is the parallelization of the Graham scan algorithm, which begins with parallel sorting of points by polar angle around a reference point, executable in O(log n) time using n processors on a CREW PRAM. Subsequent steps use parallel prefix scans to identify the convex chain, enabling the full algorithm to run in O(log n) time with O(n log n) work overall. This adaptation maintains the sequential algorithm's simplicity while distributing the sorting and scanning phases across processors.

Randomized incremental construction offers another avenue for parallelism, particularly in higher dimensions, by adding points in random order and processing independent updates concurrently. In the CRCW PRAM model, this yields an expected runtime of O(n log n / p + log² n) on p processors for constant-dimensional cases, with the dependence graph of insertions exhibiting O(log n) depth with high probability. In the plane, the approach incorporates Bentley–Ottmann-style sweeping to handle edge conflicts efficiently during incremental updates. These methods ensure work-efficiency matching sequential counterparts while exploiting randomness for balanced load distribution.

In distributed settings, adaptations of Quickhull facilitate scalability across multiple machines by partitioning the point set. Building on the sequential algorithm, which recursively partitions points into regions defined by extreme facets, the distributed variant evenly divides points across m machines, each computing local extrema and farthest points from bounding lines. A driver coordinates global extrema broadcasts and recursion, akin to dividing the plane into strips along coordinate axes, with communication costs of O(mh), where h is the hull size. This yields a total work of O(nmh/p) and logarithmic depth, suitable for environments without explicit MapReduce support but extensible to such frameworks via key-based partitioning.

Recent advancements include GPU-accelerated variants of Quickhull, which offload interior-point culling to the GPU for massive parallelism. The hybrid approach uses GPU kernels to filter points against an incremental pseudo-hull, processing up to 85 million points per second and reducing candidates by two orders of magnitude, followed by CPU-based exact computation. On GTX 580 hardware, this achieves 13–27× speedups for static sets and up to 46× for deforming ones compared to CPU-only Quickhull. In high dimensions, a 2025 method employs iterative sampling with optimization solvers to build an exact hull reference set, converging in expected polynomial time O(n^{p+2}) (with p ≈ 4 for interior-point solvers), independent of the dimension and leveraging randomness for initialization.

Computational models like CREW and CRCW PRAM underpin these algorithms, with CRCW allowing concurrent writes for faster conflict resolution in randomized settings, as in 3D hulls achieving O(log log n) time with optimal work. Tradeoffs arise in communication volume versus parallelism: PRAM assumes unlimited shared-memory access, but distributed implementations incur O(h) messages per round, balancing scalability against network costs in multi-core or cluster deployments.
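The partition-hull-merge pattern behind many of these schemes can be sketched in a few lines of Python with process-level parallelism; convex_hull() is the monotone chain routine from earlier, and the worker count is illustrative:
from concurrent.futures import ProcessPoolExecutor

def parallel_hull(points, workers=4):
    # Each worker hulls one chunk; only chunk-hull vertices can be global
    # hull vertices, so a final sequential hull over the survivors suffices.
    chunks = [points[i::workers] for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as ex:
        partial_hulls = list(ex.map(convex_hull, chunks))
    return convex_hull([p for h in partial_hulls for p in h])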
