Quickhull
Quickhull is a practical algorithm for computing the convex hull of a finite set of points in n-dimensional space, combining a two-dimensional divide-and-conquer procedure with a general-dimensional beneath-beyond method to efficiently identify the smallest convex set containing all input points.[1] Developed by C. Bradford Barber, David P. Dobkin, and Hannu Huhdanpaa, it was published in 1996 in the ACM Transactions on Mathematical Software.[1]
The algorithm operates incrementally, starting with an initial simplex and iteratively selecting the point farthest from the current hull to partition the remaining points into regions visible to new facets, discarding interior points and constructing the hull by connecting these facets.[1] The approach draws inspiration from quicksort: assuming the partitions remain balanced, it runs in O(n log r) time in dimensions d ≤ 3 (where n is the number of input points and r is the number of processed points) and in O(n f_r / r) time in higher dimensions d ≥ 4 (with f_r denoting the maximum number of facets of a convex hull with r vertices).[1] Quickhull is output-sensitive, performing faster on inputs with many non-extreme points compared to algorithms like gift wrapping, while requiring less memory and handling floating-point precision issues through "thick" facets that tolerate small errors.[2]
A key implementation of Quickhull is the Qhull software package, which extends the algorithm to compute Delaunay triangulations, Voronoi diagrams, and halfspace intersections, and has been applied in fields such as layered manufacturing, spectrometry, and computer graphics.[1] The algorithm's robustness and efficiency have made it a standard tool in computational geometry libraries, influencing subsequent variants and adaptations for specific domains like 3D modeling.[3]
Introduction
Overview
Quickhull is an algorithm designed to compute the convex hull of a finite set of points in n-dimensional space.[4] The convex hull represents the smallest convex set that contains all the given points, analogous to the shape formed by stretching a rubber band around the outermost points in two dimensions.[4]
At its core, Quickhull adopts a divide-and-conquer paradigm inspired by the QuickSort sorting algorithm, which involves recursively partitioning the input points relative to extreme points and the furthest points from the lines connecting those extremes.[4] This approach efficiently narrows down the candidate points for the hull boundary by eliminating interior points in each subdivision.[4]
Quickhull is practical in two and three dimensions as well as in higher-dimensional settings, performing well on typical point distributions without excessive computational overhead.[4]
History
Quickhull was developed by C. Bradford Barber, David P. Dobkin, and Hannu Huhdanpaa while working at the Geometry Center, University of Minnesota, with an initial technical report published in 1993.[4] The algorithm emerged as part of efforts to implement an efficient convex hull computation tool, initially motivated by the need for robust handling of point sets in low dimensions.[1]
An initial formal description of Quickhull appeared in the 1993 technical report "The Quickhull Algorithm for Convex Hulls" (GCG53).[5] The peer-reviewed publication followed in 1996 in the ACM Transactions on Mathematical Software.[1] This work built on the randomized incremental construction paradigm introduced by Kenneth L. Clarkson and Peter W. Shor in their 1989 paper, adapting it to prioritize efficiency in practice through a divide-and-conquer strategy.[6][1]
Originally focused on two-dimensional convex hulls, the method was extended to higher dimensions by integrating it with the beneath-beyond algorithm, enabling general applicability.[1] Empirical evaluations in the paper highlighted Quickhull's advantages on inputs with numerous non-extreme points, where it processed far fewer points and required less memory than randomized incremental construction; for instance, for 10,000 points in three dimensions it processed only 153 points versus 387 for the randomized counterpart.[1] These studies established Quickhull as a strong choice for real-world datasets dominated by interior points.[1]
Algorithm Description
High-Level Overview
Quickhull operates through three primary phases to compute the convex hull of a set of points in the plane: initial hull construction, recursive decomposition, and hull refinement. In the initial phase, the algorithm identifies extreme points, such as the leftmost and rightmost points in the set, to establish a foundational base line that anchors the subsequent partitioning process. This base line serves as the starting point for dividing the input points into manageable regions, ensuring that the algorithm begins with a simple, boundary-defining structure.[1]
The core process unfolds in the recursive decomposition phase, where the algorithm partitions the points relative to the current base line. It selects the point farthest from this line to form a triangle with the base endpoints, effectively splitting the point set into subregions on either side of the new edge. Recursion then applies the same partitioning strategy to these subregions, progressively building the hull by incorporating the most distant points and refining the boundaries iteratively. This divide-and-conquer approach mirrors quicksort's partitioning, enabling efficient exploration of the point set without exhaustive examination.[1]
Hull refinement occurs throughout via the "prune-and-search" concept, which discards points lying strictly inside the triangles formed during decomposition, thereby eliminating interior points early and focusing computation on potential hull vertices. The recursion distinctly constructs the upper and lower hulls separately: for the upper hull, it processes points above the base line in a counterclockwise manner, while the lower hull handles points below in a clockwise direction, ensuring a complete and non-overlapping final convex boundary.[1]
Detailed Steps
The Quickhull algorithm begins by identifying two initial extreme points from the input set of points in the plane, typically the points with the minimum and maximum x-coordinates, which form the initial base line or edge of the convex hull.[4] These points serve as the starting hull segment, and all other points are considered relative to this line.
Next, the remaining points are partitioned into two subsets: those lying above the base line and those below it, based on their orientation relative to the line segment.[4] This classification uses the cross-product to determine the side of the line on which each point resides, effectively dividing the problem into two independent subproblems for each side.
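This cross-product side test can be sketched as a small Python helper (the function name and tuple-based point representation are illustrative, not from the original paper):

```python
def cross(a, b, p):
    """z-component of (b - a) x (p - a): positive if p lies to the left
    of the directed line a -> b, negative if right, zero if collinear."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

a, b = (0.0, 0.0), (4.0, 0.0)
print(cross(a, b, (2.0, 1.0)))   # 4.0: above (left of) the line
print(cross(a, b, (2.0, -1.0)))  # -4.0: below (right of) the line
print(cross(a, b, (2.0, 0.0)))   # 0.0: collinear
```

Because only the sign is needed for classification, no square roots or divisions are involved, which keeps the partitioning step cheap and numerically simple.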
For each non-empty subset, the algorithm selects the point P that is farthest from the base line, measured by the perpendicular distance.[4] If a subset contains no points, the base line segment is confirmed as part of the convex hull, and processing for that side terminates.
With point P identified, a triangle is formed using P and the two endpoints of the base line.[4] All points lying strictly inside this triangle are discarded, as they cannot be part of the final convex hull.
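The discard step relies on a strict point-in-triangle test, which can be expressed as three orientation checks agreeing in sign; the sketch below is illustrative (helper names are not from the original paper):

```python
def cross(a, b, p):
    # z-component of (b - a) x (p - a)
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def inside_triangle(p, a, b, c):
    """True if p lies strictly inside triangle abc (either winding order)."""
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    # Strictly inside exactly when all three orientation tests agree in sign
    return (d1 > 0 and d2 > 0 and d3 > 0) or (d1 < 0 and d2 < 0 and d3 < 0)

tri = ((0.0, 0.0), (4.0, 0.0), (2.0, 3.0))
print(inside_triangle((2.0, 1.0), *tri))  # True: such a point is discarded
print(inside_triangle((2.0, 4.0), *tri))  # False: remains a hull candidate
```

Points on a triangle edge or vertex produce a zero cross product and fail the strict test, matching the "strictly inside" wording above.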
The process then recurses on the two new line segments created by connecting P to each endpoint of the base line, using the remaining points not discarded in the previous step.[4] This recursion continues independently for each segment and side, building the hull incrementally.
Collinear points require care: a point lying exactly on a line has zero perpendicular distance and falls into neither strict subset, so implementations either omit such points from the output (reporting only extreme vertices) or retain them with a tie-breaking rule when all boundary points are desired.[4] Degenerate cases, such as fewer than three points or an entirely collinear input, yield a line segment or a single point as the hull, with no further recursion needed.[4]
The algorithm terminates for a given segment when no points remain outside it, meaning all relevant points have been incorporated into the hull or discarded.[4] The full convex hull is obtained by collecting all confirmed edges from the recursive calls.
Implementation
Pseudocode for 2D
The 2D Quickhull algorithm processes an input set of points in the plane to compute their convex hull through a divide-and-conquer approach. The implementation begins with a main function that identifies the extreme points (minimum and maximum x-coordinates, found with a linear scan), then initiates the recursive construction of the upper and lower hull chains. A helper function computes the perpendicular distance from a point to the line through two base points as the cross-product magnitude divided by the length of the base segment. The core recursive function selects the furthest point on the relevant side of the current hull edge, partitions the remaining points relative to the two new edges, and recurses on the subregions, with base cases when no points lie outside the current edge or the input has fewer than three points. This structure accumulates the hull vertices in order without redundant processing.[1]
The following pseudocode outlines the 2D Quickhull algorithm, assuming points are represented as pairs (x, y) and hull vertices are accumulated in a list hull. All coordinate operations use floating-point arithmetic. The lower hull is built with the base line direction reversed, so that the two chains join into a single consistently oriented boundary.
pseudocode
function cross(A, B, P):
    # z-component of (B - A) × (P - A); positive if P lies strictly left of A->B
    return (B.x - A.x) * (P.y - A.y) - (B.y - A.y) * (P.x - A.x)

function distance_to_line(P, A, B):
    # Perpendicular distance |(B - A) × (P - A)| / ||B - A||
    length = sqrt((B.x - A.x)^2 + (B.y - A.y)^2)
    if length == 0:
        return 0
    return abs(cross(A, B, P)) / length

function build_hull(points, A, B, hull):
    # Precondition: every point in points lies strictly left of the directed line A->B.
    # Appends, in order, the hull vertices lying strictly between A and B.
    if points is empty:
        return                        # base case: AB is confirmed as a hull edge
    # Select the point C furthest from line AB; since ||B - A|| is constant here,
    # maximizing cross(A, B, P) gives the same result without the division
    C = point in points maximizing distance_to_line(P, A, B)
    # Points strictly inside triangle A, C, B lie right of both A->C and C->B,
    # so the two strict partitions below discard them automatically
    left_of_AC = [P in points : P ≠ C and cross(A, C, P) > 0]
    left_of_CB = [P in points : P ≠ C and cross(C, B, P) > 0]
    build_hull(left_of_AC, A, C, hull)
    hull.append(C)
    build_hull(left_of_CB, C, B, hull)

function quickhull(points):
    if len(points) < 3:
        return points                 # degenerate: hull is the point or segment itself
    # Extreme points: minimum and maximum x-coordinate (break ties by y);
    # a single linear scan suffices, no sorting is required
    A = point with minimum x
    B = point with maximum x
    upper = [P in points : cross(A, B, P) > 0]   # strictly above the line through A and B
    lower = [P in points : cross(B, A, P) > 0]   # strictly below it
    hull = [A]
    build_hull(upper, A, B, hull)     # upper chain, from A to B
    hull.append(B)
    build_hull(lower, B, A, hull)     # lower chain, from B back to A
    return hull                       # boundary vertices in clockwise order (y-axis up)
This pseudocode captures the recursive partitioning and furthest-point selection central to Quickhull, with the sign of the cross product determining sides and the distance computation prioritizing the furthest candidate. The base cases ensure termination when subproblems are trivial, confirming the current edge as a hull edge. Reversing the base line for the lower chain keeps the final boundary consistently oriented.[1]
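As a concrete, runnable companion to the pseudocode, the same divide-and-conquer scheme can be sketched in Python. This is a minimal version assuming points in general position; the names and the clockwise output convention are illustrative choices, not mandated by the original paper:

```python
def cross(a, b, p):
    # Positive if p is strictly left of the directed line a -> b
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def build_hull(points, a, b, hull):
    # points lie strictly left of a -> b; append hull vertices between a and b
    if not points:
        return
    # For fixed a, b the cross product is proportional to the distance to
    # line ab, so it can serve directly as the furthest-point key
    c = max(points, key=lambda p: cross(a, b, p))
    left_ac = [p for p in points if cross(a, c, p) > 0]
    left_cb = [p for p in points if cross(c, b, p) > 0]
    build_hull(left_ac, a, c, hull)
    hull.append(c)
    build_hull(left_cb, c, b, hull)

def quickhull(points):
    if len(points) < 3:
        return list(points)          # degenerate: point or segment
    a = min(points)                  # leftmost point (ties broken by y)
    b = max(points)                  # rightmost point
    upper = [p for p in points if cross(a, b, p) > 0]
    lower = [p for p in points if cross(b, a, p) > 0]
    hull = [a]
    build_hull(upper, a, b, hull)
    hull.append(b)
    build_hull(lower, b, a, hull)
    return hull                      # clockwise, starting at the leftmost point

pts = [(0, 0), (4, 0), (2, 3), (2, -1), (2, 1), (1, 1), (3, 1)]
print(quickhull(pts))  # → [(0, 0), (2, 3), (4, 0), (2, -1)]
```

The three interior points (2, 1), (1, 1), and (3, 1) are eliminated during partitioning and never recursed on, illustrating the prune-and-search behavior described above.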
Complexity Analysis
The Quickhull algorithm for computing the convex hull of a set of n points in 2D exhibits a time complexity that mirrors the partitioning behavior of QuickSort. In the worst case, it requires O(n^2) time, which occurs when the partitions are highly unbalanced, for example when nearly all points are extreme and each furthest-point selection splits off only a few points.[1] This degradation happens because each recursive call may then discard almost nothing, forcing the algorithm to re-examine a large portion of the input at every level.
In the average case, assuming a balanced partitioning similar to QuickSort's expected behavior, the time complexity is O(n log n). This arises from the recursive division of the point set into subregions, where each level of recursion processes the points in O(n) time overall, and the depth of recursion is O(log n) on average.[1] The algorithm's efficiency is enhanced by early discarding of points inside triangular regions formed by hull edges and the furthest point, which reduces the effective input size in subsequent calls.
Several factors influence the practical performance of Quickhull in 2D. The distribution of input points plays a key role; when many points are non-extreme (interior to the hull), they are quickly eliminated via distance tests relative to edges, accelerating the process.[1] Empirical evaluations in the original implementation demonstrate that Quickhull outperforms the Graham scan algorithm on random point sets, highlighting its speed advantage under typical conditions.[1]
A randomized variant of Quickhull, where the next point is selected randomly rather than as the furthest, transforms it into a randomized incremental algorithm with expected O(n log n) time complexity.[1] This approach draws from the broader framework of Clarkson and Shor's randomized incremental construction for convex hulls, adapting their use of conflict lists to outside sets for efficiency.[1]
Regarding space complexity, Quickhull requires O(n) space in 2D to store the input points and maintain the recursion stack, with the stack depth reaching O(n) in the worst case due to unbalanced partitions but averaging O(log n).[1] Auxiliary space is also O(n) for temporary lists of points associated with hull edges during partitioning.[1]
Extensions and Variants
Higher Dimensions
The extension of the Quickhull algorithm to n-dimensional space was developed by Barber, Dobkin, and Huhdanpaa in 1996, integrating the 2D Quickhull approach with the general-dimensional beneath-beyond paradigm.[7] This paradigm constructs the convex hull incrementally by adding points one at a time, classifying each new point as either inside (beneath) the current hull or outside (beyond) a specific facet. For points beyond a facet, the algorithm removes that facet and its adjacent ones, then forms new facets connecting the point to the exposed boundary (the horizon), ensuring the hull remains convex throughout the process.[7]
In three dimensions, Quickhull initializes the hull with a tetrahedron formed by four non-coplanar points, then recursively processes the remaining points by identifying visible facets—those where a point lies on the outer side, determined by a positive signed distance to the facet's plane.[7] Visible facets are deleted, and new triangular facets are created from the added point to the horizon ridges (the boundary edges not deleted). To efficiently track relations between points and facets, the algorithm maintains outside sets for each facet, listing points that are visible to it; these sets function similarly to a conflict graph, updating dynamically as facets are modified to avoid redundant visibility checks.[7] This structure allows selective processing of the furthest visible point per facet, reducing unnecessary computations.
For general n dimensions, the algorithm replaces 2D line distances with signed distances to (n-1)-dimensional hyperplanes, computed via the inner product of the point vector with the facet normal plus an offset.[7] Construction proceeds incrementally, starting from an initial simplex and adding points in a randomized order to promote efficiency, as this tends to delay the incorporation of non-extreme points that contribute little to the hull.[8] High dimensions pose challenges because the number of facets grows exponentially with the dimension, reaching O(n^{⌊d/2⌋}) facets for n input points in dimension d, which can lead to memory and time issues beyond 8-10 dimensions for practical inputs.[8] These are mitigated by skipping points inside the current hull early and using randomized selection to focus on extreme points, ensuring average-case performance remains favorable. The Qhull software library implements this n-dimensional Quickhull, supporting convex hull computations effectively up to about 8 dimensions, with practical limits around 10 dimensions depending on input size and hardware.[3]
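For intuition, the beneath/beyond visibility test described above can be sketched as a signed-distance computation. This is a minimal illustration with illustrative names: a facet is represented by a unit normal and an offset, and a point is visible from the facet when its signed distance is positive:

```python
def signed_distance(point, normal, offset):
    """Signed distance from point to the hyperplane n . x + d = 0,
    assuming normal is a unit vector pointing away from the hull interior."""
    return sum(n_i * x_i for n_i, x_i in zip(normal, point)) + offset

# Facet lying in the plane z = 1 with outward normal (0, 0, 1): n . x - 1 = 0
normal, offset = (0.0, 0.0, 1.0), -1.0
print(signed_distance((0.0, 0.0, 2.5), normal, offset))  # 1.5: beyond (visible)
print(signed_distance((0.0, 0.0, 0.5), normal, offset))  # -0.5: beneath (hidden)
```

A point with positive distance to some facet belongs in that facet's outside set; a point beneath every facet is interior and can be discarded, mirroring the triangle-discard step of the 2D algorithm.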
Other Variants
QuickhullDisk is an adaptation of the Quickhull algorithm specifically designed for computing the convex hull of a set of disks in the plane, where each disk is defined by a center and a radius. Developed by Chanyoung Song, Joonghyun Ryu, and Deok-Soo Kim in 2019, it extends the divide-and-conquer strategy of Quickhull by operating primarily on the centers of the disks while accounting for their radii to determine the boundary of the hull. The algorithm begins by identifying two extreme disks along a baseline, such as the leftmost and rightmost centers, and partitions the remaining disks into subsets based on their orientation relative to this baseline—specifically, those on the non-positive and non-negative sides. An apex disk, selected as the one farthest from the baseline (adjusted for radius), is then used to pivot: tangent lines from the apex to the current partial hull are computed to filter out non-contributing disks and define new sub-baselines for recursion on the subsets. This process ensures that the convex hull boundary consists of circular arcs from extreme disks connected by common tangent segments, with the hull represented by these extreme disks and their supporting tangents. In terms of performance, QuickhullDisk achieves an average time complexity of O(n log n) for n disks and demonstrates practical speedups, being approximately 2.6 times faster than traditional incremental algorithms for random disk sets and 1.2 times faster even in cases with clustered disks.[9]
The randomized variant of Quickhull introduces stochastic elements to enhance efficiency, particularly by incorporating random permutations of the input points inspired by the randomized incremental constructions of Clarkson and Shor. In this approach, instead of deterministically selecting the farthest point from a facet as in the original Quickhull, points are processed in a random order, which helps avoid worst-case scenarios and leads to better expected performance. This randomization draws from Clarkson and Shor's 1989 framework for random sampling in computational geometry, adapting it to Quickhull's structure to ensure that the expected number of structural changes during incremental updates remains low. For the planar case, the algorithm, as detailed by R. Wenger in 1995, achieves an expected running time of O(n log h), where h is the number of points on the convex hull, making it output-sensitive and efficient for inputs where h is small relative to n.[10] In higher dimensions, this randomized permutation strategy applied to Quickhull's beneath-beyond method provides robustness against degenerate inputs while maintaining the divide-and-conquer essence.
Parallel Quickhull variants leverage multi-core architectures by exploiting the independence of recursive subproblems in the divide-and-conquer process. A notable implementation framework, proposed by Stefan Näher and Daniel Schmitt in 2008, parallelizes Quickhull by representing subregions as jobs in a queue: after partitioning the point set via an initial baseline and selecting a farthest point to create independent subsets (e.g., above and below the pivot line), these subsets are assigned to separate threads for concurrent recursion. Synchronization occurs through mutex-protected merging of partial hulls at leaf nodes (small subsets solved sequentially) and during job completion, with primary threads handling deep recursions and secondary threads focusing on partitioning to balance load. This approach is particularly effective for large inputs, achieving speedups of up to 3.5 times on a quad-core processor for one million points distributed on or near a circle, though gains are more modest (around 1.5 times with two cores) for uniformly random distributions due to overhead in the root partition. Such parallelizations are well-suited for shared-memory systems and have been extended in later works to vectorized single-core optimizations before multi-threading.[11]
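The independence of the recursive subproblems can be illustrated with a toy two-thread sketch. This is only a minimal illustration of the idea, not the Näher-Schmitt job-queue framework, and the names are illustrative: the upper and lower chains share no data, so they can be computed concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

def cross(a, b, p):
    # Positive if p is strictly left of the directed line a -> b
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def chain(points, a, b):
    # Hull vertices strictly between a and b, given points left of a -> b
    if not points:
        return []
    c = max(points, key=lambda p: cross(a, b, p))  # furthest from line ab
    left_ac = [p for p in points if cross(a, c, p) > 0]
    left_cb = [p for p in points if cross(c, b, p) > 0]
    return chain(left_ac, a, c) + [c] + chain(left_cb, c, b)

def quickhull_parallel(points):
    a, b = min(points), max(points)
    upper = [p for p in points if cross(a, b, p) > 0]
    lower = [p for p in points if cross(b, a, p) > 0]
    # The two chains touch disjoint point sets, so no locking is needed
    with ThreadPoolExecutor(max_workers=2) as pool:
        up = pool.submit(chain, upper, a, b)
        lo = pool.submit(chain, lower, b, a)
        return [a] + up.result() + [b] + lo.result()

pts = [(0, 0), (4, 0), (2, 3), (2, 1), (2, -1)]
print(quickhull_parallel(pts))  # → [(0, 0), (2, 3), (4, 0), (2, -1)]
```

A production version would keep recursing in parallel below the root split and merge partial chains as jobs complete; the sketch parallelizes only the first, most expensive partition.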
Applications and Comparisons
Applications
Quickhull finds widespread application in computational geometry for tasks requiring efficient computation of convex hulls, particularly in scenarios involving large point sets where bounding shapes facilitate proximity queries. In computer graphics and robotics, it is employed for collision detection by generating convex enclosures around objects, enabling rapid checks for intersections or overlaps during simulations and path planning. For instance, the algorithm supports the creation of bounding volumes that approximate complex shapes, reducing computational overhead in real-time environments.[12]
In geographic information systems (GIS) and computer-aided design (CAD), Quickhull computes boundaries for point clouds derived from sensors, laser scans, or design models, aiding in the delineation of spatial extents and feature extraction. This is particularly useful for generating minimal enclosing polygons that represent clusters of geospatial data or 3D models, streamlining visualization and analysis workflows.[13]
Within scientific computing, Quickhull serves as a preprocessing step for Delaunay triangulation, which is the dual of the convex hull and essential for simulations in fields like finite element analysis and molecular modeling. By first computing the hull, it ensures robust triangulation of scattered points, handling numerical stability through merged facets.[3]
The algorithm is implemented in the open-source Qhull library, released in the 1990s, which provides robust handling of floating-point errors and supports higher dimensions. Qhull is integrated into major software ecosystems, including MATLAB's convhulln function for N-dimensional hulls, SciPy's spatial module for Python-based geometric computations, and GNU Octave's computational geometry functions. In niche areas like pattern recognition, convex hull extremes computed via Quickhull aid outlier detection by identifying boundary points that deviate from the core data distribution.[3][14][15][16][17][18]
Comparison with Other Algorithms
Quickhull provides a simpler implementation than Graham scan, as it avoids the polar-angle sorting step central to the latter, making it easier to code and extend to higher dimensions. However, Quickhull's worst-case time complexity is O(n²), which can occur on adversarial inputs such as points in convex position that produce highly unbalanced partitions, whereas Graham scan guarantees O(n log n) performance regardless of input distribution. In practice, Quickhull performs better on average for point sets with sparse hulls (those with many interior points) due to its divide-and-conquer strategy that quickly discards irrelevant points, achieving expected O(n log n) time similar to Graham scan but with fewer operations on non-extreme points.[7][19]
Compared to Jarvis march (also known as gift wrapping), Quickhull scales more favorably for large inputs, with average O(n log n) time versus Jarvis march's O(n h), where h is the number of hull vertices; this makes Quickhull preferable when h is much smaller than n, as in typical random distributions. Jarvis march excels for very small h, such as h ≈ log n, where its simplicity and low constant factors can yield faster runtimes without the logarithmic overhead of Quickhull's partitioning. Empirical tests show Quickhull outperforming Jarvis march by orders of magnitude on datasets with thousands of points and modest h, due to reduced per-step computations.[7][19]
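For comparison, gift wrapping's O(nh) behavior comes from performing one linear scan over all n points per hull vertex; a minimal 2D sketch, assuming points in general position (names illustrative):

```python
def jarvis_march(points):
    """Gift wrapping: one O(n) scan per hull vertex, O(n*h) total."""
    start = min(points)              # leftmost point is always on the hull
    hull, p = [], start
    while True:
        hull.append(p)
        # Candidate for the next hull vertex: any point other than p
        q = points[0] if points[0] != p else points[1]
        for r in points:
            # If r lies strictly to the right of the directed line p -> q,
            # it is "more clockwise" around p and becomes the new candidate
            if (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]) < 0:
                q = r
        p = q
        if p == start:               # wrapped all the way around
            break
    return hull                      # counterclockwise order

pts = [(0, 0), (4, 0), (2, 3), (2, 1), (2, -1)]
print(jarvis_march(pts))  # → [(0, 0), (2, -1), (4, 0), (2, 3)]
```

Every iteration of the outer loop scans all n points to find one hull vertex, which is exactly why the method wins only when h is very small; Quickhull instead shrinks the candidate set at every level of recursion.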
Quickhull exhibits output-sensitive behavior in practice, processing roughly O(n log h) time for hull size h in low dimensions, but lacks theoretical guarantees, unlike Chan's algorithm, which achieves the optimal O(n log h) bound through a sophisticated combination of Jarvis march and Graham scan on subsets. Chan's approach ensures worst-case optimality for dimensions up to 3, making it suitable for adversarial inputs, while Quickhull's reliance on furthest-point selection can lead to unbalanced partitions and quadratic slowdowns. Despite this, Quickhull often matches or exceeds Chan's practical speed in 2D and 3D on random data, thanks to its minimal memory footprint and avoidance of multiple hull computations.[7]
As a practical simplification of randomized incremental algorithms like Clarkson-Shor, Quickhull replaces random point insertion with deterministic furthest-point selection, which reduces the number of processed points (e.g., 153 versus 387 for 10,000 points in 3D) and memory usage (proportional to output facets rather than all created ones), leading to faster execution in low dimensions without needing full randomization for expected performance. Clarkson-Shor provides stronger probabilistic guarantees, with expected O(n log n) time in 3D, but incurs higher overhead from conflict graph maintenance; Quickhull trades this for simplicity and speed on non-degenerate inputs.[7]
Quickhull is particularly recommended for 2D and 3D applications with expected random or clustered point distributions, where many points lie inside the hull, leveraging its efficiency on interior-heavy sets; for scenarios requiring guaranteed worst-case or output-sensitive bounds, such as high-dimensional or adversarial data, alternatives like Chan's or Clarkson-Shor algorithms are preferable.[7]