
Quickhull

Quickhull is a practical algorithm for computing the convex hull of a finite set of points in n-dimensional space, combining a two-dimensional divide-and-conquer procedure with the general-dimensional beneath-beyond method to efficiently identify the smallest convex set containing all input points. Developed by C. Bradford Barber, David P. Dobkin, and Hannu Huhdanpaa, it was published in 1996 in the ACM Transactions on Mathematical Software. The algorithm operates incrementally, starting with an initial simplex and iteratively selecting the point farthest from the current hull to partition the remaining points into regions visible to new facets, discarding interior points and constructing the hull by connecting these facets. This approach draws inspiration from quicksort, achieving expected time complexity of O(n log r) in dimensions d ≤ 3 (where n is the number of input points and r is the number of processed points) and O(n f_r / r) in higher dimensions d ≥ 4 (with f_r denoting the maximum number of facets of a hull of r vertices). Quickhull is output-sensitive, performing faster on inputs with many non-extreme points compared to algorithms like gift wrapping, while requiring less memory and handling floating-point precision issues through "thick" facets that tolerate small errors. A key implementation of Quickhull is the Qhull software package, which extends the algorithm to compute Delaunay triangulations, Voronoi diagrams, and halfspace intersections, and has been applied in fields such as layered manufacturing. The algorithm's robustness and efficiency have made it a standard tool in computational geometry libraries, influencing subsequent variants and adaptations for specific domains.

Introduction

Overview

Quickhull is an algorithm designed to compute the convex hull of a finite set of points in n-dimensional space. The convex hull is the smallest convex set that contains all the given points, analogous in two dimensions to the shape formed by stretching a rubber band around the outermost points. At its core, Quickhull adopts a divide-and-conquer paradigm inspired by the quicksort algorithm: it recursively partitions the input points relative to extreme points and the points furthest from the lines connecting those extremes. This approach efficiently narrows the candidate points for the hull boundary by eliminating interior points in each subdivision. Quickhull is practical both for two-dimensional inputs and for higher dimensions, where it performs well on typical point distributions without excessive computational overhead.

History

Quickhull was developed by C. Bradford Barber, David P. Dobkin, and Hannu Huhdanpaa while working at the Geometry Center at the University of Minnesota, with an initial technical report published in 1993. The algorithm emerged from efforts to implement an efficient convex hull computation tool, initially motivated by the need for robust handling of point sets in low dimensions. The first formal description of Quickhull appeared in the 1993 technical report "The Quickhull Algorithm for Convex Hulls" (GCG53). The peer-reviewed publication followed in 1996 in the ACM Transactions on Mathematical Software. This work built on the randomized incremental construction paradigm introduced by Clarkson and Shor in their 1988 paper, adapting it to prioritize efficiency in practice through a divide-and-conquer strategy. Originally focused on two-dimensional convex hulls, the method was extended to higher dimensions by integrating it with the beneath-beyond algorithm, enabling general applicability. Empirical evaluations in the paper highlighted Quickhull's advantages over randomized incremental algorithms, especially on inputs with numerous non-extreme points, where it processed far fewer interior points and required less memory; for instance, on 10,000 points in three dimensions it processed only 153 points versus 387 for its randomized counterpart. These studies established Quickhull as a strong choice for real-world datasets dominated by interior points.

Algorithm Description

High-Level Overview

Quickhull operates through three primary phases to compute the convex hull of a set of points in the plane: initial hull construction, recursive partitioning, and hull refinement. In the initial phase, the algorithm identifies extreme points, such as the leftmost and rightmost points in the set, to establish a foundational base line that anchors the subsequent partitioning process. This base line serves as the starting point for dividing the input points into manageable regions, ensuring that the algorithm begins with a simple, boundary-defining structure. The core process unfolds in the recursive phase, where the algorithm partitions the points relative to the current base line. It selects the point farthest from this line to form a triangle with the base endpoints, effectively splitting the point set into subregions on either side of the new edges. The same partitioning strategy is then applied to these subregions, progressively building the hull by incorporating the most distant points and refining the boundaries iteratively. This divide-and-conquer approach mirrors quicksort's partitioning, enabling efficient exploration of the point set without exhaustive examination. Hull refinement occurs throughout via the prune-and-search concept, which discards points lying strictly inside the triangles formed during partitioning, thereby eliminating interior points early and focusing computation on potential hull vertices. The algorithm constructs the upper and lower hulls separately: the upper hull processes points above the base line, while the lower hull handles points below it with the traversal direction reversed, ensuring a complete and non-overlapping final boundary.

Detailed Steps

The Quickhull algorithm begins by identifying two initial extreme points from the input set of points in the plane, typically the points with the minimum and maximum x-coordinates, which form the initial base line of the hull. These points serve as the starting hull segment, and all other points are considered relative to this line. Next, the remaining points are partitioned into two subsets: those lying above the base line and those below it, based on their orientation relative to the directed line. This classification uses the sign of a cross product to determine the side of the line on which each point resides, effectively dividing the problem into two independent subproblems, one per side. For each non-empty subset, the algorithm selects the point P that is farthest from the base line, measured by perpendicular distance. If no points exist in the subset, the base line segment is confirmed as an edge of the convex hull, and processing for that side terminates. With point P identified, a triangle is formed from P and the two endpoints of the base line. All points lying strictly inside this triangle are discarded, as they cannot be part of the final convex hull. The process then recurses on the two new line segments created by connecting P to each endpoint of the base line, using the points that survived the previous step. This recursion continues independently for each segment and side, building the hull incrementally. Collinear points are handled by including them on the hull only if they lie on its edges; during distance calculations, points exactly on the line must be treated consistently to avoid excluding valid hull vertices. Degenerate cases, such as fewer than three points or an entirely collinear input, yield a line segment or a single point as the hull, with no further recursion needed. The algorithm terminates for a given segment when no points remain outside it, meaning all relevant points have been incorporated into the hull or discarded. The full convex hull is obtained by collecting all confirmed edges from the recursive calls.
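The side classification and farthest-point selection described above both reduce to a single cross-product computation. A minimal Python sketch of these two primitives (function and variable names are illustrative, not from the original paper):

```python
import math

def cross(a, b, p):
    # z-component of (b - a) x (p - a): > 0 if p lies left of the
    # directed line a -> b, < 0 if right, 0 if collinear
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def perpendicular_distance(a, b, p):
    # distance from p to the infinite line through a and b
    return abs(cross(a, b, p)) / math.dist(a, b)

# Classify candidates against the base line (0, 0) -> (4, 0)
base_a, base_b = (0.0, 0.0), (4.0, 0.0)
candidates = [(1.0, 2.0), (2.0, -1.0), (3.0, 3.0)]
above = [p for p in candidates if cross(base_a, base_b, p) > 0]
farthest = max(above, key=lambda p: cross(base_a, base_b, p))
```

Because the base line is fixed within one recursive call, the raw cross product is proportional to the perpendicular distance, so the farthest point can be selected without computing any square roots.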

Implementation

Pseudocode for 2D

The 2D Quickhull algorithm processes an input set of points in the plane to compute their convex hull through a divide-and-conquer approach. The implementation begins with a main function that identifies the extreme points (minimum and maximum x-coordinates) and then initiates the recursive construction of the upper and lower hulls. A helper function computes the cross product of the base edge with each candidate point: its sign determines the side of the line, and for a fixed base edge its magnitude is proportional to the perpendicular distance, so farthest-point selection needs no square root. The core recursive function selects the farthest point on the relevant side, partitions the surviving points relative to the two new edges, and recurses on the subregions, terminating when no points lie outside the current edge or the input has fewer than three points. This structure ensures the hull vertices are accumulated in order without redundant processing. The following pseudocode outlines the 2D Quickhull algorithm, assuming points are represented as pairs (x, y) and hull vertices are collected in a list hull. All coordinate operations use floating-point arithmetic. The lower hull is built by reversing the direction of the base edge, which keeps the traversal order consistent.
pseudocode
function cross(A, B, P):
    # z-component of (B - A) × (P - A): > 0 if P lies strictly left of A->B.
    # For a fixed edge AB, this is proportional to the distance from P to line AB.
    return (B.x - A.x) * (P.y - A.y) - (B.y - A.y) * (P.x - A.x)

function build_hull(points, A, B, hull):
    # points: the subset strictly left of the directed line A->B.
    # Appends the hull vertices strictly between A and B, in traversal order.
    if points is empty:
        return  # base case: segment AB is confirmed as a hull edge
    
    # Select the point C farthest from line AB (maximum cross product)
    C = the point P in points maximizing cross(A, B, P)
    
    # Partition into points left of A->C and points left of C->B.
    # Points inside triangle ABC fall in neither subset and are discarded.
    left_of_AC = [P in points where cross(A, C, P) > 0]
    left_of_CB = [P in points where cross(C, B, P) > 0]
    
    build_hull(left_of_AC, A, C, hull)
    hull.append(C)
    build_hull(left_of_CB, C, B, hull)

function quickhull(points):
    if len(points) < 3:
        return points  # degenerate cases: empty set, single point, or segment
    
    # Extreme points: minimum and maximum x-coordinate (ties broken by y)
    A = the point with minimum x
    B = the point with maximum x
    
    upper = [P in points where cross(A, B, P) > 0]  # above line AB
    lower = [P in points where cross(B, A, P) > 0]  # below line AB
    
    hull = [A]
    build_hull(upper, A, B, hull)   # upper hull, A to B
    hull.append(B)
    build_hull(lower, B, A, hull)   # lower hull, B to A
    return hull                     # vertices in clockwise order
This pseudocode captures the prune-and-search partitioning and furthest-point selection central to Quickhull, with the cross-product computation enabling efficient side determination and prioritization of candidate points. The base cases ensure termination when subproblems are trivial, and building the upper and lower hulls separately with reversed base edges maintains a consistent vertex ordering.
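The pseudocode translates directly into Python. A compact, runnable version for illustration (a robust implementation would use an exact orientation predicate for near-collinear inputs):

```python
def cross(a, b, p):
    # > 0 when p is strictly left of the directed line a -> b
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def build_hull(points, a, b, hull):
    # points: subset strictly left of a -> b; emits vertices between a and b
    if not points:
        return
    c = max(points, key=lambda p: cross(a, b, p))   # farthest from line ab
    build_hull([p for p in points if cross(a, c, p) > 0], a, c, hull)
    hull.append(c)
    build_hull([p for p in points if cross(c, b, p) > 0], c, b, hull)

def quickhull(points):
    if len(points) < 3:
        return list(points)                  # degenerate cases
    a = min(points)                          # leftmost point (lowest on ties)
    b = max(points)                          # rightmost point
    hull = [a]
    build_hull([p for p in points if cross(a, b, p) > 0], a, b, hull)
    hull.append(b)
    build_hull([p for p in points if cross(b, a, p) > 0], b, a, hull)
    return hull

square = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
print(quickhull(square))  # -> [(0, 0), (0, 2), (2, 2), (2, 0)]; (1, 1) discarded
```

Note that points inside a triangle are discarded implicitly: they fail both list-comprehension filters and simply never reach a deeper recursive call.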

Complexity Analysis

The Quickhull algorithm for computing the convex hull of a set of n points in the plane exhibits a time complexity that mirrors the partitioning behavior of quicksort. In the worst case, it requires O(n^2) time, which occurs when the partitions are highly unbalanced, for example when all points lie on a circle so that every point is extreme, leading to recursive calls that process nearly all remaining points at each level. This degradation happens because the selection of the furthest point from the current edge may consistently result in minimal point discards, forcing the algorithm to examine a large portion of the input repeatedly. In the average case, assuming balanced partitioning similar to quicksort's expected behavior, the time complexity is O(n \log n). This arises from the recursive division of the point set into subregions, where each level of recursion processes the points in O(n) time overall and the depth of recursion is O(\log n) on average. The algorithm's efficiency is enhanced by early discarding of points inside the triangular regions formed by hull edges and the furthest point, which reduces the effective input size in subsequent calls. Several factors influence the practical performance of Quickhull. The distribution of input points plays a key role: when many points are non-extreme (interior to the hull), they are quickly eliminated via distance tests relative to edges, accelerating the process. Empirical evaluations of the original implementation demonstrate that Quickhull outperforms randomized incremental algorithms on random point sets, highlighting its speed advantage under typical conditions. A randomized variant of Quickhull, in which the next point is selected randomly rather than as the furthest, becomes a randomized incremental algorithm with expected O(n \log n) running time. This approach draws from the broader framework of Clarkson and Shor's randomized incremental construction for convex hulls, adapting their conflict lists into the outside sets that Quickhull maintains per facet.
Regarding space complexity, Quickhull requires O(n) space overall to store the input points and maintain the hull, with the recursion stack reaching O(n) depth in the worst case due to unbalanced partitions but averaging O(\log n). Auxiliary space is also O(n) for the temporary lists of points associated with hull edges during partitioning.
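The quicksort analogy can be made explicit with a recurrence. If a call on n points spends O(n) time on farthest-point selection and partitioning, and the two subproblems receive n_1 and n_2 of the surviving points, then:

```latex
T(n) = T(n_1) + T(n_2) + O(n), \qquad n_1 + n_2 \le n - 1
```

This solves to T(n) = O(n \log n) when the splits are balanced (n_i \approx n/2, giving O(\log n) recursion depth) and degrades to T(n) = O(n^2) when one side repeatedly receives n - 1 points, as can happen when every input point is extreme.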

Extensions and Variants

Higher Dimensions

The extension of the Quickhull algorithm to n-dimensional space was developed by Barber, Dobkin, and Huhdanpaa in 1996, integrating the 2D Quickhull approach with the general-dimensional beneath-beyond paradigm. This paradigm constructs the convex hull incrementally by adding points one at a time, classifying each new point as either inside (beneath) the current hull or outside (beyond) a specific facet. For points beyond a facet, the algorithm removes that facet and its adjacent visible ones, then forms new facets connecting the point to the exposed boundary (the horizon), ensuring the hull remains convex throughout the process. In three dimensions, Quickhull initializes the hull with a tetrahedron formed by four non-coplanar points, then recursively processes the remaining points by identifying visible facets, those for which the point has a positive signed distance to the facet's supporting plane. Visible facets are deleted, and new triangular facets are created from the added point to the horizon ridges (the edges not deleted). To efficiently track relations between points and facets, the algorithm maintains an outside set for each facet, listing the points visible to it; these sets function like a conflict graph, updated dynamically as facets are modified to avoid redundant checks. This structure allows selective processing of the furthest visible point per facet, reducing unnecessary computation. For general n dimensions, the algorithm replaces 2D line distances with signed distances to (n-1)-dimensional hyperplanes, computed via the inner product of the point vector with the facet normal plus an offset. Construction proceeds incrementally, starting from an initial simplex and adding points in a randomized order to promote efficiency, as this tends to delay the incorporation of non-extreme points that contribute little to the hull.
High dimensions pose challenges due to the exponential growth in the number of facets—scaling as O(n^{⌊d/2⌋}) for input size n and dimension d—which can lead to memory and time issues beyond 8-10 dimensions for practical inputs. These are mitigated by skipping points inside the current hull early and using randomized selection to focus on extreme points, ensuring average-case performance remains favorable. The Qhull software library implements this n-dimensional Quickhull, supporting convex hull computations up to 8 dimensions effectively, with practical limits around 10 dimensions depending on input size and hardware.
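The visibility test at the heart of the 3D (and higher-dimensional) algorithm reduces to a signed distance between a point and a facet's supporting hyperplane. A minimal 3D sketch in Python (function names are illustrative; Qhull's internal representation differs):

```python
def facet_plane(a, b, c):
    # Plane through triangle (a, b, c): returns (normal, offset) with
    # normal = (b - a) x (c - a), so that n . x + offset = 0 on the plane
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    offset = -sum(n[i] * a[i] for i in range(3))
    return n, offset

def signed_distance(plane, p):
    # Positive when p lies on the normal's side: the facet is "visible" from p
    n, offset = plane
    norm = sum(x * x for x in n) ** 0.5
    return (sum(n[i] * p[i] for i in range(3)) + offset) / norm

# Facet lying in the z = 0 plane, with normal pointing along +z
plane = facet_plane((0, 0, 0), (1, 0, 0), (0, 1, 0))
```

A facet whose vertices are ordered consistently (normal pointing outward from the hull) is visible from exactly the points with positive signed distance, which is how outside sets are populated.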

Other Variants

QuickhullDisk is an adaptation of the Quickhull algorithm for computing the convex hull of a set of disks in the plane, where each disk is defined by a center and a radius. Developed by Chanyoung Song, Joonghyun Ryu, and Deok-Soo Kim in 2019, it extends the divide-and-conquer strategy of Quickhull by operating primarily on the centers of the disks while accounting for their radii to determine the hull boundary. The algorithm begins by identifying two extreme disks, such as those touching the leftmost and rightmost hull points, and partitions the remaining disks into subsets based on their position relative to the oriented line between these extremes, specifically those on its non-positive and non-negative sides. An apex disk, selected as the one farthest from the baseline (adjusted for radius), is then used to prune: tangent lines from the apex disk to the current partial hull are computed to filter out non-contributing disks and to define new sub-baselines for recursion on the subsets. This process ensures that the hull boundary consists of circular arcs from extreme disks connected by common tangent segments, with the hull represented by these extreme disks and their supporting tangents. In terms of performance, QuickhullDisk achieves an average time complexity of O(n log n) for n disks and demonstrates practical speedups, being approximately 2.6 times faster than a traditional incremental algorithm for random disk sets and 1.2 times faster even in cases with clustered disks. The randomized variant of Quickhull introduces stochastic elements to enhance efficiency, in particular a random ordering of the input points inspired by the randomized incremental constructions of Clarkson and Shor. In this approach, instead of deterministically selecting the farthest point from a facet as in the original Quickhull, points are processed in random order, which helps avoid worst-case scenarios and leads to better expected performance.
This randomization draws from Clarkson and Shor's framework for random sampling in computational geometry, adapted to Quickhull's structure so that the expected number of structural changes during incremental updates remains low. For the planar case, the randomized algorithm, as analyzed by R. Wenger, achieves an expected running time of O(n log h), where h is the number of hull vertices, making it output-sensitive and efficient when h is small relative to n. In higher dimensions, randomized ordering applied to Quickhull's beneath-beyond phase provides robustness against degenerate inputs while retaining the divide-and-conquer essence. Parallel Quickhull variants leverage multi-core architectures by exploiting the independence of recursive subproblems in the divide-and-conquer structure. A notable implementation framework, proposed by Stefan Näher and Daniel Schmitt, parallelizes Quickhull by representing subregions as jobs in a work queue: after partitioning the point set via an initial baseline and selecting a farthest point to create independent subsets (e.g., above and below the pivot line), these subsets are assigned to separate threads for concurrent processing. Synchronization occurs through mutex-protected merging of partial hulls at leaf nodes (small subsets solved sequentially) and during job completion, with primary threads handling deep recursions and secondary threads focusing on partitioning to balance load. This approach is particularly effective for large inputs, achieving speedups of up to 3.5 times on a quad-core machine for one million points distributed on or near a circle, though gains are more modest (around 1.5 times with two cores) for uniformly random distributions due to overhead in the root partition. Such parallelizations are well suited to shared-memory systems and have been extended in later work with vectorized single-core optimizations applied before multi-threading.
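The independence of subproblems that such frameworks exploit can be illustrated with a minimal two-way split in Python. This is a toy sketch of the idea only, not the Näher-Schmitt framework; note also that CPython threads do not speed up CPU-bound work under the GIL, so this shows structure rather than real speedup:

```python
from concurrent.futures import ThreadPoolExecutor

def cross(a, b, p):
    # > 0 when p is strictly left of the directed line a -> b
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def chain(points, a, b):
    # hull vertices strictly between a and b; points lie left of a -> b
    if not points:
        return []
    c = max(points, key=lambda p: cross(a, b, p))
    return (chain([p for p in points if cross(a, c, p) > 0], a, c)
            + [c]
            + chain([p for p in points if cross(c, b, p) > 0], c, b))

def parallel_quickhull(points):
    a, b = min(points), max(points)
    upper = [p for p in points if cross(a, b, p) > 0]
    lower = [p for p in points if cross(b, a, p) > 0]
    # The two chains never touch each other's data, so they can run
    # as independent jobs and be merged with no synchronization beyond join
    with ThreadPoolExecutor(max_workers=2) as pool:
        fut_upper = pool.submit(chain, upper, a, b)
        fut_lower = pool.submit(chain, lower, b, a)
        return [a] + fut_upper.result() + [b] + fut_lower.result()
```

A real multi-core implementation would keep splitting recursively, push each subregion onto a shared job queue, and solve small subsets sequentially, as in the framework described above.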

Applications and Comparisons

Applications

Quickhull finds widespread application in computational geometry for tasks requiring efficient computation of convex hulls, particularly with large point sets where bounding shapes facilitate proximity queries. In robotics and computer graphics, it is employed for collision detection by generating convex enclosures around objects, enabling rapid checks for intersections or overlaps during simulations and path planning. For instance, the algorithm supports the creation of bounding volumes that approximate complex shapes, reducing computational overhead in real-time environments. In geographic information systems (GIS) and computer-aided design (CAD), Quickhull computes boundaries for point clouds derived from sensors, laser scans, or design models, aiding the delineation of spatial extents and feature extraction. This is particularly useful for generating minimal enclosing polygons that represent clusters of geospatial data or CAD models. Within scientific computing, Quickhull serves as a preprocessing step for Delaunay triangulation, which is dual to the Voronoi diagram and essential for simulations in fields like finite element analysis and molecular modeling. By computing the hull of suitably lifted points, it ensures robust triangulation of scattered inputs, handling degeneracies through merged facets. The algorithm is implemented in the open-source Qhull library, released in the 1990s, which provides robust handling of floating-point errors and supports higher dimensions. Qhull is integrated into major software ecosystems, including MATLAB's convhulln function for N-dimensional hulls and SciPy's spatial module for Python-based geometric computations, while CGAL offers comparable convex hull functionality for C++. In niche areas such as multivariate data analysis, hull extremes computed via Quickhull aid outlier detection by identifying boundary points that deviate from the core data distribution.
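The SciPy integration mentioned above exposes Qhull through scipy.spatial.ConvexHull. A brief usage example (requires NumPy and SciPy):

```python
import numpy as np
from scipy.spatial import ConvexHull  # wraps the Qhull library

# A unit square plus one interior point
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.5]])
hull = ConvexHull(pts)

print(hull.vertices)  # indices of the extreme points; the interior point is absent
print(hull.volume)    # in 2D, .volume is the enclosed area
print(hull.area)      # in 2D, .area is the perimeter
```

The same interface works in higher dimensions, where the simplices attribute lists the facets of the hull.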

Comparison with Other Algorithms

Quickhull provides a simpler implementation than Graham scan, as it avoids the polar-angle sorting step central to the latter, making it easier to code and to extend to higher dimensions. However, Quickhull's worst-case time complexity is O(n²), which can occur on degenerate inputs such as points in convex position, whereas Graham scan guarantees O(n log n) performance regardless of input distribution. In practice, Quickhull performs better on average for point sets with sparse hulls (those with many interior points) because its divide-and-conquer strategy quickly discards irrelevant points, achieving expected O(n log n) time similar to Graham scan but with fewer operations on non-extreme points. Compared to Jarvis march (also known as gift wrapping), Quickhull scales more favorably for large inputs, with average O(n log n) time versus Jarvis march's O(nh), where h is the number of hull vertices; this makes Quickhull preferable when h is much smaller than n, as in typical random distributions. Jarvis march excels for very small h, such as h ≈ log n, where its simplicity and low constant factors can yield faster runtimes without the overhead of Quickhull's partitioning. Empirical tests show Quickhull outperforming Jarvis march by orders of magnitude on datasets with thousands of points and modest h, due to reduced per-step computation. Quickhull exhibits output-sensitive behavior in practice, running in roughly O(n log h) time for hull size h in low dimensions, but without a theoretical guarantee, unlike Chan's algorithm, which achieves the optimal O(n log h) bound through a combination of Jarvis march and Graham scan on subsets. Chan's approach ensures worst-case optimality in dimensions up to 3, making it suitable for adversarial inputs, while Quickhull's reliance on furthest-point selection can lead to unbalanced partitions and slowdowns. Despite this, Quickhull often matches or exceeds Chan's practical speed in two and three dimensions on random data, thanks to its minimal overhead and avoidance of multiple partial hull computations.
As a practical simplification of randomized incremental algorithms like Clarkson-Shor, Quickhull replaces random point insertion with deterministic furthest-point selection, which reduces the number of processed points (e.g., 153 versus 387 for 10,000 points in three dimensions) and memory usage (proportional to output facets rather than all facets ever created), leading to faster execution in low dimensions without needing full randomization for expected performance. Clarkson-Shor provides stronger probabilistic guarantees, with expected O(n log n) time in the plane, but incurs higher overhead from conflict-graph maintenance; Quickhull trades this for simplicity and speed on non-degenerate inputs. Quickhull is particularly recommended for applications with random or clustered point distributions in which many points lie inside the hull, leveraging its efficiency on interior-heavy sets; for scenarios requiring guaranteed worst-case or output-sensitive bounds, such as high-dimensional or adversarial data, alternatives like Chan's algorithm or Clarkson-Shor are preferable.
