
Heapsort

Heapsort is a comparison-based sorting algorithm that utilizes a binary heap to sort an array of elements in non-decreasing order. Invented by J. W. J. Williams in 1964, it builds upon the binary heap as a data structure to achieve efficient selection of maximum elements. The algorithm operates in two main phases: first, it constructs a max-heap from the input array, ensuring each parent node is greater than or equal to its children, which takes O(n) time with the standard bottom-up method. In the second phase, it repeatedly extracts the maximum element (the root) by swapping it with the last unsorted element, reducing the heap size by one, and then restores the heap property via a sift-down operation, performing this n times to place elements in sorted order at the end of the array. Overall, heapsort guarantees a time complexity of O(n log n) for building the heap and the extraction phase combined, making it consistent across worst-case, average-case, and best-case scenarios. As an in-place algorithm, heapsort requires only a constant amount of additional memory beyond the input array, unlike merge sort, which needs O(n) auxiliary space. It is not stable, meaning that equal elements may not retain their relative order, and while its worst-case performance surpasses quicksort's potential O(n²), it is generally slower in practice due to more data movements. An improvement by Robert W. Floyd in 1964 optimized the heap construction phase, making it the standard implementation used today.

Introduction

Overview

Heapsort is an in-place, comparison-based sorting algorithm that employs a binary max-heap to achieve a time complexity of O(n log n) in the worst, average, and best cases. It operates by first constructing a max-heap from the input array, which rearranges the elements to satisfy the heap property where each parent is greater than or equal to its children. The algorithm then proceeds iteratively: the maximum element (root of the heap) is extracted and placed at the end of the array, the heap size is reduced, and the heap property is restored by sifting down the new root. This two-phase process—heap construction followed by repeated extraction—ensures efficient sorting without requiring additional data structures beyond the original array. Invented by J. W. J. Williams, heapsort was first described in a 1964 publication in Communications of the ACM, where it was presented as Algorithm 232 alongside the introduction of the binary heap as a versatile data structure. Key advantages include its guaranteed O(n log n) performance, which avoids the variable worst-case behavior of algorithms like quicksort, and its in-place nature, utilizing only O(1) extra space. Although effective, heapsort is not a stable sorting algorithm, meaning that equal elements may not retain their relative order from the input. It is fully deterministic, producing the same output order for any given input without reliance on random pivots or other probabilistic elements.

History

Heapsort was invented by J. W. J. Williams in 1964 and published as "Algorithm 232: Heapsort" in the Communications of the ACM. This algorithm introduced the binary heap data structure, which Williams had developed as an efficient means for implementing priority queues, enabling in-place sorting with a guaranteed time complexity of O(n log n). The development of heapsort occurred shortly after C. A. R. Hoare's quicksort in 1961, addressing the need for a comparison-based method that avoided quicksort's potential O(n²) worst-case performance due to poor pivot choices. In the same year, Robert W. Floyd proposed an enhancement to the heap construction phase in "Algorithm 245: Treesort 3," also published in the Communications of the ACM. Floyd's bottom-up approach reduced the time complexity of building the initial heap from O(n log n) to O(n), making the overall algorithm more efficient in practice while preserving the O(n log n) worst-case bound. Subsequent refinements in the 1970s and beyond focused on optimizing the algorithm's constant factors and integration into broader systems. For instance, heapsort has been incorporated into hybrid sorting strategies, such as introsort, which combines quicksort with heapsort as a fallback to ensure O(n log n) worst-case performance; this approach was formalized by David R. Musser in 1997 and adopted in the C++ Standard Template Library's std::sort.

Binary Heaps

Definition and Properties

A binary heap is a specialized tree-based data structure consisting of a complete binary tree that satisfies the heap property. In a max-heap, the value stored at each node is greater than or equal to the values stored at its children; conversely, in a min-heap, the value at each node is less than or equal to those of its children. This property ensures that the root of the tree always holds the maximum (in a max-heap) or minimum (in a min-heap) element among all nodes. Binary heaps are conventionally implemented using an array for efficient space utilization, avoiding explicit pointers to child nodes. With 0-based indexing, the left child of a node at index i is located at 2i + 1, and the right child at 2i + 2; the parent of a node at index i (for i > 0) is at ⌊(i - 1)/2⌋. This representation implicitly captures the tree structure without gaps, as the complete binary tree fills levels from left to right. Key properties of a binary heap include its completeness, which guarantees that all levels except possibly the last are fully populated, and the last level has nodes as far left as possible, enabling the contiguous array storage. For n elements, the height of the tree is ⌊log₂ n⌋, providing logarithmic access times along root-to-leaf paths and efficient navigation. The standard heapsort employs a max-heap to achieve ascending order, distinguishing it from min-heap variants used in other applications. As a prerequisite for heapsort, the input array must first be reorganized to satisfy the heap property throughout.

Basic Operations

The basic operations of a binary heap enable efficient maintenance of the max-heap property, where each parent is greater than or equal to its children, supporting the procedures used in heapsort. These operations include heapify (sift-down), insertion, extract-max, and peeking at the root, all leveraging the complete binary tree representation in an array for O(1) access to parent and child positions. The heapify procedure restores the max-heap property in a subtree rooted at a given index i, assuming the subtrees of its children are already valid heaps. Starting from i, it identifies the largest among the node and its two children; if a child is larger, it swaps the node with that child and recursively (or iteratively) applies heapify to the affected subtree until no violation exists or the leaf level is reached. This restoration ensures the property propagates downward, with the process following the path of swapped nodes. Insertion adds a new element to the heap by first appending it to the end of the array, increasing the heap size, and then performing a sift-up operation. In sift-up, the new element is compared with its parent; if it is larger, they are swapped, and the process repeats upward toward the root until the heap property holds or the root is reached. This maintains the complete tree shape while adjusting the order. Extract-max removes and returns the root element (the maximum), which is then replaced by moving the last element in the heap to the root position and decreasing the heap size. Heapify is subsequently called on the new root to sift it down and restore the max-heap property. For a min-heap variant, extract-min follows an analogous process but prioritizes the smallest element. In the standard 0-based array indexing for a heap of size n, the parent of the node at index i (where i ≥ 1) is at ⌊(i-1)/2⌋, the left child at 2i + 1, and the right child at 2i + 2, enabling constant-time navigation without explicit pointers. These operations achieve O(log n) worst-case time for both insertion and extract-max, as each traverses at most the height of the balanced tree; peeking at the maximum (root access) is O(1). Such efficiencies underpin the initial construction phase of heapsort.
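
As a concrete sketch, the following Python functions implement these operations on a list used as a 0-based max-heap; the names (sift_up, sift_down, insert, extract_max) are illustrative rather than drawn from any standard library:
def sift_up(heap, i):
    # Move the element at index i up until its parent is >= it.
    while i > 0:
        parent = (i - 1) // 2
        if heap[i] > heap[parent]:
            heap[i], heap[parent] = heap[parent], heap[i]
            i = parent
        else:
            break

def sift_down(heap, n, i):
    # Restore the max-heap property in the subtree rooted at i (heap size n).
    while True:
        largest = i
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and heap[left] > heap[largest]:
            largest = left
        if right < n and heap[right] > heap[largest]:
            largest = right
        if largest == i:
            break
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest

def insert(heap, value):
    # Append at the end, then sift up: O(log n).
    heap.append(value)
    sift_up(heap, len(heap) - 1)

def extract_max(heap):
    # Assumes a non-empty heap: swap root with last element,
    # shrink, then sift down the new root: O(log n).
    heap[0], heap[-1] = heap[-1], heap[0]
    maximum = heap.pop()
    sift_down(heap, len(heap), 0)
    return maximum
Repeatedly calling extract_max on a heap built by successive insert calls yields elements in descending order, which is exactly the mechanism heapsort exploits in place.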

Core Algorithm

Heap Construction

The initial phase of heapsort involves transforming an unsorted array of n elements into a binary max-heap, where the parent of every node is greater than or equal to its children, to facilitate efficient extraction of the maximum element during sorting. For ascending order, a max-heap is chosen because it allows repeated removal of the largest element from the root while maintaining the heap property. This construction step is crucial, as it sets up the data structure for the subsequent sorting procedure. In the original formulation by Williams, heap construction proceeds in a top-down manner through repeated insertion: the heap begins empty, and each of the n elements is successively added to the end of the array and then "sifted up" by swapping with its parent until the heap property is satisfied. This method requires O(n \log n) time in the worst case, as each insertion can take up to O(\log n) time. Williams described this approach as part of the heapsort algorithm, emphasizing its use in building the heap incrementally from the input array. To improve efficiency, Floyd introduced a bottom-up method shortly after, which constructs the heap in linear time by starting from the middle of the array and working upwards. Specifically, the process begins at the last non-leaf node, indexed at \lfloor n/2 \rfloor - 1, and proceeds backward to the root at index 0; for each index i in this range, the heapify operation is applied to the subtree rooted at i, sifting down the element if necessary to restore the max-heap property in that subtree. This approach leverages the fact that subtrees below the non-leaf nodes are already trivially heaps (single nodes), allowing the heap property to propagate upward efficiently. Floyd's method achieves O(n) time even in the worst case, making it the standard for heapsort implementations. The linear time bound for Floyd's bottom-up construction arises from analyzing the total work in the heapify operations across all subtrees. A subtree of height h requires O(h) work to heapify, and at most \lceil n / 2^{h+1} \rceil nodes have height h; aggregating over all heights, the total cost is bounded by \sum_{h=0}^{\lfloor \log_2 n \rfloor} (n / 2^{h+1}) \cdot h < 2n. This sum reflects the decreasing number of nodes at greater heights, ensuring the overall effort remains proportional to n. The worst-case number of comparisons is 2n - o(n), confirming the tight linear bound.
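
For contrast with Floyd's bottom-up procedure given in the pseudocode section below, here is a minimal Python sketch of Williams' repeated-insertion construction (the function name is illustrative):
def build_max_heap_topdown(A):
    # Williams' method: treat A[0:end] as an existing heap and sift the
    # new element A[end] up toward the root; O(n log n) in the worst case.
    for end in range(1, len(A)):
        i = end
        while i > 0 and A[i] > A[(i - 1) // 2]:
            A[i], A[(i - 1) // 2] = A[(i - 1) // 2], A[i]
            i = (i - 1) // 2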

Sorting Procedure

Following the heap construction phase, which transforms the input array into a max-heap, the sorting procedure extracts elements iteratively to produce the sorted output in ascending order. This extraction begins with the root, which holds the maximum element, and proceeds in a loop from the end of the array toward the beginning. For each iteration i from n-1 down to 1, the algorithm swaps the root element at position 0 with the element at position i, effectively placing the current maximum in its final sorted position at the end of the subarray. The effective heap size is then decremented by 1 to exclude the newly sorted element, and a heapify operation is performed on the root to restore the max-heap property among the remaining unsorted elements. When the loop completes, only the single remaining element at the root is left, which is already in its correct position as the smallest element in the array, requiring no further action. Heapsort's in-place characteristic ensures that sorted elements progressively accumulate at the array's end, while the active heap contracts from the front, utilizing constant extra space beyond the input array. This mechanism guarantees correctness because each extraction identifies and fixes the next largest element in its permanent position, with the heapify step maintaining the invariant that the root always holds the maximum of the unsorted portion after restoration. A high-level pseudocode snippet for this extraction loop, assuming 0-based array indexing and a pre-constructed max-heap, is as follows:
for i ← length[A] - 1 downto 1
    swap A[0] ↔ A[i]
    heap_size ← heap_size - 1
    MAX-HEAPIFY(A, heap_size, 0)

Pseudocode

The standard heapsort algorithm is typically presented using pseudocode that assumes 0-based array indexing and constructs a max-heap to sort elements in ascending order. This formulation integrates the MAX-HEAPIFY operation, which sifts down an element to restore the max-heap property (where each parent is greater than or equal to its children), with the build phase to initially form the heap from the unsorted array, and the extraction phase to repeatedly remove the maximum element and sift down the root. The following pseudocode defines the key procedures in a formal, imperative style:
procedure MAX-HEAPIFY(A, n, i)
    // A is the array, n is the current heap size, i is the index to heapify
    // Assumes subtrees at left and right children are max-heaps
    largest ← i
    left ← 2 * i + 1
    right ← 2 * i + 2
    
    if left < n and A[left] > A[largest] then
        largest ← left
    if right < n and A[right] > A[largest] then
        largest ← right
    
    if largest ≠ i then
        exchange A[i] with A[largest]
        // Recursively sift down the affected subtree
        MAX-HEAPIFY(A, n, largest)

procedure BUILD-MAX-HEAP(A, n)
    // Build max-heap from entire array (initial heap size n = length[A])
    for i ← floor(n / 2) - 1 downto 0 do
        MAX-HEAPIFY(A, n, i)
    // Starts from last non-leaf node and ensures heap property bottom-up

procedure HEAPSORT(A)
    n ← length[A]
    BUILD-MAX-HEAP(A, n)
    // Now A[0..n-1] is a max-heap
    for i ← n - 1 downto 1 do
        exchange A[0] with A[i]  // Move maximum to end
        n ← n - 1  // Reduce heap size
        MAX-HEAPIFY(A, n, 0)  // Restore heap property on reduced heap
    // After loop, A is sorted in ascending order
This pseudocode follows the original heapsort design, where the sift-down (MAX-HEAPIFY) ensures the heap property by comparing and swapping with the larger child if needed, propagating downward as necessary. The build-heap loop iterates over non-leaf nodes to efficiently construct the initial max-heap in linear time. For sorting in descending order, an equivalent min-heap variant can be used by modifying the comparisons in MAX-HEAPIFY to select the smallest child (replacing > with <), resulting in a min-heap where each parent is less than or equal to its children, and extracting minima to the end of the array.
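
As a sketch of the descending-order variant just described, written in Python under the same 0-based conventions (the comparison direction is the only substantive change from the max-heap version; the function names are illustrative):
def min_heapify(A, n, i):
    # Identical to MAX-HEAPIFY except the smaller child wins.
    smallest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and A[left] < A[smallest]:
        smallest = left
    if right < n and A[right] < A[smallest]:
        smallest = right
    if smallest != i:
        A[i], A[smallest] = A[smallest], A[i]
        min_heapify(A, n, smallest)

def heapsort_descending(A):
    n = len(A)
    for i in range(n // 2 - 1, -1, -1):   # build a min-heap bottom-up
        min_heapify(A, n, i)
    for i in range(n - 1, 0, -1):         # extract minima to the end
        A[0], A[i] = A[i], A[0]
        min_heapify(A, i, 0)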

Implementations and Optimizations

Standard In-Place Implementation

The standard in-place implementation of heapsort manipulates the input array directly to build a max-heap and perform the sorting phase, using only a constant amount of extra space beyond the input array itself. This approach, originally described by Williams, relies on the array representation of a binary heap where for a 0-based index i, the left child is at 2i + 1 and the right child at 2i + 2, enabling efficient parent-child navigation without auxiliary data structures. The algorithm begins with heap construction, calling a heapify procedure iteratively from the bottom of the array upward to establish the max-heap, ensuring every subtree satisfies the condition that parent elements are greater than or equal to their children. The sorting phase then proceeds by repeatedly swapping the root (maximum element) with the last unsorted element, reducing the heap size by one, and re-heapifying the reduced heap rooted at index 0. This process continues until only one element remains in the heap, leaving the array sorted in ascending order. A Python implementation illustrates this in-place operation on an array A of length n:
def parent(i):
    return (i - 1) // 2

def left(i):
    return 2 * i + 1

def right(i):
    return 2 * i + 2

def max_heapify(A, n, i):
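    # Sift the element at index i down within heap A[0:n] until the max-heap property holds.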
    largest = i
    l = left(i)
    r = right(i)
    if l < n and A[l] > A[largest]:
        largest = l
    if r < n and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, n, largest)  # Recursive call; see note on iterative alternative below

def build_max_heap(A, n):
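    # Floyd's bottom-up construction: heapify each non-leaf node from last to root, O(n) total.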
    for i in range(n // 2 - 1, -1, -1):
        max_heapify(A, n, i)

def heapsort(A):
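    # Build the heap, then repeatedly move the maximum to the end and shrink the heap.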
    n = len(A)
    build_max_heap(A, n)
    for i in range(n - 1, 0, -1):
        A[0], A[i] = A[i], A[0]
        max_heapify(A, i, 0)
This implementation assumes 0-based indexing, common in languages like C, Java, and Python, and uses recursion in max_heapify for clarity; the recursion depth is bounded by the heap's height, O(log n), so the overhead is modest (roughly 20 frames for n = 10^6), but each call still carries stack-frame cost. An iterative version of max_heapify is often preferred in practice, using a loop to sift down the element by repeatedly swapping with the larger child until the heap property is restored at the leaves, avoiding recursion entirely and ensuring constant space. Edge cases are handled straightforwardly in this in-place framework: for an empty array (n=0), the procedure terminates immediately without action; for a single-element array (n=1), the heap construction and sorting loops do not execute, preserving the element as already sorted; and for an already sorted input, the algorithm still performs the full O(n log n) operations to build and extract from the heap, without early termination. The array-based representation gives reasonable cache behavior during heap construction, when many sift-downs act on short subtrees stored close together, though sift-downs from the root necessarily jump across widely separated indices. Common pitfalls include off-by-one errors in index calculations, such as incorrectly computing parent indices as i//2 instead of (i-1)//2 for 0-based arrays, or failing to check child indices against the current heap size n during heapify, which can lead to array bounds violations. The code above aligns with the high-level structure from standard references but adapts indices for practical 0-based languages.
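
A minimal sketch of that iterative variant, usable as a drop-in replacement for the recursive max_heapify above:
def max_heapify_iter(A, n, i):
    # Iteratively sift A[i] down in the heap A[0:n]; O(log n) time, O(1) space.
    while True:
        largest = i
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and A[left] > A[largest]:
            largest = left
        if right < n and A[right] > A[largest]:
            largest = right
        if largest == i:
            return
        A[i], A[largest] = A[largest], A[i]
        i = largest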

Memory-Efficient Variants

While the standard in-place heapsort requires only O(1) auxiliary space beyond the input array, certain variants introduce minor additional space usage to enhance practical performance, particularly in terms of cache locality or handling large datasets. Recursive implementations of the heapify (sift-down) operation, for instance, utilize O(log n) space due to the call-stack depth, which corresponds to the height of the heap; this can be mitigated by switching to an iterative heapify procedure to maintain O(1) space while preserving the O(n log n) time complexity. Such recursive variants are useful in languages with efficient tail-call optimization, but the space overhead is negligible for most practical array sizes. To address the poor cache locality inherent in binary heapsort—stemming from non-sequential memory accesses during sift-down operations—memory-tuned variants employ wider k-ary heaps (e.g., ternary heaps with k=3), which reduce the number of levels traversed and thus minimize memory traffic, albeit at the cost of more comparisons per level. This approach executes approximately 20-30% more instructions but achieves up to 81% speedup on large datasets by reducing cache misses by nearly half, without increasing auxiliary space beyond O(1). The trade-off favors scenarios where memory bandwidth is the bottleneck, such as on systems with limited cache sizes, yielding better constant factors in practice despite the same asymptotic complexity. Integrating Fibonacci heaps into the heapsort framework results in an overall O(n log n) sorting time that matches binary heapsort but suffers from higher practical constants due to the complex linking and consolidation operations in Fibonacci heaps. This variant is primarily of theoretical interest, as empirical benchmarks show it underperforming binary heapsort by factors of 2-5x on typical inputs, though it highlights potential for hybrid priority queues in specialized applications. Overall, these memory-efficient tweaks prioritize practical efficiency over strict minimality, often improving runtime by 50-80% in constrained environments at the expense of slightly elevated space usage.
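
To illustrate the k-ary idea, here is a hedged Python sketch of a d-ary sift-down (d = 4 is chosen arbitrarily; the function name is illustrative), showing the shorter descent paid for with more comparisons per level:
def dary_sift_down(A, n, i, d=4):
    # In a d-ary heap, the children of node i occupy indices d*i + 1 .. d*i + d.
    while True:
        largest = i
        first_child = d * i + 1
        for c in range(first_child, min(first_child + d, n)):
            if A[c] > A[largest]:
                largest = c
        if largest == i:
            return
        A[i], A[largest] = A[largest], A[i]
        i = largest
Because the tree height drops from log₂ n to log_d n, each extraction touches fewer cache lines along its descent, which is the effect the memory-tuned variants exploit.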

Variations

Bottom-Up Heap Construction

The bottom-up heap construction method for heapsort was introduced by Robert W. Floyd in his 1964 paper "Algorithm 245: Treesort 3," published shortly after J. W. J. Williams' original heapsort description earlier that year. This approach improves upon the initial heap-building phase by efficiently transforming an unsorted array into a max-heap in linear time. In Floyd's method, construction begins by identifying the non-leaf nodes in the implicit array representation of the heap. For an array A of size n (with indices from 0 to n-1), the last non-leaf node is at index \lfloor n/2 \rfloor - 1. The algorithm then iterates from this index down to 0, calling the heapify (or sift-down) procedure on the subtree rooted at each index i. The heapify procedure, which restores the max-heap property in the subtree rooted at i, swaps A[i] with its larger child if the heap property is violated, recursing downward as needed until the subtree satisfies the condition. This bottom-up strategy exploits the fact that subtrees rooted at leaves (indices \lfloor n/2 \rfloor to n-1) are trivially valid heaps of size 1, and progressively larger subtrees are heapified using already-processed smaller ones, minimizing redundant work. The implementation involves a single linear pass over the approximately n/2 non-leaf nodes, with each heapify call taking O(\log n) time in the worst case; however, the average cost per call is O(1) due to the small subtree heights encountered during the backward iteration. This yields an overall time complexity of O(n) for the construction phase. A sketch of the O(n) bound considers the maximum work performed at each tree height h (measured from the bottom, where h=0 for leaves). The number of nodes at height h is at most \lceil n / 2^{h+1} \rceil, and each such heapify requires at most O(h) swaps and comparisons. The total cost is thus bounded by \sum_{h=0}^{\lfloor \log_2 n \rfloor} h \cdot \frac{n}{2^{h+1}} \leq n \sum_{h=1}^{\infty} \frac{h}{2^{h+1}} = n < 2n, since the infinite series \sum_{h=1}^{\infty} h / 2^h = 2. Compared to Williams' original method, which constructed the heap via n-1 successive insertions into an initially single-element heap (requiring O(n \log n) time in the worst case due to repeated full-height sift-ups), Floyd's bottom-up technique avoids these unnecessary passes by directly leveraging the partial structure in the array.
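
The asymptotic gap can be observed empirically; the following Python sketch counts parent-child comparisons for both construction methods on an ascending input, which forces the worst case for repeated insertion (function names are illustrative):
def topdown_comparisons(A):
    # Williams-style repeated insertion, counting parent-child comparisons.
    count = 0
    for end in range(1, len(A)):
        i = end
        while i > 0:
            count += 1
            parent = (i - 1) // 2
            if A[i] > A[parent]:
                A[i], A[parent] = A[parent], A[i]
                i = parent
            else:
                break
    return count

def bottomup_comparisons(A):
    # Floyd-style construction, counting child-vs-largest comparisons.
    count, n = 0, len(A)
    for i in range(n // 2 - 1, -1, -1):
        j = i
        while True:
            largest, left, right = j, 2 * j + 1, 2 * j + 2
            if left < n:
                count += 1
                if A[left] > A[largest]:
                    largest = left
            if right < n:
                count += 1
                if A[right] > A[largest]:
                    largest = right
            if largest == j:
                break
            A[j], A[largest] = A[largest], A[j]
            j = largest
    return count

for n in (1 << 10, 1 << 14, 1 << 18):
    print(n, topdown_comparisons(list(range(n))), bottomup_comparisons(list(range(n))))
# Top-down counts grow like n log2 n on this input; bottom-up stays near 2n (linear).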

Advanced Variants

Smoothsort is a variant of heapsort developed by Edsger W. Dijkstra in 1981, which employs a specialized heap structure based on Leonardo numbers to achieve adaptive performance. Unlike standard heapsort, it maintains a sequence of heap segments of Leonardo-number sizes, allowing O(n) time in the best case when the input is nearly sorted, while preserving O(n log n) in the worst case. The algorithm constructs the heap by building these Leonardo heaps incrementally and extracts elements by sifting down through the structure, enabling smoother transitions between cases without sacrificing in-place sorting. Weak heapsort, introduced by Ronald D. Dutton in 1993, relaxes the strict parent-child ordering of binary heaps to a weaker condition in which each node needs to dominate only its right subtree (for max-heaps, every node is greater than or equal to all elements in its right subtree). This relaxation reduces the comparison count, leading to fewer comparisons and a more efficient implementation while still guaranteeing O(n log n) worst-case time. The sorting process builds a weak heap in linear time and extracts maxima by adjusting the structure with minimal violations, making it particularly suitable for in-place sorting with smaller practical constants than traditional heapsort. Ternary heapsort generalizes the binary heap to a ternary tree structure, where each node has up to three children, potentially improving cache locality due to fewer levels in the tree for the same number of elements. Developed as part of broader d-ary heap analyses, it adjusts the sift-up and sift-down operations to handle three children, which reduces the height from log₂ n to approximately log₃ n, offering theoretical benefits in comparison counts for certain inputs despite higher per-level costs. The build and extract phases follow the heapsort paradigm but propagate changes across three subtrees, making it a structural extension for environments where branch prediction or memory access patterns favor wider trees. Hybrid approaches integrate heapsort as a reliable fallback in other sorting algorithms to ensure worst-case guarantees. Introsort, proposed by David R. Musser in 1997, combines quicksort's average-case efficiency with heapsort's O(n log n) bound by monitoring recursion depth and switching to heapsort when it exceeds logarithmic limits, preventing quadratic degradation on adversarial inputs. This makes introsort the default in implementations such as the GNU C++ standard library's std::sort, where heapsort serves as the deterministic backstop without altering the core heapsort mechanics.
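
A condensed Python sketch of the introsort control flow described above, under stated assumptions (a Lomuto partition with last-element pivot, a depth limit of roughly 2·log₂ n, and a heapsort fallback confined to the current subrange); production implementations additionally switch to insertion sort on small ranges:
def introsort(A):
    # Depth limit ~ 2*log2(n); beyond it, quicksort is abandoned for heapsort.
    _introsort(A, 0, len(A), 2 * max(1, len(A)).bit_length())

def _introsort(A, lo, hi, depth):
    if hi - lo <= 1:
        return
    if depth == 0:
        _heapsort_range(A, lo, hi)  # fallback guarantees O(n log n)
        return
    p = _partition(A, lo, hi)
    _introsort(A, lo, p, depth - 1)
    _introsort(A, p + 1, hi, depth - 1)

def _partition(A, lo, hi):
    # Lomuto partition using the last element as pivot (illustrative choice).
    pivot = A[hi - 1]
    i = lo
    for j in range(lo, hi - 1):
        if A[j] <= pivot:
            A[i], A[j] = A[j], A[i]
            i += 1
    A[i], A[hi - 1] = A[hi - 1], A[i]
    return i

def _heapsort_range(A, lo, hi):
    # Heapsort applied to the slice A[lo:hi]; heap indices are offset by lo.
    n = hi - lo
    def sift(size, i):
        while True:
            largest, l, r = i, 2 * i + 1, 2 * i + 2
            if l < size and A[lo + l] > A[lo + largest]:
                largest = l
            if r < size and A[lo + r] > A[lo + largest]:
                largest = r
            if largest == i:
                return
            A[lo + i], A[lo + largest] = A[lo + largest], A[lo + i]
            i = largest
    for i in range(n // 2 - 1, -1, -1):
        sift(n, i)
    for end in range(n - 1, 0, -1):
        A[lo], A[lo + end] = A[lo + end], A[lo]
        sift(end, 0)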

Performance Characteristics

Time and Space Complexity

The heap construction phase, utilizing Floyd's bottom-up method, achieves a time complexity of O(n). This efficiency arises from sifting down each non-leaf node starting from the lowest levels, where the total number of comparisons is bounded by \sum_{h=0}^{\lfloor \log_2 n \rfloor} \lceil n / 2^{h+1} \rceil \cdot O(h) = O(n). The summation reflects the cost at each height h in the heap tree, with at most \lceil n / 2^{h+1} \rceil nodes at height h, each requiring O(h) comparisons during its sift-down, yielding a linear overall bound. The sorting phase consists of n - 1 successive extract-max operations on the heap, where each extraction involves swapping the root with the last element and sifting down, taking O(\log n) time per operation due to the heap's height. Thus, this phase requires O(n \log n) time in total. Combining both phases, heapsort exhibits an overall time complexity of O(n \log n) in the worst, average, and best cases, as the linear build cost is dominated by the linearithmic sorting cost. This matches the \Omega(n \log n) lower bound for comparison-based sorting algorithms in terms of decision-tree height. The algorithm performs approximately 2n \log n comparisons in the worst case, stemming from roughly two key comparisons per level traversed in each sift-down operation. Regarding space complexity, the standard in-place implementation of heapsort uses O(1) auxiliary space beyond the input array, as all operations modify the array directly without additional data structures. Recursive implementations of the sift-down may incur O(\log n) stack space due to the heap's height, though iterative variants reduce this to O(1) auxiliary space.
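
For completeness, the value of the series invoked in the linear build bound can be derived by exchanging the order of summation:
\sum_{h=1}^{\infty} \frac{h}{2^h} = \sum_{h=1}^{\infty} \sum_{k=1}^{h} \frac{1}{2^h} = \sum_{k=1}^{\infty} \sum_{h=k}^{\infty} \frac{1}{2^h} = \sum_{k=1}^{\infty} \frac{1}{2^{k-1}} = 2,
so the build-phase total satisfies \sum_{h \ge 0} \frac{n}{2^{h+1}} \cdot h = \frac{n}{2} \sum_{h \ge 1} \frac{h}{2^h} = n, comfortably within the 2n bound quoted above.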

Comparisons to Other Sorting Algorithms

Heapsort provides a guaranteed worst-case time complexity of O(n log n), making it more predictable than quicksort, which achieves O(n log n) on average but can degrade to O(n²) in the worst case due to poor pivot choices. However, quicksort often outperforms heapsort in practice because of its superior cache locality and fewer data movements, resulting in faster execution on modern hardware for typical inputs. In comparison to mergesort, both algorithms run in O(n log n) time in all cases, but heapsort is in-place with O(1) auxiliary space, whereas mergesort requires O(n) extra space for merging subarrays. Mergesort maintains stability, preserving the relative order of equal elements, which heapsort does not; this makes mergesort suitable for applications where order matters, such as sorting by multiple keys. Timsort, a hybrid of mergesort and insertion sort, matches heapsort's O(n log n) worst-case time but adapts to nearly sorted data for O(n) best-case performance, leveraging natural runs in the input for efficiency on real-world datasets. Unlike the non-adaptive heapsort, timsort is stable and uses O(n) space, often running significantly faster than heapsort on diverse inputs due to its optimizations.
| Algorithm  | Worst-Case Time | Auxiliary Space | Stable | Adaptive |
|------------|-----------------|-----------------|--------|----------|
| Quicksort  | O(n²)           | O(log n)        | No     | No       |
| Mergesort  | O(n log n)      | O(n)            | Yes    | No       |
| Heapsort   | O(n log n)      | O(1)            | No     | No       |
| Timsort    | O(n log n)      | O(n)            | Yes    | Yes      |
| Radix Sort | O(nk)           | O(n + k)        | Yes    | N/A      |
Heapsort is particularly useful in scenarios requiring a strict upper bound on running time without additional memory allocation, such as embedded systems, provided stability is not required.

Examples

Heap Building Process

The heap building process in heapsort transforms an unsorted array into a max-heap, where the parent of every node is greater than or equal to its children, enabling efficient extraction of the maximum element during sorting. Two primary methods exist for this construction: J. W. J. Williams' original top-down approach, which builds the heap incrementally by inserting elements one by one and sifting them up, and Robert W. Floyd's more efficient bottom-up method, which sifts down from the last non-leaf node to establish the heap property iteratively.

Williams' top-down method begins with a heap containing only the first element and proceeds by appending each subsequent element to the end of the heap, then sifting it up (repeatedly swapping with its parent) until the heap property is restored. Consider the example array (0-indexed): [4, 1, 3, 2, 16, 9, 10, 14, 8, 7]. Start with the heap [4]:
  • Insert 1 at index 1: its parent at index 0 holds 4 > 1, so no swap; heap: [4, 1].
  • Insert 3 at index 2: its parent at index 0 holds 4 > 3, no swap; heap: [4, 1, 3].
  • Insert 2 at index 3: its parent at index 1 holds 1 < 2, so swap to get [4, 2, 3, 1]; the new parent at index 0 holds 4 > 2, stop.
  • Insert 16 at index 4: its parent at index 1 holds 2 < 16, swap to get [4, 16, 3, 1, 2]; the parent at index 0 holds 4 < 16, swap again to get [16, 4, 3, 1, 2]; the root is reached, stop.
  • Insert 9 at index 5: its parent at index 2 holds 3 < 9, swap to get [16, 4, 9, 1, 2, 3]; the parent at index 0 holds 16 > 9, stop.
  • Insert 10 at index 6: its parent at index 2 holds 9 < 10, swap to get [16, 4, 10, 1, 2, 3, 9]; 16 > 10, stop.
  • Insert 14 at index 7: its parent at index 3 holds 1 < 14, swap to get [16, 4, 10, 14, 2, 3, 9, 1]; the parent at index 1 holds 4 < 14, swap to get [16, 14, 10, 4, 2, 3, 9, 1]; 16 > 14, stop.
  • Insert 8 at index 8: its parent at index 3 holds 4 < 8, swap to get [16, 14, 10, 8, 2, 3, 9, 1, 4]; the parent at index 1 holds 14 > 8, stop.
  • Insert 7 at index 9: its parent at index 4 holds 2 < 7, swap to get [16, 14, 10, 8, 7, 3, 9, 1, 4, 2]; the parent at index 1 holds 14 > 7, stop.
The resulting heap is [16, 14, 10, 8, 7, 3, 9, 1, 4, 2]. This method restores the heap property through a series of parent-child comparisons and swaps during each sift-up, but it performs O(n log n) operations in the worst case due to potential full-height traversals for each insertion. Note that this produces a valid max-heap different from the one built bottom-up on the same array.

In contrast, Floyd's bottom-up method constructs the heap in O(n) time by starting from the last non-leaf node (index ⌊n/2⌋ - 1 = 4 here, value 16) and calling the heapify procedure downward on each subtree, ensuring the heap property propagates efficiently without rebuilding from scratch. Using the same array [4, 1, 3, 2, 16, 9, 10, 14, 8, 7]:
  • At i = 4 (value 16): its only child at index 9 holds 7 < 16, no swap; array unchanged.
  • At i = 3 (value 2): children at indices 7 (14) and 8 (8); the larger child 14 exceeds 2, so swap indices 3 and 7; array: [4, 1, 3, 14, 16, 9, 10, 2, 8, 7]; heapify at index 7 (no children), done.
  • At i = 2 (value 3): children at 5 (9) and 6 (10); the larger child 10 exceeds 3, swap indices 2 and 6; array: [4, 1, 10, 14, 16, 9, 3, 2, 8, 7]; heapify at 6 (no children), done.
  • At i = 1 (value 1): children at 3 (14) and 4 (16); the larger child 16 exceeds 1, swap indices 1 and 4; array: [4, 16, 10, 14, 1, 9, 3, 2, 8, 7]; heapify at index 4 (value 1): its child at index 9 holds 7 > 1, swap; array: [4, 16, 10, 14, 7, 9, 3, 2, 8, 1]; no further children, done.
  • At i = 0 (value 4): children at 1 (16) and 2 (10); swap indices 0 and 1; array: [16, 4, 10, 14, 7, 9, 3, 2, 8, 1]; heapify at index 1 (value 4): children at 3 (14) and 4 (7), swap with 14; array: [16, 14, 10, 4, 7, 9, 3, 2, 8, 1]; heapify at index 3 (value 4): children at 7 (2) and 8 (8), swap with 8; array: [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]; no further children, done.
The final max-heap array is [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]. This approach leverages the fact that most subtrees are small, allowing the heap property to be established with fewer comparisons overall. The resulting heap structure (from the bottom-up method) can be visualized as a complete binary tree:
  • Level 0 (root): 16
  • Level 1: 14, 10
  • Level 2: 8, 7, 9, 3
  • Level 3: 2, 4, 1
Each non-leaf node satisfies the max-heap property with respect to its children, as verified through the heapify operations. While both methods produce valid max-heaps, Floyd's bottom-up construction is preferred in practice for its linear time complexity, making it integral to the efficiency of heapsort.
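
Both final arrays can be checked mechanically; a minimal Python sketch (the function name is illustrative):
def is_max_heap(A):
    # Every parent must be >= each of its children (0-based indexing).
    return all(A[(i - 1) // 2] >= A[i] for i in range(1, len(A)))

print(is_max_heap([16, 14, 10, 8, 7, 9, 3, 2, 4, 1]))   # bottom-up result: True
print(is_max_heap([16, 14, 10, 8, 7, 3, 9, 1, 4, 2]))   # top-down result: True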

Complete Sorting Illustration

To illustrate the complete heapsort process, consider starting from a fully constructed max-heap of size 10: [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]. The sorting phase proceeds by repeatedly extracting the root (maximum element), swapping it to the end of the current heap, reducing the heap size by one, and heapifying down from the root to restore the max-heap property in the remaining unsorted portion.

In the first extraction, swap the root 16 (index 0) with the last heap element 1 (index 9), yielding [1, 14, 10, 8, 7, 9, 3, 2, 4, 16]. Heapify down from index 0 (heap size now 9): the children are 14 and 10, so swap with the larger (14 at index 1) to get [14, 1, 10, 8, 7, 9, 3, 2, 4, 16]. At index 1, the children are 8 and 7, so swap with 8 (index 3) to get [14, 8, 10, 1, 7, 9, 3, 2, 4, 16]. At index 3, the children are 2 and 4, so swap with 4 (index 8) to get [14, 8, 10, 4, 7, 9, 3, 2, 1, 16]. This sift-down descends three levels, at each level selecting the largest among the node and its children. The resulting heap is [14, 8, 10, 4, 7, 9, 3, 2, 1] with fixed sorted suffix [16].

For the second extraction, swap the root 14 (index 0) with 1 (index 8), yielding [1, 8, 10, 4, 7, 9, 3, 2, 14, 16]. Heapify down (heap size 8): the children are 8 and 10, swap with 10 (index 2) to get [10, 8, 1, 4, 7, 9, 3, 2, 14, 16]. At index 2, the children are 9 and 3, swap with 9 (index 5) to get [10, 8, 9, 4, 7, 1, 3, 2, 14, 16]. This descent spans two levels. The heap is now [10, 8, 9, 4, 7, 1, 3, 2] with suffix [14, 16].

Third extraction: swap 10 (index 0) with 2 (index 7), yielding [2, 8, 9, 4, 7, 1, 3, 10, 14, 16]. Heapify down (heap size 7): the children are 8 and 9, swap with 9 (index 2) to get [9, 8, 2, 4, 7, 1, 3, 10, 14, 16]. At index 2, the children are 1 (index 5) and 3 (index 6), swap with 3 (index 6) to get [9, 8, 3, 4, 7, 1, 2, 10, 14, 16]. Again two levels. Heap: [9, 8, 3, 4, 7, 1, 2]; suffix: [10, 14, 16].

Fourth extraction: swap 9 (index 0) with 2 (index 6), yielding [2, 8, 3, 4, 7, 1, 9, 10, 14, 16]. Heapify down (heap size 6): the children are 8 and 3, swap with 8 (index 1) to get [8, 2, 3, 4, 7, 1, 9, 10, 14, 16]. At index 1, the children are 4 and 7, swap with 7 (index 4) to get [8, 7, 3, 4, 2, 1, 9, 10, 14, 16]. Two levels once more. Heap: [8, 7, 3, 4, 2, 1]; suffix: [9, 10, 14, 16].

The process continues similarly for the remaining elements, with each extraction fixing one more element in the growing sorted suffix while the active heap shrinks toward the front of the array. The final sorted array is [1, 2, 3, 4, 7, 8, 9, 10, 14, 16]. Across all heapify calls, the extraction phase performs a few dozen element comparisons in total, the exact count depending on the descent paths and on how many comparisons are charged per level.
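
The intermediate states above can be reproduced with a short instrumented loop; this sketch inlines the recursive max_heapify from the standard implementation section and prints one line per extraction:
def max_heapify(A, n, i):
    largest, l, r = i, 2 * i + 1, 2 * i + 2
    if l < n and A[l] > A[largest]:
        largest = l
    if r < n and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, n, largest)

A = [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]  # pre-built max-heap
for i in range(len(A) - 1, 0, -1):
    A[0], A[i] = A[i], A[0]      # move current maximum into the sorted suffix
    max_heapify(A, i, 0)         # restore the heap on A[0:i]
    print(f"heap={A[:i]} suffix={A[i:]}")
The first printed line, heap=[14, 8, 10, 4, 7, 9, 3, 2, 1] suffix=[16], matches the state after the first extraction described above.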

Applications and Considerations

Practical Uses

Heapsort's underlying binary heap data structure is widely utilized in priority queues, where efficient extraction of the maximum or minimum element is required. This makes it particularly valuable in graph algorithms such as Dijkstra's shortest path algorithm, which relies on a priority queue to select the node with the smallest tentative distance. Using a binary heap implementation, Dijkstra's algorithm achieves a time complexity of O((V + E) \log V), where V is the number of vertices and E is the number of edges, providing a good balance of efficiency for sparse graphs. In hybrid sorting algorithms, heapsort serves as a reliable fallback to ensure worst-case performance guarantees. For instance, introsort combines quicksort's average-case speed with heapsort's O(n \log n) worst-case bound, switching to heapsort when quicksort's recursion depth exceeds a threshold based on the input size. This approach is employed in the C++ Standard Template Library's std::sort function, enhancing robustness against degenerate inputs. Heapsort's predictable O(n \log n) time complexity in all cases, including the worst, renders it suitable for embedded systems where real-time scheduling demands bounded execution times. In such environments, the algorithm's consistent performance avoids the variability seen in algorithms like quicksort, ensuring timely task completion without exceeding deadlines. For example, enhancements to dynamic priority scheduling in real-time systems have incorporated heapsort to reduce sorting overhead while maintaining low latency. Modern programming libraries leverage heap operations derived from heapsort principles for partial sorting and priority management. In C++, the standard library provides std::push_heap to maintain the heap property after insertions and std::pop_heap to extract the maximum while preserving the heap, enabling efficient implementations of partial sorts or full heapsorts when combined with std::sort_heap. These functions operate in O(\log n) time for push and pop operations, supporting applications like event scheduling or k-largest-element selection.
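
An analogous illustration using Python's standard library, whose heapq module exposes the same array-backed heap operations (heapq implements a min-heap, so max-heap behavior is conventionally obtained by negating keys):
import heapq

events = [5, 1, 9, 3, 7]
heapq.heapify(events)              # Floyd-style O(n) bottom-up construction
heapq.heappush(events, 2)          # O(log n) sift-up insertion
print(heapq.heappop(events))       # 1: O(log n) extract-min
print(heapq.nsmallest(3, events))  # partial selection without a full sort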

Limitations and Notes

Heapsort is not a stable sorting algorithm, as the relative order of equal elements in the input may be altered in the output due to the swapping operations during heap maintenance and extraction. This instability arises particularly from the heapify procedure, where elements are percolated down or up without regard for their original positions, making heapsort unsuitable for scenarios requiring preservation of equal-key order, such as multi-key sorting in databases. The algorithm also suffers from poor cache performance compared to other sorting methods, stemming from its non-sequential memory access patterns in the heapify steps, which frequently jump to distant array indices rather than processing elements contiguously. Studies on cache behavior in heap operations confirm that standard heapsort incurs significantly more cache misses than sequential-access algorithms like mergesort, potentially degrading runtime on modern processors with hierarchical caches. Heapsort is not adaptive, meaning it maintains a worst-case, average-case, and best-case time complexity of Θ(n log n), without accelerating on partially or fully sorted inputs, unlike hybrid algorithms that incorporate insertion sort for small or ordered runs. Additionally, its inherently sequential structure, relying on global heap manipulations, makes it challenging to parallelize effectively without substantial modifications. To mitigate the instability, variants of heapsort have been proposed that incorporate additional bookkeeping (such as augmenting each key with its original index) to preserve element order, though these often sacrifice simplicity and efficiency.
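
A small Python demonstration of the instability, assuming a heapsort that compares records only by their numeric key (the helper names are illustrative):
def heapsort_by_key(A, key):
    def sift(n, i):
        # Sift down comparing only key(record), never the full record.
        while True:
            largest, l, r = i, 2 * i + 1, 2 * i + 2
            if l < n and key(A[l]) > key(A[largest]):
                largest = l
            if r < n and key(A[r]) > key(A[largest]):
                largest = r
            if largest == i:
                return
            A[i], A[largest] = A[largest], A[i]
            i = largest
    n = len(A)
    for i in range(n // 2 - 1, -1, -1):
        sift(n, i)
    for i in range(n - 1, 0, -1):
        A[0], A[i] = A[i], A[0]
        sift(i, 0)

records = [(1, 'a'), (1, 'b'), (1, 'c')]
heapsort_by_key(records, key=lambda rec: rec[0])
print(records)  # [(1, 'b'), (1, 'c'), (1, 'a')]: equal keys no longer in input order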
