
Binary heap

A binary heap is a specialized tree-based data structure: a complete binary tree satisfying the heap property. In a min-heap, the key of each node is less than or equal to the keys of its children, while in a max-heap, it is greater than or equal to them. This structure ensures that the root node always holds the minimum (or maximum) key, making it efficient for priority-queue operations without requiring a fully sorted order among sibling nodes. Binary heaps maintain two fundamental properties: the shape property, where the tree is complete (all levels are fully filled except possibly the last, which is filled from left to right), and the heap ordering property as described above. This allows heaps to be compactly represented in an array without explicit pointers, using level-order traversal: the root is at index 0 (or 1), with left and right children at indices 2i+1 and 2i+2 (or 2i and 2i+1 for 1-based indexing). The height of the tree is O(log n) for n elements, which bounds the running time of the key operations.

Common operations on a binary heap include insertion, which adds a new element at the end of the array and "percolates" it up to restore the heap property in O(log n) time; extract-min (or extract-max), which removes and returns the root while replacing it with the last element and percolating down in O(log n) time; and heapify, which builds a heap from an unsorted array in O(n) time by starting from the bottom (non-leaf nodes) and sifting down to restore the heap property. These operations enable efficient implementation of priority queues, where elements are dequeued based on priority rather than insertion order. Binary heaps are foundational in algorithms such as heapsort, which sorts an array in O(n log n) time by building a max-heap and repeatedly extracting the maximum, and they underpin many applications in operating systems (e.g., process scheduling), graph algorithms (e.g., Dijkstra's shortest path), and event-driven simulations requiring fast access to the highest-priority item.

Introduction

Definition

A binary heap is a complete binary tree in which every node satisfies the heap property: in a min-heap, the value stored in each node is less than or equal to the values in its children; in a max-heap, the value is greater than or equal to the values in its children. A complete binary tree is defined such that all levels are fully filled except possibly the last, which is filled from left to right, leaving no gaps between nodes on that level. This structure ensures the tree remains balanced and nearly full at all times. The binary heap data structure was introduced by J. W. J. Williams in 1964 as part of his heapsort algorithm. Visually, it takes the form of a binary tree with the root positioned at the top and child nodes arrayed below in successive levels, maintaining a compact layout with no interruptions in the filling of lower levels.

Properties

A binary heap is a complete binary tree with n elements, resulting in a height of \lfloor \log_2 n \rfloor, which provides a balanced structure where all levels are fully filled except possibly the last, filled from left to right. This balance distinguishes binary heaps from unbalanced trees, ensuring consistent depth across nodes. In a min-heap, the root node always contains the smallest element among all nodes, while in a max-heap, it contains the largest; this extremum location arises directly from the heap-order property, where each parent is smaller (min-heap) or larger (max-heap) than or equal to its children. Unlike binary search trees, binary heaps offer no guarantee of in-order traversal yielding sorted elements, as the ordering applies solely to the parent-child relationship without left-right distinctions. The heap-order property extends transitively: for any node, all its ancestors satisfy the ordering relative to it, meaning in a min-heap, every ancestor has a value less than or equal to the node's value, and vice versa for max-heaps; this follows inductively from the parent-child rule applied along the path from root to the node. This combination of properties ensures efficient priority queue behavior by positioning the extremum at the root for immediate access and leveraging the logarithmic height for adjustments that propagate along short paths to restore order after modifications, without requiring global rebalancing.
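The heap-order property can be verified mechanically on the array representation mentioned in the lead and detailed in the next section. The following Python sketch (the function name and 0-based layout are illustrative choices, not part of any standard library) checks the min-heap ordering by testing every parent against its children:
def is_min_heap(a):
    """Return True if list a, read as a 0-based level-order array,
    satisfies the min-heap property (every parent <= its children)."""
    n = len(a)
    for i in range(n):
        left, right = 2 * i + 1, 2 * i + 2
        if left < n and a[i] > a[left]:
            return False
        if right < n and a[i] > a[right]:
            return False
    return True

print(is_min_heap([1, 3, 2, 7, 5, 4]))  # True
print(is_min_heap([2, 1, 3]))           # False: parent 2 > child 1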

Representation

Array Storage

Binary heaps are typically implemented using a contiguous array to represent the underlying complete binary tree structure, enabling efficient storage without explicit pointers to child nodes. The nodes are mapped to array indices via level-order traversal, starting with the root at index 0, followed by its left child at index 1, right child at index 2, and subsequent nodes filling left to right across each level. This representation leverages the complete binary tree property, where all levels are fully filled except possibly the last, which is filled from left to right. This array-based storage offers significant advantages in terms of efficiency, as it requires no additional memory for pointers or references to absent children, achieving O(1) auxiliary space beyond the n elements themselves. Furthermore, the contiguous allocation improves performance by promoting spatial locality, where accessing parent and child nodes involves nearby memory addresses, reducing cache misses compared to pointer-based implementations. For example, consider a complete binary tree with 7 nodes: the array would store the root at index 0, its left child at index 1, its right child at index 2, the left subtree's children at indices 3 and 4, and the right subtree's children at indices 5 and 6, as [root, left, right, left-left, left-right, right-left, right-right]. When the last level is incomplete, the array size is exactly equal to the number of nodes present, avoiding any wasted space for missing children on the right side of the last level. Whether a node at index i has children can be determined using array bounds, ensuring operations stay within the valid range without needing explicit null checks. The following illustrates simple checks for left and right children in a 0-based array of size n:
function hasLeftChild(i, n):
    return 2 * i + 1 < n

function hasRightChild(i, n):
    return 2 * i + 2 < n
These checks rely solely on index arithmetic and the array length, maintaining the efficiency of the representation.

Index Calculations

Binary heaps can use either 0-based or 1-based indexing, with 1-based being common in some implementations to simplify parent calculations.

0-Based Indexing

In the array-based representation of a binary heap with 0-based indexing, navigation between parent and child nodes relies on arithmetic formulas that exploit the level-order (breadth-first) arrangement of the complete binary tree. For a node at index i in an array of size n, the left child is located at index 2i + 1 and the right child at index 2i + 2, assuming these indices are less than n. The parent of a node at index i > 0 is at index \lfloor (i - 1)/2 \rfloor, while the root at index 0 has no parent. These formulas arise from the systematic numbering of nodes in level order, where each level of the tree is filled from left to right before proceeding to the next. Starting with the root at index 0, the next level occupies indices 1 and 2, the level after that indices 3 through 6, and so on, with each subsequent level containing twice as many nodes as the previous. To reach a child's position from the parent at index i, the index is effectively doubled to shift to the doubled capacity of the next level; the additions of 1 and 2 then place the left and right children in their consecutive slots. Conversely, the parent formula reverses this by subtracting 1 and dividing by 2 (with flooring) to return to the parent's slot.
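These calculations reduce to a few lines of constant-time index arithmetic; the following Python helpers (names chosen here for illustration) implement the 0-based formulas:
def parent(i):
    return (i - 1) // 2      # valid only for i > 0; the root at index 0 has no parent

def left_child(i):
    return 2 * i + 1

def right_child(i):
    return 2 * i + 2

# In a 7-element heap the children of index 2 are indices 5 and 6, and their parent is index 2.
assert left_child(2) == 5 and right_child(2) == 6
assert parent(5) == 2 and parent(6) == 2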

1-Based Indexing

In 1-based indexing, the root is at index 1, the left child of node i is at index 2i, the right child at index 2i + 1, and the parent of node i > 1 is at index \lfloor i / 2 \rfloor. This avoids the offset in the parent calculation and is often preferred in theoretical presentations or in languages whose arrays start at index 1. The formulas derive similarly from level-order numbering starting at 1, doubling indices for children, and halving for parents. Edge cases must be handled to respect the complete tree's shape. The root has no parent, so the parent formula applies only for i > 0 (0-based) or i > 1 (1-based). For children, existence is verified against array bounds: if 2i + 1 \geq n (0-based) or 2i > n (1-based), no left child exists, implying no right child either (since the tree is complete and filled left-to-right). These checks prevent invalid access in partially filled heaps. The correctness of these formulas stems from the one-to-one correspondence between array indices and level-order positions in a complete binary tree, ensuring that parent-child links align precisely without gaps or overlaps. This mapping preserves the tree's structural integrity because the consecutive numbering assigns each node's children to a fixed pair of indices determined solely by the node's own index, maintaining the binary branching and completeness properties inherent to the heap. The choice between 0- and 1-based indexing is largely conventional, with both supporting O(1) navigation.

Construction

Building a Heap

The construction of a heap from an unsorted array involves transforming the array, which represents a complete binary tree, into a structure that satisfies the heap property. This process, known as building a heap, was originally described by J. W. J. Williams as part of the heapsort algorithm, where the array is initialized as a near-complete tree and the heap order is enforced progressively. The overall algorithm proceeds bottom-up: the array is treated as a complete binary tree with n elements, indexed from 0 to n-1. The last non-leaf node is located at index floor(n/2) - 1, using the standard calculations for parent-child relationships in array-based heaps. Starting from this node and iterating downward to the root (index 0), the heapify procedure is applied to each subtree rooted at index i. This ensures that when heapify is invoked on a node, its subtrees already satisfy the heap property, progressively building the full heap from the leaves upward. This bottom-up approach is efficient because it minimizes unnecessary work in the upper levels of the tree; by heapifying smaller subtrees first, it avoids propagating changes through the entire height of the tree for every element, unlike successive insertions, which would require bubbling each new element from a leaf toward the root. Consider the example of building a max-heap from the unsorted array A = [4, 1, 3, 2, 16, 9, 10], with n = 7 (indices 0 to 6). The last non-leaf node is at index floor(7/2) - 1 = 2. The initial array representation is:
Index:  0  1  2  3  4  5  6
Value:  4  1  3  2 16  9 10
  • Apply heapify at index 2 (value 3, children at indices 5 and 6: 9 and 10). Since 3 < 10, swap with 10; the subtree now satisfies the heap property. Array: [4, 1, 10, 2, 16, 9, 3].
  • Apply heapify at index 1 (value 1, children at 3 and 4: 2 and 16). Since 1 < 16, swap with 16; then at the new position (index 4, value 1, no children), done. Array: [4, 16, 10, 2, 1, 9, 3].
  • Apply heapify at index 0 (value 4, children at 1 and 2: 16 and 10). Since 4 < 16, swap with 16; then at index 1 (value 4, children at 3 and 4: 2 and 1), 4 > 2 and 4 > 1, done. Final array: [16, 4, 10, 2, 1, 9, 3].
This results in a max-heap in which each parent is greater than or equal to its children. The corresponding tree after construction is a complete binary tree with root 16, left subtree rooted at 4 (with leaves 2 and 1), and right subtree rooted at 10 (with leaves 9 and 3). In the context of priority queues, heaps provide an efficient underlying implementation, and the building process allows for rapid initialization from an initial set of elements, enabling subsequent operations like insertion and extraction in O(log n) time.
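The procedure just described can be sketched in Python over a 0-based list (function names are illustrative); running it on the example array reproduces the final layout shown above:
def max_heapify(a, i, n):
    """Sift a[i] down within a[0:n] until the max-heap property holds at i."""
    while True:
        largest, l, r = i, 2 * i + 1, 2 * i + 2
        if l < n and a[l] > a[largest]:
            largest = l
        if r < n and a[r] > a[largest]:
            largest = r
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

def build_max_heap(a):
    """Bottom-up construction: heapify from the last non-leaf node down to the root."""
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        max_heapify(a, i, n)

a = [4, 1, 3, 2, 16, 9, 10]
build_max_heap(a)
print(a)  # [16, 4, 10, 2, 1, 9, 3], matching the worked example above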

Heapify Procedure

The heapify procedure restores the binary heap property when it is violated at a specific node, under the assumption that the subtrees rooted at its children are already valid heaps, with the violation localized to the node itself. This operation, central to maintaining heap integrity, was introduced as part of the heapsort algorithm to efficiently adjust the structure after modifications. There are two primary forms: heapify-down, which propagates a violation downward, and heapify-up, which propagates it upward. In a min-heap, heapify-down (also known as sift-down) begins at a given index i representing the node with the violation. It identifies the smallest value among the node and its children; if a child is smaller, it swaps the node with that child and recurses on the child's position until no violation exists. For a max-heap, the process is analogous but selects the largest child and swaps if the parent is smaller. The child indices are calculated as left = 2i and right = 2i + 1 in 1-based indexing. The following pseudocode illustrates the min-heapify-down procedure, assuming a 1-based array A and a heap size denoted by heap_size[A]:
MIN-HEAPIFY(A, i)
    smallest = i
    l = 2 * i
    r = 2 * i + 1
    if l ≤ heap_size[A] and A[l] < A[smallest]
        smallest = l
    if r ≤ heap_size[A] and A[r] < A[smallest]
        smallest = r
    if smallest ≠ i
        exchange A[i] with A[smallest]
        MIN-HEAPIFY(A, smallest)
The heapify-up (also known as sift-up or bubble-up) operation, used after inserting a new element at the end of the heap, starts from that leaf position and compares the node with its parent. If the node violates the property (smaller than its parent in a min-heap), it swaps with the parent and recurses upward until the property holds or the root is reached. For a min-heap, the pseudocode is:
MIN-HEAPIFY-UP(A, i)
    while i > 1 and A[i] < A[PARENT(i)]
        exchange A[i] with A[PARENT(i)]
        i = PARENT(i)
where PARENT(i) = ⌊i/2⌋. Heapify-down is employed during minimum extraction (to restore the heap after root removal) and in the bottom-up construction of a heap from an array, while heapify-up is applied following insertions or decrease-key operations to maintain the ordering. To illustrate heapify-down in a min-heap, consider an array A = [4, 1, 3, 2] (1-based indices: position 1=4, 2=1, 3=3, 4=2), where the subtrees are valid min-heaps but the root violates the property (4 > children 1 and 3). The smallest child is at index 2 (value 1), so swap positions 1 and 2, yielding A = [1, 4, 3, 2]. Now at index 2 (value 4), the only child is at index 4 (value 2 < 4), so swap them, resulting in A = [1, 2, 3, 4]. No further children exist, so the heap property is restored. This process fixes the single violation through successive downward swaps.
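The two procedures can also be written compactly in Python over a 0-based list (function names are illustrative); applying the sift-down to the example array reproduces the same final order:
def sift_down(a, i, n):
    """Restore the min-heap property at index i, assuming both subtrees are valid min-heaps."""
    while True:
        smallest, l, r = i, 2 * i + 1, 2 * i + 2
        if l < n and a[l] < a[smallest]:
            smallest = l
        if r < n and a[r] < a[smallest]:
            smallest = r
        if smallest == i:
            return
        a[i], a[smallest] = a[smallest], a[i]
        i = smallest

def sift_up(a, i):
    """Bubble a[i] up toward the root until its parent is no larger."""
    while i > 0 and a[i] < a[(i - 1) // 2]:
        a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
        i = (i - 1) // 2

a = [4, 1, 3, 2]
sift_down(a, 0, len(a))
print(a)  # [1, 2, 3, 4]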

Operations

Insertion

Insertion into a binary heap involves adding a new element while preserving the complete binary tree structure and the heap property. The process begins by appending the new element to the end of the underlying array, which corresponds to the next available position in the last level of the tree, maintaining completeness. This initial placement ensures the tree remains complete before adjusting for the heap order. To restore the heap property, a heapify-up (or sift-up) operation is performed starting from the new element's position. In a min-heap, the new element is compared with its parent; if it is smaller than the parent, they are swapped, and the comparison continues with the new parent until the element reaches a position where it is not smaller than its parent or the root is reached. This bubbling up corrects any violation introduced by the insertion. For a max-heap, the comparison reverses, swapping if the new element is larger than its parent. The parent index is calculated as floor(i/2) under 1-based array indexing. The following pseudocode illustrates the insertion for a min-heap using 1-based indexing, where the array is A[1..n] and n is the current heap size:
procedure MIN-HEAP-INSERT(A, key)
    n = A.heap-size
    A.heap-size = n + 1
    i = n + 1
    A[i] = key
    while i > 1 and A[i] < A[floor(i/2)]
        swap A[i] and A[floor(i/2)]
        i = floor(i/2)
This procedure directly inserts the key and bubbles it up as needed. An equivalent approach, as described in standard implementations, temporarily places a sentinel value at the new position (negative infinity for a max-heap, or positive infinity for a min-heap) and then uses an increase-key (respectively decrease-key) operation to set the value and adjust it into place. Consider inserting the value 5 into a min-heap represented by the array [1, 3, 2, 4] (1-based: A[1]=1, A[2]=3, A[3]=2, A[4]=4), which forms the tree:
    1
   / \
  3   2
 /
4
Append 5 at position 5: [1, 3, 2, 4, 5]. The parent of position 5 is position 2 (value 3). Since 5 > 3, no swap is needed, and the heap property holds. The resulting array is [1, 3, 2, 4, 5], with the tree:
    1
   / \
  3   2
 / \
4   5
The insertion operation requires additional space proportional to the height of the tree in recursive implementations, though iterative versions need only O(1) extra space per insertion beyond the growth of the array itself.
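A Python sketch of this insertion over a 0-based list (the function name is illustrative; Python's standard heapq.heappush provides equivalent behavior) reproduces the worked example:
def heap_insert(a, key):
    """Append key to the 0-based min-heap a and sift it up to restore the heap property."""
    a.append(key)
    i = len(a) - 1
    while i > 0 and a[i] < a[(i - 1) // 2]:
        a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
        i = (i - 1) // 2

h = [1, 3, 2, 4]
heap_insert(h, 5)
print(h)  # [1, 3, 2, 4, 5]: 5 is not smaller than its parent 3, so no swaps occur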

Extraction

The extraction operation in a binary heap removes and returns the root element, which holds the minimum (in a min-heap) or maximum (in a max-heap) value according to the heap property. This is a fundamental operation for priority queues implemented via heaps, enabling efficient access to the extremal element. To perform extraction, the procedure first saves the root value in a temporary variable. The last element in the heap is then moved to the root position to maintain the complete tree shape, as removing the root directly would otherwise disrupt the structure. The heap size is decremented by one, effectively removing the original last slot. Finally, a heapify-down (or sift-down) operation is applied starting from the root to restore the heap property, bubbling the new root value down along the path of smallest (for min-heaps) or largest (for max-heaps) children until the ordering holds. In a min-heap, this sift-down compares the current node with its children and swaps with the smallest child if the heap property is violated, continuing recursively or iteratively down the tree. The following pseudocode illustrates the extract-min procedure for a min-heap represented in a 0-indexed array A of size n, assuming the heapify-down function (also known as MIN-HEAPIFY) is available from the heapify procedure:
MIN-HEAP-EXTRACT-MIN(A, n)
    if n < 1
        error "Heap underflow"
    min = A[0]
    A[0] = A[n-1]
    n = n - 1
    MIN-HEAPIFY(A, 0, n)
    return min
For example, consider a min-heap stored in a 1-indexed array [1, 2, 3, 4, 5] (with root at index 1), forming the tree:
    1
   / \
  2   3
 / \
4   5
Extracting the minimum (1) involves saving 1, moving 5 to the root, reducing size to 4, and heapifying down: 5 swaps with 2 (the smaller child), then the new position of 5 (now at index 2) swaps with 4 (its only child), resulting in [2, 4, 3, 5]. The tree now is:
    2
   / \
  4   3
 /
5
This restores the min-heap property. If the heap is empty (size 0), the extraction operation typically returns a null value or raises an underflow error to indicate no element is available.
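A Python sketch of extract-min over a 0-based list (the function name is illustrative; heapq.heappop offers the same behavior) reproduces the result of the worked example:
def extract_min(a):
    """Remove and return the root of the 0-based min-heap a, then sift the moved element down."""
    if not a:
        raise IndexError("heap underflow")
    root = a[0]
    last = a.pop()           # remove the last element; it replaces the root if any elements remain
    if a:
        a[0] = last
        i, n = 0, len(a)
        while True:
            smallest, l, r = i, 2 * i + 1, 2 * i + 2
            if l < n and a[l] < a[smallest]:
                smallest = l
            if r < n and a[r] < a[smallest]:
                smallest = r
            if smallest == i:
                break
            a[i], a[smallest] = a[smallest], a[i]
            i = smallest
    return root

h = [1, 2, 3, 4, 5]
print(extract_min(h), h)  # 1 [2, 4, 3, 5]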

Decrease-Key

The decrease-key operation in a binary min-heap updates the value of an element at a known index to a smaller value and restores the min-heap property by bubbling the element upward if necessary. This operation is essential in algorithms like Dijkstra's shortest-path algorithm, where tentative vertex distances may be updated repeatedly during execution, enabling efficient priority adjustments in the heap-based priority queue. The operation assumes the index i of the target node is provided in advance, typically through handles or direct array access in priority queue implementations, as locating the element otherwise would require a linear-time search. For a min-heap, after assigning the new smaller value to the array element at index i, the procedure checks whether it violates the heap property with its parent; if the new value is smaller than the parent's value, it repeatedly swaps the element with its parent until the property holds or the root is reached. This bubbling-up process, akin to the heapify-up procedure, ensures the structure remains a valid min-heap. The following pseudocode illustrates the decrease-key operation for a min-heap stored in a 1-based array A, where \text{PARENT}(i) = \lfloor i/2 \rfloor for i > 1:
HEAP-DECREASE-KEY(A, i, key)
    if key > A[i]
        error "new key is larger than current key"
    A[i] ← key
    while i > 1 and A[PARENT(i)] > A[i]
        swap A[i] ↔ A[PARENT(i)]
        i ← PARENT(i)
This runs in O(\log n) time, where n is the heap size, as the swaps traverse at most the tree height. Consider a min-heap array [3, 5, 4, 7] (indices starting at 0). Decreasing the key at index 0 from 3 to 1 yields [1, 5, 4, 7]; no swaps are needed since 1 remains smaller than its children 5 and 4. Now, decreasing the key at index 1 from 5 to 0 yields [1, 0, 4, 7]; this violates the property with parent 1, so swap positions 0 and 1 to get [0, 1, 4, 7], after which the root is reached and the heap property holds (0 ≤ 1, 0 ≤ 4, 1 ≤ 7).
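A Python sketch of decrease-key over a 0-based list, mirroring the worked example (the function name is illustrative):
def decrease_key(a, i, new_key):
    """Lower the key at index i of the 0-based min-heap a and bubble it up as needed."""
    if new_key > a[i]:
        raise ValueError("new key is larger than current key")
    a[i] = new_key
    while i > 0 and a[i] < a[(i - 1) // 2]:
        a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
        i = (i - 1) // 2

h = [3, 5, 4, 7]
decrease_key(h, 0, 1)   # no swaps needed
decrease_key(h, 1, 0)   # bubbles up past the root
print(h)                # [0, 1, 4, 7]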

Increase-Key

In a binary min-heap, the increase-key operation updates the value of a node at a known index i to a larger value k (where k \geq the current value), which may violate the min-heap property if the new value exceeds the smallest of the node's children. To restore the property, the procedure first assigns the new value to the array position and then performs a sift-down starting from i, swapping the node with its smallest child as needed until the subtree rooted at i satisfies the min-heap property. This contrasts with the decrease-key operation, which propagates upward, highlighting the asymmetry of key updates in heaps. The steps are as follows:
  • Assign A[i] = k.
  • If i has at least one child and A[i] is greater than the smallest child's value, swap A[i] with that child's value.
  • Repeat the comparison and swap from the child's index until no violation occurs or a leaf is reached.
This relies on the heapify-down (or sift-down) procedure to maintain the structure, ensuring the overall heap property holds after the update. Here is pseudocode for the operation (assuming 1-based indexing and the MIN-HEAPIFY sift-down defined earlier):
INCREASE-KEY(A, i, k)
    if k < A[i]
        error "new key is smaller"
    A[i] = k
    MIN-HEAPIFY(A, i)
The time complexity is O(\log n) in the worst case, matching the height of the heap. For example, consider a min-heap A = [1, 3, 2, 7, 5, 4] (1-based indices: positions 1 to 6). The tree has root 1, a left subtree under 3 (with children 7 and 5), and a right subtree under 2 (with child 4). Now increase the key at index 2 from 3 to 6. After the assignment A[2] = 6, the property is violated with respect to the children of index 2 (the smaller child is 5 at index 5). Swapping positions 2 and 5 gives A = [1, 5, 2, 7, 6, 4]. At the new position 5 (value 6) there are no children, so the process stops, and the resulting array [1, 5, 2, 7, 6, 4] satisfies the min-heap property. The increase-key operation is used less frequently than decrease-key in min-heap-based priority queues, since many applications (e.g., shortest-path algorithms) only reduce priorities. It is useful, however, when priorities may rise, such as in dynamic scheduling systems, or in the symmetric max-heap setting.

Search

Searching for an arbitrary element in a binary heap, or determining whether it exists, requires a linear scan through the entire array representation, giving O(n) worst-case time for n elements. This inefficiency arises because the heap property enforces only a partial order (each parent is smaller, in a min-heap, or larger, in a max-heap, than its children) and does not maintain a global sorted order that would enable faster traversal, unlike structures such as binary search trees. Binary heaps are typically implemented using contiguous array storage, which makes the scan straightforward but offers no indexing advantage for arbitrary lookups. Accessing the minimum or maximum element is the exception, achievable in O(1) time by examining the root. For any other target value the position is unpredictable, so all elements may need to be examined to confirm presence or absence. The standard implementation of this search is a simple loop over the array:
function search(heap, target):
    for i from 0 to heap.size - 1:
        if heap[i] == target:
            return i  // index of the element, or true if checking existence
    return -1  // not found, or false
This pseudocode directly translates to O(n) runtime, as each iteration performs a constant-time comparison, and up to n comparisons may be needed. Heaps are designed primarily as efficient priority queues, optimizing operations like insertion and minimum/maximum extraction to O(log n) time while deprioritizing search in favor of these core functionalities. The O(n) search cost reflects this trade-off, as maintaining additional ordering for logarithmic searches would complicate the structure and increase overhead for heap-specific tasks. When applications demand frequent arbitrary searches alongside priority operations, hybrid approaches or separate data structures, such as combining a heap with a hash table that maps elements to their positions, are recommended to mitigate the linear-time bottleneck; a sketch of this idea appears at the end of this section. For illustration, consider searching for the value 4 in a min-heap array [1, 3, 2, 5, 4], where indices start at 0. The scan checks 1 (no match), 3 (no), 2 (no), 5 (no), and finally 4 (match at index 4), requiring examination of all five elements despite the early positions being irrelevant.
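One way to realize the hybrid approach, sketched in Python below under the assumption that all keys are distinct, pairs the heap array with a dictionary from keys to their current indices (class and method names are illustrative; only insertion and membership testing are shown):
class IndexedMinHeap:
    """Minimal sketch of a min-heap paired with a key-to-index dictionary."""
    def __init__(self):
        self.a = []          # 0-based heap array
        self.pos = {}        # key -> current index in self.a

    def _swap(self, i, j):
        self.a[i], self.a[j] = self.a[j], self.a[i]
        self.pos[self.a[i]], self.pos[self.a[j]] = i, j

    def push(self, key):
        self.a.append(key)
        i = len(self.a) - 1
        self.pos[key] = i
        while i > 0 and self.a[i] < self.a[(i - 1) // 2]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def contains(self, key):
        return key in self.pos   # O(1) expected time instead of an O(n) scan

h = IndexedMinHeap()
for x in [1, 3, 2, 5, 4]:
    h.push(x)
print(h.contains(4), h.contains(9))  # True False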

Arbitrary Deletion

Arbitrary deletion in a binary heap refers to the removal of a specific element that is not necessarily the root, requiring first locating its position within the heap structure. To perform this operation, the index i of the target element must be identified, typically through a linear scan of the heap array, which takes O(n) time in the worst case where n is the number of elements. Once the index i is found, if i = 0 (the root), the deletion proceeds as in standard extraction by replacing it with the last element and heapifying down. For a non-root element, the procedure involves replacing the element at index i with the last element in the heap (at index n-1), reducing the heap size by one (n \leftarrow n-1), and then restoring the heap property starting from position i. This restoration checks whether the replacement element at i violates the heap order with its parent (requiring heapify-up) or with its children (requiring heapify-down); at most one of the two directions needs adjustment, determined by how the replacement compares with the parent and children at position i. The following pseudocode outlines the process for a min-heap, assuming the index i is known:
function arbitraryDelete(i):
    if i == 0:
        return extractMin()  // Standard root extraction
    else:
        heap[i] = heap[n-1]
        n = n - 1
        if i == n:                         // The deleted element was the last one; nothing to restore
            return
        if heap[i] < heap[parent(i)]:      // Violates order with the parent
            heapifyUp(i)
        else:
            heapifyDown(i)
This approach leverages the existing heapify procedures to maintain the heap-ordered property efficiently after replacement. The primary challenge in arbitrary deletion arises from the O(n) cost of locating the element, combined with the O(\log n) time for the subsequent heapify operations, making the overall process inefficient for frequent use without additional indexing. In practice, arbitrary deletion is rare in binary heaps because it requires knowing the element's index (or "handle"); to avoid the search overhead, applications often maintain external handles to elements or opt for heap variants such as Fibonacci heaps that support efficient deletion given a handle. For example, consider a min-heap represented as the array [1, 2, 3, 4, 5] with indices 0 to 4. To delete the element 2 at index 1, move the last element 5 into index 1 and reduce the size to 4, giving [1, 5, 3, 4]. Since 5 > 1 (its parent), no heapify-up is needed, but 5 violates the min-heap property with its only child 4 (at index 3), so heapify-down swaps 5 and 4, yielding [1, 4, 3, 5], which satisfies the heap order.
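A Python sketch of the same procedure over a 0-based list, assuming the index is already known (the function name is illustrative), reproduces the example above:
def delete_at(a, i):
    """Remove the element at index i of the 0-based min-heap a (the index is assumed known)."""
    last = a.pop()
    n = len(a)
    if i == n:                       # the deleted slot was the last element; nothing to restore
        return
    a[i] = last
    # Bubble up while the replacement is smaller than its parent, then sift down;
    # at most one of the two loops does any work.
    while i > 0 and a[i] < a[(i - 1) // 2]:
        a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
        i = (i - 1) // 2
    while True:
        smallest, l, r = i, 2 * i + 1, 2 * i + 2
        if l < n and a[l] < a[smallest]:
            smallest = l
        if r < n and a[r] < a[smallest]:
            smallest = r
        if smallest == i:
            break
        a[i], a[smallest] = a[smallest], a[i]
        i = smallest

h = [1, 2, 3, 4, 5]
delete_at(h, 1)   # delete the element 2
print(h)          # [1, 4, 3, 5]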

Variants

Min-Heap and Max-Heap

A binary min-heap is a complete binary tree in which each node satisfies the min-heap property: the value of every parent node is less than or equal to the values of its children, ensuring that the minimum element resides at the root. This structure is particularly useful for implementing priority queues where the smallest-priority item must be extracted first, such as in scheduling algorithms or Dijkstra's shortest-path algorithm. In contrast, a max-heap adheres to the max-heap property, where the value of every parent node is greater than or equal to the values of its children, placing the maximum element at the root. Max-heaps are commonly employed in applications requiring the largest element to be prioritized, such as in the descending-order phase of heapsort or certain selection algorithms. Both min-heaps and max-heaps share the same underlying complete binary tree structure and support identical operations with equivalent time complexities, differing only in the ordering invariant. The choice between them depends on the specific application needs, such as whether minimum or maximum extraction is required. One approach to simulate a max-heap using a min-heap implementation involves negating all key values before insertion and negating the result upon extraction (see the sketch after the examples below); however, this method does not apply cleanly to non-numeric keys or when negation alters semantics. For illustration, consider the representations of a small heap with elements {1, 2, 3, 4}:
  • Min-heap array: [1, 2, 3, 4]
        1
       / \
      2   3
     /
    4
    Here, the root 1 is the minimum, and parents are ≤ children.
  • Max-heap array: [4, 3, 2, 1] (same shape, reversed ordering)
        4
       / \
      3   2
     /
    1
    The root 4 is the maximum, with parents ≥ children.
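Python's heapq module provides only a min-heap, so the negation approach mentioned above can be sketched as follows for numeric keys (a minimal illustration, not a general solution for arbitrary key types):
import heapq

# Simulate a max-heap with the min-heap-only heapq module by negating numeric keys.
items = [4, 1, 3, 2]
max_heap = [-x for x in items]
heapq.heapify(max_heap)

largest = -heapq.heappop(max_heap)   # negate again on the way out
print(largest)                        # 4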
These side-by-side examples, together with the sketch above, highlight the ordering differences while preserving the structural completeness of the binary tree.

The binary heap, introduced in 1964 as part of the heapsort algorithm, serves as the foundational structure for several advanced priority queue implementations that address its limitations in operations like merging and decrease-key. Subsequent developments built upon this base, leading to more sophisticated heaps designed for specific algorithmic needs, such as efficient unions in graph processing. Binomial heaps, proposed in 1978, extend the heap concept by organizing elements into a forest of binomial trees, each of order corresponding to a power of two, enabling a merge (union) operation in O(log n) time that is particularly useful for mergeable priority queues. This structure supports the same O(log n) worst-case bounds for insert, extract-min, and decrease-key as binary heaps but excels in scenarios requiring frequent merges, such as certain dynamic algorithms. Fibonacci heaps, developed in 1984 and formalized in 1987, further improve upon binomial heaps by allowing amortized O(1) time for decrease-key operations through a lazy scheme of linking trees with bounded degrees, making them ideal for graph algorithms like Dijkstra's where decrease-key is frequent. However, their practical performance suffers from higher constant factors compared to binary heaps due to complex pointer manipulations and potentially larger memory usage. Pairing heaps, introduced in 1986, offer a simpler alternative to Fibonacci heaps by using a self-adjusting tree structure that pairs subtrees during merges, achieving practical efficiency in many applications while being easier to implement than Fibonacci heaps. Experimental evaluations show pairing heaps often outperform Fibonacci heaps in real-world scenarios due to reduced overhead, though their worst-case bounds remain partially unresolved.

In comparison to balanced binary search trees (BSTs), which support O(log n) time for insert, delete, and search operations, binary heaps offer O(log n) insert and extract-min with lower constant factors but degrade to O(n) for search, making heaps preferable for tasks where minimum extraction dominates over lookups. Balanced BSTs, like AVL or red-black trees, are better suited when search efficiency is critical, as heaps do not maintain sorted order. Binary heaps are typically chosen for their simplicity and low constant factors in basic applications, while binomial and Fibonacci heaps are favored for algorithms needing efficient unions or decrease-key operations, such as network optimization, and pairing heaps strike a balance between performance and ease of implementation in practice.

Operation Runtimes

The time complexities of standard operations on a binary heap are determined by the structure's height, which is O(\log n) for n elements, and by the need for linear scans in certain cases. Insertion involves adding an element at the end of the array representation and percolating it up to restore the heap property, taking O(\log n) time in the worst case. Extraction of the minimum (or maximum) element requires removing the root, replacing it with the last element, and percolating down, also O(\log n) in the worst case. The decrease-key operation, which reduces a node's key and percolates it up, requires O(\log n) time in the worst case, as does the symmetric increase-key operation, which percolates down. Searching for an arbitrary element necessitates a linear scan through the heap, resulting in O(n) time complexity. Arbitrary deletion combines searching for the element (O(n)) with removal and heap restoration (O(\log n)), yielding an overall worst-case time of O(n). These bounds hold for both average and worst-case scenarios in most operations due to the balanced nature of the complete binary tree underlying the heap, though search and arbitrary deletion are dominated by the linear component regardless of input distribution.
Operation | Average | Worst case | Notes
Insertion | O(\log n) | O(\log n) | Traverses at most the height; amortized same as worst.
Extraction | O(\log n) | O(\log n) | Heapify-down from the root; find-min/max alone is O(1).
Decrease-key | O(\log n) | O(\log n) | Assumes the position is known; percolates up.
Increase-key | O(\log n) | O(\log n) | Assumes the position is known; percolates down.
Search | O(n) | O(n) | Linear scan; no ordering available for faster lookup.
Arbitrary deletion | O(n) | O(n) | O(n) search plus O(\log n) removal and fix-up.

Building Complexity

The construction of a binary heap from an unsorted array of n elements, known as the build-heap operation, achieves an overall time complexity of O(n). This efficiency arises from Floyd's bottom-up heapification algorithm, which starts by treating the array as a complete binary tree and then restores the heap property by calling the heapify procedure on each non-leaf node, beginning from the last non-leaf node and proceeding towards the root. A naive approach of performing n successive insertions into an initially empty heap would require O(n \log n) time, as each insertion takes O(\log n) in the worst case; in contrast, the bottom-up method leverages the fact that most elements are already near their final positions in the complete tree representation, leading to significantly less sifting on average. To prove the O(n) bound, consider the total cost of heapify operations across all nodes, where the cost for a node is proportional to its height h (the maximum number of swaps needed when sifting it down). In a complete binary tree of n nodes, the number of nodes with height h is at most \lceil n / 2^{h+1} \rceil for h = 0, 1, \dots, \lfloor \log_2 n \rfloor. The total cost is thus bounded by \sum_{h=0}^{\lfloor \log_2 n \rfloor} \lceil n / 2^{h+1} \rceil \cdot O(h) = O\!\left(n \sum_{h=0}^{\infty} \frac{h}{2^{h+1}}\right) = O(n), since the infinite series \sum_{h=0}^{\infty} h / 2^h = 2 converges to a constant. This shows that the costs sum to linear time: the roughly n/2 leaves (height 0) contribute no sifting work at all, and higher levels have exponentially fewer nodes despite their O(\log n) individual cost. Equivalently, the total work is \sum_{v} O(\mathrm{height}(v)) over all nodes v, and the sum of subtree heights in a complete binary tree is O(n). In practice, for large n, the build-heap operation exhibits strictly linear performance, often using fewer than 2n comparisons, confirming its advantage over the O(n \log n) insertion-based alternative.
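As a rough empirical check of the linear bound and the comparison count mentioned above, the following Python sketch (the function name and counting scheme are illustrative) builds a min-heap bottom-up while counting comparisons; on random inputs the count stays below 2 per element:
import random

def build_min_heap_counting(a):
    """Bottom-up build of a 0-based min-heap that also counts key comparisons."""
    comparisons = 0
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        j = i
        while True:
            smallest, l, r = j, 2 * j + 1, 2 * j + 2
            if l < n:
                comparisons += 1
                if a[l] < a[smallest]:
                    smallest = l
            if r < n:
                comparisons += 1
                if a[r] < a[smallest]:
                    smallest = r
            if smallest == j:
                break
            a[j], a[smallest] = a[smallest], a[j]
            j = smallest
    return comparisons

n = 100000
data = [random.random() for _ in range(n)]
print(build_min_heap_counting(data) / n)   # typically below 2 comparisons per element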

References

  1. Heaps: Definition, Properties, and Key Operations of a Binary Heap
  2. Heaps
  3. Binary and Binomial Heaps (PDF), cs.Princeton
  4. CS202 Lecture Notes: Binary Heaps / Priority Queues, UTK-EECS
  5. Algorithm 232: Heapsort, Communications of the ACM
  6. CS 161 Lecture 4: Heaps (PDF)
  7. Chapter 5 Heaps, B16 Algorithms and Data Structures 1, Notes
  8. 6.1 Heaps, CLRS Solutions
  9. Priority Queues, Algorithms, 4th Edition
  10. Performance Effects of Heap Structure (PDF), Outer Loop Consulting
  11. 7.16. Array Implementation for Complete Binary Trees, OpenDSA
  12. Heaps, CS 10: Problem Solving, Winter 2025
  13. Binary Heaps, Stanford University
  14. Building Heap from Array, GeeksforGeeks
  15. Introduction to Algorithms, Third Edition, ACM Digital Library
  16. Chapter 6 (PDF)
  17. CSci 231 Homework 5 Solutions (PDF)
  18. CPS 130 Homework 9 Solutions (PDF)
  19. Dijkstra's Single Source Shortest Path Algorithm (PDF)
  20. Binary and Binomial Heaps (PDF), cs.Princeton
  21. Algorithms (PDF), cs.Princeton
  22. Priority Queues, UTK-EECS
  23. CMSC 420 Data Structures (PDF), UMD Computer Science
  24. Lecture 17: Priority Queues and Binary Heaps, Stanford University
  25. 12.17. Heaps and Priority Queues, OpenDSA
  26. Binary Heaps, d-ary Heaps, Binomial Heaps, Fibonacci Heaps (PDF)
  27. 6.006 Lecture 04: Heaps and Heap Sort (PDF), MIT OpenCourseWare
  28. Heaps and Heapsort (CLRS 6), CPS 230, Fall 2001 (PDF)
  29. Game Trees, Quad Trees and Heaps (PDF)
  30. A Data Structure for Manipulating Priority Queues (PDF)
  31. Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms (PDF)
  32. Fibonacci Heaps (PDF)
  33. The Pairing Heap: A New Form of Self-Adjusting Heap (PDF)
  34. Priority Queues and Heaps, CS@Cornell
  35. Priority Queues, Lecture 5a: Binary and k-ary Heaps (PDF)
  36. Chapter 6: Heap and Its Application (PDF)