Binary search tree

A binary search tree (BST) is a rooted binary tree in which each node contains a key, and for any given node, all keys in its left subtree are less than the node's key, while all keys in its right subtree are greater than the node's key. This ordering property, known as the binary search tree property, is maintained recursively across all subtrees and enables efficient implementation of dynamic sets. The structure of a BST consists of nodes, each typically holding a key (and optionally an associated value), along with pointers to left and right children; in its standard form the tree contains no duplicate keys and forms a hierarchy rooted at a single top node. Basic operations such as search, insertion, and deletion leverage this property by traversing the tree from the root, comparing keys at each step to decide whether to move left or right, resulting in an average cost of O(log n) for a balanced tree with n nodes. In the worst case, however, degeneration into a linear chain can lead to O(n) performance, prompting the development of self-balancing variants like AVL trees to ensure logarithmic bounds. BSTs were developed around 1960 by several researchers, including A. D. Booth, A. J. T. Colin, P. F. Windley, and T. N. Hibbard, to address efficient storage and retrieval of data in early computer systems. They serve as a foundational structure for symbol tables, sorted collections, and algorithms requiring ordered access, such as in-order traversal, which yields keys in ascending order in linear time. Despite potential imbalance arising from insertion order, BSTs remain widely used due to their simplicity and adaptability in applications like databases and search engines.

Fundamentals

Definition and Structure

A binary search tree (BST), also known as an ordered binary tree, is a binary tree data structure in which each node has at most two children, referred to as the left child and the right child. The defining characteristic of a BST is its ordering property: for any node, all keys stored in the left subtree are strictly less than the node's key, and all keys in the right subtree are strictly greater; this property holds recursively for every subtree. This organization enables efficient searching by allowing comparisons to prune half of the search space at each step, assuming the keys are comparable. Each node in a BST typically stores a key, which serves as the primary value for ordering and searching, along with pointers to its left and right child nodes; these pointers may be null if no child exists. Nodes may also include optional fields such as a reference to the parent node for bidirectional traversal, or an associated value containing additional data linked to the key, as in implementations where keys map to values. The root of the tree serves as the starting point for all operations, and the absence of duplicate keys is often assumed to maintain the strict ordering, though variations allow duplicates under specific conventions. The ordering property distinguishes a BST from a general binary tree, where no such constraint applies, and it ensures that an in-order traversal of the tree yields the keys in sorted ascending order. This property is preserved through all valid insertions and deletions, forming the invariant that underpins the data structure's utility. To illustrate, consider the following textual representation of a sample BST with integer keys inserted in the sequence 8, 3, 10, 1, 6, 14, 13:
         8
       /   \
      3     10
     / \      \
    1   6      14
               /
              13
In this example, the root is 8; the left subtree rooted at 3 contains 1 and 6 (both less than 8 and properly ordered relative to 3); the right subtree rooted at 10 contains 14 (greater than both 8 and 10), with 13 as its left child (greater than 10 but less than 14).
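As a concrete illustration of this layout, the following C++ sketch (with illustrative names rather than any canonical definition) declares a node holding an integer key, an optional associated value, child pointers, and an optional parent pointer:
#include <string>

struct BSTNode {
    int key;                     // ordering key
    std::string value;           // optional associated value (for key-to-value mappings)
    BSTNode* left = nullptr;     // left child: all keys in this subtree are smaller
    BSTNode* right = nullptr;    // right child: all keys in this subtree are larger
    BSTNode* parent = nullptr;   // optional parent pointer for bidirectional traversal
};

// An empty tree is represented simply by a null root pointer:
// BSTNode* root = nullptr;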

Properties and Invariants

The binary search tree (BST) relies on a core ordering invariant to maintain its structure and functionality: for every node n, all keys in the left subtree of n are strictly less than the key of n, and all keys in the right subtree are strictly greater than the key of n. This invariant applies recursively to every subtree within the tree. Violation of the invariant at any node disrupts the overall ordering, preventing the structure from functioning as a BST and compromising its ability to support efficient ordered operations. BSTs typically enforce uniqueness of keys to preserve the strict inequalities in the ordering invariant, disallowing duplicate keys in standard implementations. When duplicates must be handled, such as in applications requiring multiplicity, a common convention is to insert them into the right subtree of the existing node with the equal key, treating them as greater than or equal while maintaining the recursive ordering in subtrees. The height of a BST, defined as the length of the longest path from the root to a leaf, directly influences the cost of operations, as searches and insertions follow paths proportional to this height. In a balanced BST, where nodes are roughly evenly distributed, the expected height is O(\log n) for n nodes, resulting in average path lengths to leaves of O(\log n) under random insertion orders. However, in the worst case, insertions in sorted order produce a skewed, degenerate tree with height O(n), leading to path lengths that degrade performance to linear time. This ordering invariant enables the efficient search behavior of BSTs: elements can be located or verified by following a directed path of comparisons at each node, avoiding a full traversal of the tree and achieving logarithmic average-case performance in balanced scenarios. The invariant also ensures that an in-order traversal of the tree yields the keys in non-decreasing sorted order.
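Because the invariant is recursive, it can be checked by passing down the range of keys each subtree is allowed to contain. The following sketch assumes the illustrative BSTNode layout from the previous section and uses wide sentinel bounds; it is a validity check, not part of the standard BST operations:
#include <climits>

// Returns true if the subtree rooted at node satisfies the BST ordering invariant,
// with every key strictly between lo and hi (bounds inherited from the ancestors).
bool isBST(const BSTNode* node, long lo = LONG_MIN, long hi = LONG_MAX) {
    if (node == nullptr) return true;                 // an empty subtree is trivially valid
    if (node->key <= lo || node->key >= hi) return false;
    return isBST(node->left, lo, node->key) &&        // left keys must be < node->key
           isBST(node->right, node->key, hi);         // right keys must be > node->key
}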

Core Operations

Searching

Searching in a binary search tree (BST) involves locating a node with a given key by exploiting the tree's ordering property, where all keys in the left subtree are less than the node's key and all keys in the right subtree are greater. The process begins at the root and repeatedly compares the target key with the current node's key: if the target is smaller, traverse to the left child; if larger, to the right child; if equal, the node is found. This continues until the target is located or a null (empty) child pointer is reached, indicating the key is not present. The time complexity of this search operation is O(h), where h is the height of the tree, as each step descends one level along a path from the root toward a leaf. In a balanced BST with n nodes, h = O(\log n), yielding an average-case cost of approximately 1.39 \log_2 n comparisons for random insertions; in the worst case of a degenerate (skewed) tree, however, h = O(n), resulting in linear time O(n). Finding the in-order successor of a node (the node with the smallest key greater than the given node's key) relies on the BST's structure. If the node has a right subtree, the successor is the leftmost (minimum) node in that subtree; otherwise, it is the nearest ancestor for which the given node lies in the left subtree. The in-order predecessor, the node with the largest key smaller than the given node's key, is found symmetrically: if a left subtree exists, it is the rightmost (maximum) node there; otherwise, it is the nearest ancestor for which the given node lies in the right subtree. Both operations run in O(h) time. An iterative implementation of the search avoids recursion by using a loop to traverse from the root:
ITERATIVE-TREE-SEARCH(x, k)
    y ← x
    while y ≠ NIL and k ≠ y.key
        if k < y.key
            y ← y.left
        else
            y ← y.right
    return y
Here, x is the root, k is the target key, and the function returns the node containing k or null if absent.
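The successor rule described above can be sketched in the same style, assuming the illustrative BSTNode layout with parent pointers from the Definition and Structure section; the predecessor is obtained by mirroring left and right:
BSTNode* treeMinimum(BSTNode* x) {
    while (x->left != nullptr) x = x->left;     // the leftmost node holds the minimum key
    return x;
}

BSTNode* inorderSuccessor(BSTNode* x) {
    if (x->right != nullptr)
        return treeMinimum(x->right);           // smallest key in the right subtree
    BSTNode* y = x->parent;
    while (y != nullptr && x == y->right) {     // climb until x sits in a left subtree
        x = y;
        y = y->parent;
    }
    return y;                                   // nullptr if x holds the maximum key
}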

Insertion

Insertion into a binary search tree (BST) involves locating the appropriate position for a new key by following the same search path as in a standard BST search operation, which traverses from the root downward based on comparisons with node keys. Once the search reaches a null pointer (indicating a missing child), a new node containing the key is created and attached as a leaf to that position: to the left of the parent if the key is less than the parent's key, or to the right if greater. This process preserves the BST property that all keys in the left subtree are less than the node's key and all in the right subtree are greater. When encountering duplicate keys during insertion, common policies include ignoring the duplicate to maintain unique keys in the tree or treating it as greater than the existing key and placing it in the right subtree to allow multiples. Ignoring duplicates is typical for set-like structures, while allowing them via the right subtree placement supports multiset implementations without violating the ordering invariant. The time complexity of BST insertion is O(h), where h is the height of the tree, matching the complexity of searching since it follows an identical path length; in a balanced BST, this is O(\log n) on average, but it degrades to O(n) in the worst case for skewed trees. A recursive implementation of insertion is straightforward and commonly used. The following C++-style code illustrates the procedure, assuming unique keys and throwing an exception on duplicates:
BSTNode* insert(BSTNode* node, Key key, Value value) {
    if (node == nullptr) {
        return new BSTNode(key, value, nullptr, nullptr);  // Base case: create new leaf node
    }
    if (key < node->key) {
        node->left = insert(node->left, key, value);  // Recurse left
    } else if (key > node->key) {
        node->right = insert(node->right, key, value);  // Recurse right
    } else {
        throw DuplicateKeyException("Key already exists");  // Handle duplicate
    }
    return node;  // Return modified subtree root
}
To insert into the tree, call root = insert(root, key, value). This recursive approach naturally handles the traversal and attachment without explicit parent tracking.
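For comparison, an iterative variant can perform the same descent with a loop, tracking the prospective parent explicitly. The sketch below reuses the BSTNode constructor and duplicate policy of the recursive version above and is illustrative rather than canonical:
BSTNode* insertIterative(BSTNode* root, Key key, Value value) {
    BSTNode* node = new BSTNode(key, value, nullptr, nullptr);
    if (root == nullptr) {
        return node;                       // empty tree: the new node becomes the root
    }
    BSTNode* current = root;
    while (true) {
        if (key < current->key) {
            if (current->left == nullptr) { current->left = node; break; }
            current = current->left;       // descend left
        } else if (key > current->key) {
            if (current->right == nullptr) { current->right = node; break; }
            current = current->right;      // descend right
        } else {
            delete node;                   // duplicate: reject, matching the recursive version
            throw DuplicateKeyException("Key already exists");
        }
    }
    return root;                           // the root changes only when the tree was empty
}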

Deletion

Deletion in a binary search tree requires removing a specified key while maintaining the BST invariants, and the operation is classified into three distinct cases based on the number of children the target node possesses. The algorithm first locates the node containing the key to delete, typically via a search that takes O(h) time, where h is the tree height. In the simplest case, if the node is a leaf (zero children), removal involves severing the link from its parent to the node, effectively freeing the node without affecting the rest of the structure. This preserves the BST properties since no subtree ordering is disrupted. When the node has exactly one child, the child subtree replaces the node directly by updating the parent's pointer to point to the child, bypassing the node entirely. This maintains the search order, as the child's position inherits the correct relational placement relative to the rest of the tree. The most complex case occurs when the node has two children. Here, the node's key is replaced by that of its in-order successor (the smallest key in the right subtree), ensuring the new key satisfies the BST ordering with respect to both subtrees. The in-order successor is located by descending leftward from the right child until reaching a node with no left child, a process that takes O(h) time in the worst case. After copying the successor's key to the target node, the successor itself is deleted; as it has at most one child (its right subtree, if any), this reduces to the one-child or leaf case, with the parent's pointer updated accordingly. The overall time complexity for deletion remains O(h), dominated by the search for the node and the successor, as the replacement and subsequent deletion do not exceed this bound. A standard recursive implementation handles these cases as follows, assuming a tree node structure with key, left, and right fields and no explicit parent pointers (updates propagate via return values):
BSTNode* minValueNode(BSTNode* node);  // forward declaration; defined below

BSTNode* deleteNode(BSTNode* root, int key) {
    if (root == nullptr) {
        return root;
    }
    if (key < root->key) {
        root->left = deleteNode(root->left, key);
    } else if (key > root->key) {
        root->right = deleteNode(root->right, key);
    } else {
        // Node with only one child or no child
        if (root->left == nullptr) {
            BSTNode* temp = root->right;
            delete root;
            return temp;
        } else if (root->right == nullptr) {
            BSTNode* temp = root->left;
            delete root;
            return temp;
        }
        // Node with two children: get the in-order successor
        BSTNode* temp = minValueNode(root->right);
        root->key = temp->key;
        // Delete the in-order successor from the right subtree
        root->right = deleteNode(root->right, temp->key);
    }
    return root;
}

BSTNode* minValueNode(BSTNode* node) {
    BSTNode* current = node;
    while (current->left != nullptr) {
        current = current->left;
    }
    return current;
}
This code deallocates the removed node with delete and handles root updates by returning the (possibly new) root of each subtree, so a call takes the form root = deleteNode(root, key).

Traversals

In-order Traversal

In-order traversal of a binary search tree (BST) follows the left-root-right order, recursively processing the left subtree, visiting the current node, and then processing the right subtree. This method leverages the BST property (all keys in the left subtree are less than the node's key, and all keys in the right subtree are greater) to visit nodes in non-decreasing (ascending) order of their keys, effectively producing a sorted sequence of the tree's elements. The recursive algorithm for in-order traversal, known as INORDER-TREE-WALK, is defined as follows:
INORDER-TREE-WALK(x)
    if x ≠ NIL
        INORDER-TREE-WALK(x.left)
        print x.key
        INORDER-TREE-WALK(x.right)
This procedure ensures correctness for sorted output because the left subtree traversal prints all smaller keys first, followed by the current node, and then all larger keys from the right subtree, maintaining the BST ordering throughout. The time complexity is \Theta(n), where n is the number of nodes, as each node is visited exactly once. Space complexity is O(h), where h is the tree height, due to the maximum recursion depth along the leftmost or rightmost path. An iterative implementation simulates the recursion using an explicit stack to avoid recursive calls and potential stack overflow in deep trees. The algorithm initializes an empty stack and a current pointer to the root; it repeatedly pushes the current node onto the stack and moves left until reaching a null pointer, then pops the top node, visits it, and moves to its right child, repeating until the stack is empty and the current pointer is null. This approach also achieves O(n) time and O(h) space complexity, with the stack holding at most h nodes. In-order traversal serves as the foundation for generating sorted lists from BSTs, enabling applications like tree sort, where the traversal output directly yields an ordered sequence without additional comparisons.
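A sketch of the stack-based iterative traversal just described, assuming the illustrative BSTNode layout from the Definition and Structure section and printing keys as the visit step:
#include <iostream>
#include <stack>

void inorderIterative(const BSTNode* root) {
    std::stack<const BSTNode*> st;
    const BSTNode* current = root;
    while (current != nullptr || !st.empty()) {
        while (current != nullptr) {          // descend left, saving the path on the stack
            st.push(current);
            current = current->left;
        }
        current = st.top();
        st.pop();
        std::cout << current->key << ' ';     // visit: keys emerge in ascending order
        current = current->right;             // then traverse the right subtree
    }
}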

Pre-order and Post-order Traversals

In binary search trees, pre-order traversal is a depth-first method that visits the root node first, followed by the entire left subtree, and then the entire right subtree. This order facilitates operations that require replicating the tree's structure, such as copying the BST, where the root is created before its subtrees. The recursive implementation of pre-order traversal follows this pseudocode:
PreOrder(T)
    if T is not null
        visit T's data
        PreOrder(T's left child)
        PreOrder(T's right child)
    else
        handle empty subtree
Iterative versions employ a stack to mimic the recursive call stack, pushing the root initially, then popping nodes to visit them and pushing their right and left children (right first, then left) so that the left child is processed first, preserving the pre-order sequence. Post-order traversal, also depth-first, processes the left subtree completely, then the right subtree, and finally the root node. This is particularly useful in BSTs for tasks requiring subtree completion before the root, such as deleting the tree (by freeing child nodes first) or computing aggregate properties like subtree sizes (by summing child sizes before adding the root). The recursive post-order pseudocode is:
PostOrder(T)
    if T is not null
        PostOrder(T's left child)
        PostOrder(T's right child)
        visit T's data
    else
        handle empty subtree
Iterative post-order implementations typically use a stack (or two stacks) to track nodes and ensure children are processed before parents, often by reversing the pre-order visit order or using additional state to defer root visits. Both pre-order and post-order traversals exhibit O(n) time complexity, where n is the number of nodes, since each node is visited exactly once. Their space complexity is O(h), where h is the tree height, arising from the maximum depth of the recursion or explicit stack in iterative forms.
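To make the stack-based pre-order approach concrete, the following sketch (again assuming the illustrative BSTNode layout) pushes the right child before the left so that the left subtree is processed first:
#include <iostream>
#include <stack>

void preorderIterative(const BSTNode* root) {
    if (root == nullptr) return;
    std::stack<const BSTNode*> st;
    st.push(root);
    while (!st.empty()) {
        const BSTNode* node = st.top();
        st.pop();
        std::cout << node->key << ' ';                      // visit the node before its subtrees
        if (node->right != nullptr) st.push(node->right);   // pushed first, popped last
        if (node->left != nullptr) st.push(node->left);     // popped next: left before right
    }
}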

Balancing Techniques

Height-Balanced Trees

Height-balanced binary search trees, such as AVL trees, address the potential degradation of standard binary search trees into linear structures by enforcing a strict balance condition on subtree heights. An AVL tree, named after its inventors Georgy Adelson-Velsky and Evgenii Landis, is a binary search tree in which the balance factor of every node, defined as the height of the left subtree minus the height of the right subtree, satisfies |h_L - h_R| \leq 1. This condition ensures that no subtree is excessively taller than its sibling, preventing the tree from becoming skewed. To maintain this balance after an insertion or deletion, which may temporarily violate the condition along the path from the modified node to the root, AVL trees employ local restructuring operations known as rotations. These rotations adjust the tree structure by changing parent-child relationships without altering the in-order traversal order, thereby preserving the binary search tree property. Rotations are performed only when a node's balance factor becomes +2 or -2, and they restore balance in constant time. There are four primary rotation cases, corresponding to the position of the imbalance relative to the subtrees: left-left (LL), right-right (RR), left-right (LR), and right-left (RL). The LL rotation handles the case where the left subtree of an unbalanced node z is taller (balance factor +2) and the left subtree of z's left child y is also taller or equal. In this scenario, y becomes the new root of the subtree, z becomes y's right child, and y's original right child becomes z's left child. This single rotation can be pictured as restructuring the chain formed by z, y, and y's left descendant. Pseudocode for an LL rotation on z is as follows:
function LL_rotation(z):
    y = z.left
    z.left = y.right
    y.right = z
    # Update heights of z and y
    update_height(z)
    update_height(y)
    return y  # New subtree root
The RR rotation is symmetric to LL, applied when the right subtree of z is taller (balance factor -2) and z's right child y has a taller or equal right subtree; here, y becomes the new root, z becomes y's left child, and y's original left child becomes z's right child. For the LR case, where z's left subtree is taller but z's left child y has a taller right subtree, a double rotation is needed: first perform a left rotation on y to promote its right child x into z's left-child position, then apply a right (LL-style) rotation on z, leaving x as the new subtree root. This resolves the zigzag imbalance. Similarly, the RL case, where z's right subtree is taller but z's right child y has a taller left subtree, requires a right rotation on y followed by a left (RR-style) rotation on z. Pseudocode for an LR rotation on z is:
function LR_rotation(z):
    y = z.left
    x = y.right
    y.right = x.left
    z.left = x.right
    x.left = y
    x.right = z
    # Update heights of y, z, and x
    update_height(y)
    update_height(z)
    update_height(x)
    return x  # New subtree root
The RL rotation mirrors this process. These rotations are selected based on the balance factors of the involved nodes during the traversal upward from the modification site after the operation. The balance condition guarantees that the height h of an AVL tree with n nodes satisfies h = O(\log n), specifically h \leq 1.44 \log_2 (n+2) - 0.328, ensuring that search, insertion, and deletion operations run in O(\log n) time in the worst case. This bound arises because the minimum number of nodes in an AVL tree of height h follows a Fibonacci-like recurrence, so an AVL tree is always at least as full as the corresponding Fibonacci tree.
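To show how the four cases are selected in practice, the following self-contained sketch (illustrative names, not a complete AVL implementation) stores subtree heights in the nodes and applies the rebalancing decision at a node z while returning up the modification path:
#include <algorithm>

struct AVLNode {
    int key;
    int height = 1;                 // height of the subtree rooted at this node
    AVLNode* left = nullptr;
    AVLNode* right = nullptr;
    explicit AVLNode(int k) : key(k) {}
};

int height(const AVLNode* n) { return n ? n->height : 0; }

void updateHeight(AVLNode* n) {
    n->height = 1 + std::max(height(n->left), height(n->right));
}

int balanceFactor(const AVLNode* n) {
    return height(n->left) - height(n->right);   // +2 means left-heavy, -2 right-heavy
}

AVLNode* rotateRight(AVLNode* z) {               // LL rotation: promote z->left
    AVLNode* y = z->left;
    z->left = y->right;
    y->right = z;
    updateHeight(z);
    updateHeight(y);
    return y;
}

AVLNode* rotateLeft(AVLNode* z) {                // RR rotation: promote z->right
    AVLNode* y = z->right;
    z->right = y->left;
    y->left = z;
    updateHeight(z);
    updateHeight(y);
    return y;
}

// Rebalancing decision applied at z after an insertion or deletion below it.
AVLNode* rebalance(AVLNode* z) {
    updateHeight(z);
    int bf = balanceFactor(z);
    if (bf > 1) {                                // left-heavy
        if (balanceFactor(z->left) < 0)          // LR case: left child is right-heavy
            z->left = rotateLeft(z->left);
        return rotateRight(z);                   // LL rotation on z
    }
    if (bf < -1) {                               // right-heavy
        if (balanceFactor(z->right) > 0)         // RL case: right child is left-heavy
            z->right = rotateRight(z->right);
        return rotateLeft(z);                    // RR rotation on z
    }
    return z;                                    // already within the balance bound
}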

Weight-Balanced and Other Variants

Weight-balanced trees, also known as trees of bounded balance, maintain balance by ensuring that the sizes of the subtrees at each node differ by at most a controlled factor defined by a parameter \alpha, where 0 < \alpha \leq 1/2. Specifically, for a tree T_n with n nodes, the root-balance P(T_n) = \frac{\ell + 1}{n + 1}, where \ell is the number of nodes in the left subtree, must satisfy \alpha \leq P(T_n) \leq 1 - \alpha, and this condition holds recursively for all subtrees. This weight-based criterion, where the weight of a subtree is its size plus one, guarantees a height of O(\log n), specifically at most \log_{1/(1-\alpha)} (n+1), ensuring O(\log n) time for search, insertion, and deletion operations. During insertion or deletion, after updating subtree sizes, the tree is rebalanced using rotations or split-rotations if the balance condition is violated at any node, with an expected number of such transformations bounded by \frac{1}{1 - 2\alpha}. Red-black trees achieve balance by coloring each node red or black, enforcing structural properties that limit the height to 2 \log_2 (n + 1). The key properties are: every node is either red or black; the root is black; all leaves (NIL nodes) are black; no two red nodes are adjacent; and every path from a node to its descendant leaves contains the same number of black nodes. These ensure no root-to-leaf path is more than twice as long as another, yielding O(\log n) worst-case time for operations. Insertion begins by adding a red node as in a standard BST, then resolves violations via color flips (recoloring a parent and its children) or rotations (single or double) to maintain the properties, propagating upward if needed. Deletion similarly involves finding and removing the node, using a temporary replacement if necessary, followed by color adjustments and rotations to restore balance, often pushing "holes" downward with color flips. Other variants include splay trees, which forgo explicit balancing in favor of self-adjustment for amortized efficiency. In a splay tree, after a node is accessed (for search, insert, or delete), it is "splayed" to the root through a sequence of rotations: zig (a single rotation with the parent), zig-zig (two rotations in the same direction involving the grandparent), or zig-zag (a double rotation), restructuring the tree locally without any global balance parameters. This amortizes to O(\log n) time per operation over a sequence, proven via a potential function based on subtree sizes, and it moves frequently accessed nodes closer to the root. Treaps combine BST ordering on keys with heap ordering on random priorities to achieve probabilistic balance. Each node has a key and a randomly assigned priority; the tree satisfies the BST invariant on keys and the max-heap invariant on priorities (each parent's priority exceeds its children's), enforced via rotations during insertions and deletions. With priorities drawn uniformly at random, the expected height is O(\log n), yielding expected O(\log n) time for operations, since the structure mimics a random BST.
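As one concrete example of these variants, the following treap insertion sketch (illustrative names, not drawn from a specific library) inserts the key as in a plain BST and then rotates the new node upward while its random priority exceeds its parent's, restoring the max-heap condition on priorities:
#include <cstdlib>

struct TreapNode {
    int key;
    int priority;                  // random priority enforcing the heap order
    TreapNode* left = nullptr;
    TreapNode* right = nullptr;
    explicit TreapNode(int k) : key(k), priority(std::rand()) {}
};

TreapNode* rotateRight(TreapNode* y) {        // promote y->left
    TreapNode* x = y->left;
    y->left = x->right;
    x->right = y;
    return x;
}

TreapNode* rotateLeft(TreapNode* x) {         // promote x->right
    TreapNode* y = x->right;
    x->right = y->left;
    y->left = x;
    return y;
}

TreapNode* treapInsert(TreapNode* root, int key) {
    if (root == nullptr) return new TreapNode(key);
    if (key < root->key) {
        root->left = treapInsert(root->left, key);
        if (root->left->priority > root->priority)    // heap violation: child outranks parent
            root = rotateRight(root);
    } else if (key > root->key) {
        root->right = treapInsert(root->right, key);
        if (root->right->priority > root->priority)
            root = rotateLeft(root);
    }
    return root;                   // duplicate keys are ignored in this sketch
}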

Applications

Sorting Algorithms

Binary search trees (BSTs) enable a sorting method known as tree sort, in which the unsorted elements are inserted into an initially empty BST one by one, leveraging the tree's ordering property to maintain a sorted structure during construction. Once all elements are inserted, an in-order traversal of the tree produces the elements in ascending order, as this traversal visits nodes in sorted sequence. The time complexity of tree sort is O(n log n) in the average case for a balanced tree, where n is the number of elements, due to the logarithmic height allowing efficient insertions and a linear-time traversal. However, in the worst case, if the input causes the tree to become skewed (e.g., already sorted data), the height degenerates to O(n), leading to O(n²) time complexity for the insertions. The space complexity is O(n) to store the tree nodes. Advantages of tree sort include its simplicity and its adaptability to dynamic data insertion, making it suitable for scenarios where elements arrive incrementally. Disadvantages stem primarily from sensitivity to input order, which can cause skewing and degrade performance to quadratic time unless balancing is enforced. Variants of tree sort employ balanced BSTs, such as AVL trees or red-black trees, to ensure the tree height remains O(log n), guaranteeing O(n log n) worst-case time for both construction and traversal. These balanced structures mitigate skewing by automatically adjusting the tree during insertions, providing consistent performance comparable to other O(n log n) sorts such as merge sort or heapsort.
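A minimal tree sort sketch along these lines, using an illustrative node type and placing duplicates in the right subtree as described earlier; for guaranteed O(n log n) behavior, the plain insert would be replaced by a self-balancing insert such as that of an AVL or red-black tree:
#include <vector>

struct SortNode {
    int key;
    SortNode* left = nullptr;
    SortNode* right = nullptr;
    explicit SortNode(int k) : key(k) {}
};

SortNode* insertKey(SortNode* root, int key) {
    if (root == nullptr) return new SortNode(key);
    if (key < root->key) root->left = insertKey(root->left, key);
    else                 root->right = insertKey(root->right, key);   // duplicates go right
    return root;
}

void collectInorder(const SortNode* root, std::vector<int>& out) {
    if (root == nullptr) return;
    collectInorder(root->left, out);
    out.push_back(root->key);                 // keys emerge in ascending order
    collectInorder(root->right, out);
}

std::vector<int> treeSort(const std::vector<int>& input) {
    SortNode* root = nullptr;
    for (int x : input) root = insertKey(root, x);   // O(n log n) average, O(n^2) worst case
    std::vector<int> sorted;
    collectInorder(root, sorted);
    return sorted;                            // node cleanup omitted for brevity
}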

Priority Queues and Search Structures

Binary search trees (BSTs) serve as an effective underlying structure for priority queues, where node keys represent element priorities. Insertion into such a priority queue uses the standard BST insertion operation, which places the new element in its appropriate position based on priority, achieving O(log n) expected time in a balanced tree. Extracting the minimum-priority element requires traversing to the leftmost node to locate the minimum, followed by its deletion via standard BST removal, also in O(log n) expected time. This approach contrasts with heap-based implementations by leveraging the sorted order inherent in BSTs, though it requires balancing to maintain efficiency against worst-case degeneration to O(n). BSTs also facilitate efficient range queries that retrieve all keys within a specified interval [low, high]. The process begins at the root and recursively prunes irrelevant subtrees: if the current node's key is less than low, the search proceeds only into the right subtree; if it is greater than high, only into the left; otherwise, both subtrees are explored after including the node itself if it falls within the interval. This traversal yields an output-sensitive complexity of O(log n + k), where k is the number of reported elements, by decomposing the query into O(log n) subtrees and paths. Augmented variants can extend this to aggregate queries, such as counting or summing values in a range, by storing partial aggregates in nodes and combining them associatively during traversal. For indexed search operations, BSTs can be augmented to support order statistics, enabling selection of the k-th smallest element in O(log n) time. Each node stores the size of its subtree (including itself), computed as the sum of the left subtree size, 1, and the right subtree size; these values are updated in O(log n) time during insertions and deletions. The select(k) operation recurses based on these sizes: if k equals the left subtree size plus one, the current node is the k-th smallest; if k is smaller, the operation recurses into the left subtree with the same k; otherwise, it recurses into the right subtree with k reduced by the left subtree size plus one. This augmentation, known as an order statistic tree, preserves the BST properties while adding rank queries symmetrically. In practical applications, BSTs underpin database indexing schemes, where they organize keys for fast lookups and range scans over large datasets, often evolving into multi-way variants like B-trees for disk-based efficiency. Similarly, in compiler design, BSTs implement symbol tables that store and retrieve identifiers, attributes, and scopes during parsing and semantic analysis, supporting operations like lookup and insertion in logarithmic time. These uses highlight the role of BSTs in dynamic, ordered data management across software systems.
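The pruning rule for range queries can be sketched as follows, using an illustrative node type with integer keys and reporting all keys in [low, high] in ascending order:
#include <vector>

struct RangeNode {
    int key;
    RangeNode* left = nullptr;
    RangeNode* right = nullptr;
};

void rangeQuery(const RangeNode* node, int low, int high, std::vector<int>& out) {
    if (node == nullptr) return;
    if (node->key > low)                          // left subtree may contain keys >= low
        rangeQuery(node->left, low, high, out);
    if (low <= node->key && node->key <= high)
        out.push_back(node->key);                 // the node itself lies inside the interval
    if (node->key < high)                         // right subtree may contain keys <= high
        rangeQuery(node->right, low, high, out);
}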

Historical Development

Origins and Early Contributions

The binary search tree (BST) emerged as a data structure in the early 1960s, building on earlier ideas of using trees to organize data efficiently. Its development was driven by the need for structures that support fast lookup and insertion in computer systems, in contrast to linear sorted arrays, which required costly shifts for insertion, and to unordered structures lacking search efficiency. The BST enforces an ordering property where, for any node, all keys in the left subtree are less than the node's key, and all in the right subtree are greater, enabling logarithmic-time operations in balanced cases. One of the earliest documented contributions came from P. F. Windley, who in 1960 introduced tree structures for rearranging data to minimize processing overhead in computers, laying groundwork for ordered tree-based storage and search. Concurrently, A. D. Booth and A. J. T. Colin described a tree-based method of dictionary construction in 1960, emphasizing its efficiency for lookup operations in text processing and storage, where traditional lists were inadequate for growing datasets. These works highlighted the practical advantages of tree hierarchies over flat arrays for dynamic data. Further formalization occurred through T. N. Hibbard's 1962 analysis, which explored combinatorial properties of certain trees and their applications to searching and sorting algorithms, providing probabilistic insights into tree shapes under random insertions and influencing early implementations in programming systems. These foundational papers established BSTs in contexts like early database indexing and algorithm analysis, predating their widespread adoption in programming-language libraries for search routines. Hibbard's contributions also addressed deletion strategies, ensuring structural integrity during modifications.

Evolution and Key Milestones

Following the initial conceptualization of binary search trees in the early 1960s, significant advancements focused on balancing mechanisms to mitigate worst-case degradation into linear structures. In 1962, Georgy Adelson-Velsky and Evgenii Landis introduced the AVL tree, the first widely recognized self-balancing variant, which enforces height balance by ensuring the heights of left and right subtrees differ by at most one, thereby guaranteeing O(log n) performance for insertions, deletions, and searches. This innovation addressed the vulnerability of unbalanced binary search trees to skewed configurations arising from sequential inputs. Building on balancing concepts, Rudolf Bayer proposed symmetric binary B-trees in 1972, a structure that evolved into the modern red-black tree through the addition of node coloring rules. In 1978, Leonidas J. Guibas and Robert Sedgewick formalized the red-black tree framework, using red and black colors to maintain balance properties such as equal black-height paths from root to leaves, offering O(log n) operations with fewer rotations than AVL trees during updates. Sedgewick's subsequent work further refined these ideas, and red-black trees gained prominence through detailed expositions in the 1990 first edition of Introduction to Algorithms by Cormen, Leiserson, and Rivest (the textbook now commonly known as CLRS), which standardized their presentation in algorithms education. The 1980s saw the emergence of self-adjusting variants, exemplified by the splay tree developed by Daniel Sleator and Robert Tarjan in 1985. Splay trees perform rotations to move accessed nodes to the root, achieving amortized O(log n) time per operation without explicit balancing, which proved advantageous for access sequences with temporal locality. Key milestones in adoption include the integration of red-black trees into programming language standard libraries during the 1990s, such as Java's TreeMap class in JDK 1.2 (released 1998), which leverages them for sorted map implementations ensuring efficient ordered access. This standardization facilitated their use in production software. In large-scale data contexts, balanced binary search tree variants have influenced indexing structures, enabling efficient range queries and updates in in-memory databases handling terabyte-scale datasets. More recently, binary search tree principles have extended to machine learning, where binary decision trees, rooted in similar hierarchical splitting, form the basis of algorithms like CART (1984) and random forests, supporting scalable classification on massive datasets by mimicking search efficiencies for feature partitioning.
