AVL tree

An AVL tree is a self-balancing binary search tree in which the heights of the left and right subtrees of any node differ by at most one, ensuring logarithmic height and efficient operations. Named after its Soviet inventors, Georgy Adelson-Velsky and Evgenii Landis, the structure was introduced in their 1962 paper "An algorithm for the organization of information," published in the Proceedings of the USSR Academy of Sciences. This innovation addressed the limitations of standard binary search trees, which could degenerate into linear structures with O(n) performance under certain insertion orders. The balance in an AVL tree is quantified by the balance factor of each node, defined as the height of the right subtree minus the height of the left subtree, which must be -1, 0, or 1. To maintain this property during insertions or deletions, the tree performs local rotations: single left or right rotations for simple imbalances, and double rotations for more complex cases like zig-zag patterns. These operations ensure that search, insertion, and deletion all run in O(log n) time, where n is the number of nodes, making AVL trees suitable for applications such as implementing maps, sets, and associative arrays in systems requiring frequent dynamic updates.

Fundamentals

Definition

An AVL tree is a binary search tree in which the subtrees of every node differ in height by at most one, ensuring efficient performance across operations. This structure maintains the ordering properties of a standard binary search tree, where for any node, all keys in the left subtree are less than the node's key and all keys in the right subtree are greater, allowing for systematic traversal and manipulation based on comparisons. By keeping the tree balanced, an AVL tree prevents degeneration into a linear chain, which could occur in unbalanced binary search trees and lead to linear time complexity for operations. Invented by Soviet computer scientists Georgy Adelson-Velsky and Evgenii Landis, the AVL tree was first described in their 1962 paper as an algorithm for organizing information to achieve balanced access times. Each node in an AVL tree typically contains a key value, pointers to its left and right child nodes, and auxiliary information to track balance, such as a balance factor that measures the height difference between subtrees. This design guarantees a height of O(log n) for a tree with n nodes, resulting in O(log n) worst-case time complexity for searching, insertion, and deletion, compared to the potential O(n) in unbalanced binary search trees. The core purpose of the AVL tree is to provide a practical implementation of a balanced search structure that supports dynamic updates while preserving logarithmic efficiency, making it suitable for applications requiring frequent data insertions and lookups. Balance is maintained through local adjustments during modifications, with the balance factor serving as the key metric for detecting and correcting imbalances.

Balance Factor

The balance factor of a node in an AVL tree is defined as the height of its right subtree minus the height of its left subtree (the opposite convention, left minus right, also appears in the literature; this article uses right minus left throughout). This metric, introduced in the seminal work on balanced trees, ensures that the difference in subtree heights does not exceed one for any node. In the standard convention used here, the height of a null (empty) subtree is -1, while the height of a leaf is 0. The formula for the balance factor \text{BF}(v) of a node v is thus: \text{BF}(v) = h(\text{right subtree of } v) - h(\text{left subtree of } v) where h(T) denotes the height of subtree T. For the tree to remain balanced, the balance factor of every node must be -1, 0, or 1. A value of 0 indicates equal-height subtrees, +1 means the right subtree is one level taller, and -1 means the left subtree is one level taller. If the absolute value of the balance factor exceeds 1 at any node following an insertion or deletion, the tree violates the AVL property and requires rebalancing to restore equilibrium. To illustrate, consider a simple AVL tree with nodes containing values 10 (root), 5 (left child), and 15 (right child). Here, both subtrees of the root are leaves with height 0, so \text{BF}(10) = 0 - 0 = 0; \text{BF}(5) = -1 - (-1) = 0; and \text{BF}(15) = -1 - (-1) = 0. The tree remains balanced:
    10
   /  \
  5    15
Now, inserting 20 as the right child of 15 changes the balance at the root: the left subtree has height 0 and the right has height 1, so \text{BF}(10) = 1 - 0 = +1 (still balanced), but if further insertions skew it (for example, adding another right descendant to make the right subtree height 2), then \text{BF}(10) = 2 - 0 = +2, violating the condition and necessitating rebalancing.
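In code, the balance factor can be computed directly from this definition by recursively deriving subtree heights. The following minimal Java-style sketch assumes a hypothetical Node class with left and right pointers; practical implementations instead cache heights or balance factors in each node, as described under Node Organization:
// Height from the definition: an empty subtree has height -1, a leaf 0.
static int height(Node n) {
    if (n == null) return -1;
    return 1 + Math.max(height(n.left), height(n.right));
}

// Balance factor: right height minus left; must stay in {-1, 0, +1}.
static int balanceFactor(Node n) {
    return height(n.right) - height(n.left);
}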

History

The AVL tree was invented in 1962 by Soviet computer scientists Georgy Maksimovich Adelson-Velsky and Evgenii Mikhailovich Landis while working at the Institute for Theoretical and Experimental Physics in Moscow. Their work addressed the need for efficient data organization in early computing environments, where standard binary search trees could degrade to linear performance under certain insertion orders, hindering applications like rapid lookups in dynamically changing datasets. The structure was first described in their seminal paper, "An algorithm for the organization of information," published in Doklady Akademii Nauk SSSR (translated into English in Soviet Mathematics Doklady). In this brief four-page article, Adelson-Velsky and Landis introduced a self-balancing mechanism using height differences, later termed the balance factor, to ensure logarithmic-time operations for search, insertion, and deletion, making it suitable for database and retrieval tasks on the resource-constrained computers of the era. Originally motivated by the challenge of organizing quick searches for repeated positions in games, such as chess analysis, the AVL tree represented a pioneering solution for maintaining balance in binary trees without excessive computational overhead. This innovation influenced subsequent balanced tree designs, serving as one of the earliest examples of self-balancing binary search trees and predating structures like red-black trees by a decade.

Structural Properties

Tree Height

The height of an AVL tree is a critical structural property that ensures its logarithmic performance, distinguishing it from unbalanced search trees whose height can degenerate to linear in the worst case. In an AVL tree, the balance condition, which requires that the heights of left and right subtrees differ by at most 1 at every node, imposes strict limits on the tree's height relative to the number of nodes n. This prevents the worst-case linear height of unbalanced search trees, guaranteeing that the height remains O(\log n). To derive precise bounds, consider the minimum number of nodes N(h) in an AVL tree of height h. The smallest such tree occurs when one subtree has height h-1 and the other has height h-2, with the root adding one node, leading to the recurrence N(h) = N(h-1) + N(h-2) + 1, with base cases N(0) = 0 and N(1) = 1. This recurrence is closely related to the Fibonacci sequence, where N(h) = F_{h+2} - 1 and F_k denotes the k-th Fibonacci number (with F_1 = 1, F_2 = 1). The closed-form solution for the Fibonacci numbers, Binet's formula, yields N(h) \approx \frac{\phi^{h+2}}{\sqrt{5}} - 1, where \phi = \frac{1 + \sqrt{5}}{2} \approx 1.618 is the golden ratio. From this, the maximum height h for n nodes satisfies n \geq N(h), implying h \approx \log_\phi n. More precisely, solving the inequality gives the tight bound h \leq 1.44 \log_2 (n + 2) - 0.328. A proof sketch for the O(\log n) height proceeds by induction: assume the bound holds for smaller heights; then N(h) = \frac{\phi^{h+2} - (-1/\phi)^{h+2}}{\sqrt{5}} - 1 \geq c \phi^h for some constant c > 0, so n \geq c \phi^h and thus h \leq \log_\phi (n/c) = O(\log n). This analysis, building on the original AVL tree definition, confirms the logarithmic efficiency of operations in AVL trees.
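To make the recurrence concrete, the following short Java check (a sketch; the class name is illustrative) verifies the Fibonacci identity for small heights:
// Verifies N(h) = N(h-1) + N(h-2) + 1 against N(h) = F(h+2) - 1, where
// N(h) is the minimum node count of an AVL tree of height h.
public class MinNodes {
    public static void main(String[] args) {
        long[] N = new long[31];
        long[] F = new long[34];
        N[0] = 0; N[1] = 1;                   // empty tree, single node
        F[1] = 1; F[2] = 1;
        for (int k = 3; k < 34; k++) F[k] = F[k - 1] + F[k - 2];
        for (int h = 2; h < 31; h++) {
            N[h] = N[h - 1] + N[h - 2] + 1;   // sparsest tree of height h
            if (N[h] != F[h + 2] - 1)
                throw new AssertionError("identity fails at h = " + h);
        }
        System.out.println("N(10) = " + N[10]); // prints N(10) = 143
    }
}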

Node Organization

In an AVL tree, each node typically consists of a key (or a key-value pair for map implementations), pointers to its left and right child nodes, and an auxiliary field to maintain balance information, such as the height of the subtree rooted at that node. This structure extends the basic binary search tree node by incorporating the additional field, which enables efficient balance checks during tree modifications. The original formulation by Adelson-Velsky and Landis emphasized storing a balance factor derived from subtree heights, but modern implementations often prioritize a height field for simpler updates. A representative pseudocode definition for an AVL tree node in a language like Java or C++ might appear as follows:
class Node {
    int key;          // The stored value or key
    Node left;        // Pointer to left child
    Node right;       // Pointer to right child
    int height;       // Height of the subtree rooted at this node
}
This layout assumes the key is an integer for simplicity, though it can generalize to any comparable type. The height field is initialized to 1 for leaf nodes and updated recursively during insertions or deletions. When modifying the tree, the height of a node is recomputed using the recurrence: \text{height}(v) = 1 + \max(\text{height}(v.\text{left}), \text{height}(v.\text{right})) where null children are considered to have height 0. This rule ensures that height values reflect the longest path from the node to a leaf in its subtree, facilitating quick balance factor calculations as the difference between right and left subtree heights. Compared to a plain binary search tree node, which requires only the key and child pointers, the AVL tree incurs a constant memory overhead per node of approximately 2 extra bits (for encoding the balance factor in {-1, 0, +1}) or an integer for the height, depending on the implementation. This minimal addition supports the self-balancing logic without significantly impacting memory efficiency for large trees. Variations in node organization exist to optimize computations: some implementations store the balance factor directly (computed as the height of the right subtree minus that of the left) instead of the height, reducing the need for max operations during updates at the cost of slightly more complex derivations when heights are needed. The balance factor can then be maintained as a small integer in {-1, 0, +1}, directly indicating the relative subtree heights.
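The height recurrence translates into a pair of small helpers; a minimal sketch, assuming the Node class above, with null children contributing height 0:
// Cached-height lookup and update matching the recurrence above.
static int height(Node n) {
    return (n == null) ? 0 : n.height;    // null subtree has height 0
}

static void updateHeight(Node v) {
    v.height = 1 + Math.max(height(v.left), height(v.right));
}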

Core Operations

Searching

The search operation in an AVL tree is identical to that in a standard binary search tree (BST), relying on the BST property that all keys in a node's left subtree are less than the node's key, and all keys in the right subtree are greater. To locate a target key, traversal begins at the root: if the current node's key matches the target, the node is returned; if the target is smaller, the search recurses on the left child; otherwise, it recurses on the right child. The process terminates when the key is found or a null child is encountered, indicating absence. The following pseudocode illustrates the recursive search:
function search(node, key):
    if node is null:
        return null  // Key not found
    if key == node.key:
        return node  // Key found
    if key < node.key:
        return search(node.left, key)
    else:
        return search(node.right, key)
This approach ensures ordered traversal without additional checks for balance during the operation. Due to the AVL tree's height balance, which bounds the tree height at O(log n), the search achieves O(log n) time complexity in both the average and worst cases, providing guaranteed efficiency over potentially degenerate BSTs. No rebalancing is performed during search, as the tree remains static.
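Because the recursion is tail-recursive, the same search is often written iteratively. A minimal Java-style sketch, assuming the Node class from Node Organization, which uses O(1) auxiliary space instead of the recursion stack:
// Iterative search: follows child pointers without a recursion stack.
static Node search(Node root, int key) {
    Node cur = root;
    while (cur != null && cur.key != key) {
        cur = (key < cur.key) ? cur.left : cur.right;  // BST ordering
    }
    return cur;  // null if the key is absent
}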

Insertion

Insertion into an AVL tree follows the standard binary search tree insertion procedure to place the new node at a leaf position, adhering to key comparison rules to determine the attachment point as either the left or right child of an existing node. After placement, the heights of all ancestors from the new node up to the root are recalculated as one plus the maximum of their children's heights. Balance factors are then examined along this path, starting from the inserted node's parent and moving toward the root, where the balance factor of a node is defined as the height of its right subtree minus the height of its left subtree. An imbalance occurs if the absolute value of any balance factor exceeds 1, necessitating rebalancing through rotations. Because insertion modifies only a single path, at most one node needs rebalancing, requiring at most two rotations to restore the AVL property. This process was originally described in the seminal 1962 paper introducing the data structure. The specific rebalancing rotation is determined by the location of the insertion relative to the imbalanced node, leading to four distinct cases:
  • LL (Left-Left): Insertion in the left subtree of the left child; resolved by a single right rotation on the imbalanced node.
  • LR (Left-Right): Insertion in the right subtree of the left child; resolved by a left rotation on the left child followed by a right rotation on the imbalanced node.
  • RR (Right-Right): Insertion in the right subtree of the right child; resolved by a single left rotation on the imbalanced node.
  • RL (Right-Left): Insertion in the left subtree of the right child; resolved by a right rotation on the right child followed by a left rotation on the imbalanced node.
These cases ensure the tree remains height-balanced after insertion. The following pseudocode illustrates the basic recursive insertion without rebalancing, which updates node heights post-insertion:
function insert(node, key):
    if node is null:
        create new Node with key and height 1
        return new Node
    if key < node.key:
        node.left = insert(node.left, key)
    else if key > node.key:
        node.right = insert(node.right, key)
    // Update height
    node.height = 1 + max(getHeight(node.left), getHeight(node.right))
    return node

function getHeight(node):
    if node is null:
        return 0
    return node.height
Balance checks and rotations are applied after this recursive call unwinds, as shown in the following sketch.
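Folding those checks into the unwinding phase yields a complete insertion routine. The following Java-style sketch assumes the getHeight helper above, a Node constructor that sets height to 1, and the leftRotate/rightRotate functions defined under Rebalancing Methods; the balance factor is right height minus left height:
static int getBalance(Node n) {
    return (n == null) ? 0 : getHeight(n.right) - getHeight(n.left);
}

static Node insert(Node node, int key) {
    if (node == null) return new Node(key);            // new leaf, height 1
    if (key < node.key) node.left = insert(node.left, key);
    else if (key > node.key) node.right = insert(node.right, key);
    else return node;                                  // duplicate: no change

    node.height = 1 + Math.max(getHeight(node.left), getHeight(node.right));

    int balance = getBalance(node);
    if (balance < -1) {                                // left-heavy
        if (key < node.left.key) return rightRotate(node);   // LL case
        node.left = leftRotate(node.left);                   // LR case
        return rightRotate(node);
    }
    if (balance > 1) {                                 // right-heavy
        if (key > node.right.key) return leftRotate(node);   // RR case
        node.right = rightRotate(node.right);                // RL case
        return leftRotate(node);
    }
    return node;
}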

Deletion

The deletion operation in an AVL tree follows the standard binary search tree (BST) deletion procedure to remove the target node, followed by height updates and rebalancing to restore the AVL balance property along the path from the deletion site to the root. This ensures the tree remains height-balanced, with balance factors between -1 and 1 for all nodes. Unlike insertion, where a single rebalancing suffices, deletion can cause height decreases that propagate upward, potentially requiring rebalancing at multiple levels. The algorithm first locates the node to delete using the BST search method, which traverses from the root in O(log n) time. Once found, removal handles three distinct cases based on the node's children:
  • Leaf node (no children): The node is simply removed by updating its parent's pointer to null, and the parent's height is decremented if necessary.
  • Node with one child: The node is replaced by its child, which is attached directly to the parent, preserving BST order. The height of the parent and ancestors is then updated.
  • Node with two children: The value of the node is replaced by the value of its inorder successor (the minimum-value node in the right subtree), after which the successor node—guaranteed to have at most one child—is deleted as in the one-child or leaf case. This maintains the BST property without violating search order.
After structural removal, the heights of all nodes along the path to the root are recomputed bottom-up. At each node, the balance factor is checked; if it exceeds the AVL limits (i.e., becomes -2 or +2), rebalancing occurs through single or double rotations, similar to those used in insertion, to correct the imbalance. Imbalance propagation can continue upward, but the total number of rebalancing steps remains O(log n) in the worst case due to the tree's height bound. A special case arises when deleting the root node, which may alter the tree's root pointer (for example, when the root has at most one child and is replaced by it). Post-deletion rebalancing then proceeds from the deepest modified node upward, potentially restructuring the top level if imbalances occur. This ensures the overall tree height remains logarithmic. The following pseudocode outlines the core deletion function, emphasizing successor handling and rebalancing (assuming a Node class with key, left, right, and height fields, plus rotation and balance helpers):
function delete(node, key):
    if node is null:
        return node
    if key < node.key:
        node.left = delete(node.left, key)
    else if key > node.key:
        node.right = delete(node.right, key)
    else:
        // Node to delete found
        if node.left is null:
            return node.right
        else if node.right is null:
            return node.left
        else:
            successor = findMin(node.right)
            node.key = successor.key
            node.right = delete(node.right, successor.key)
    
    // Update height
    node.height = 1 + max(height(node.left), height(node.right))
    
    // Check balance and rebalance
    balance = getBalance(node)
    if balance > 1:
        if getBalance(node.right) >= 0:
            return leftRotate(node)
        else:
            node.right = rightRotate(node.right)
            return leftRotate(node)
    if balance < -1:
        if getBalance(node.left) <= 0:
            return rightRotate(node)
        else:
            node.left = leftRotate(node.left)
            return rightRotate(node)
    return node

// Helper: find minimum node in subtree
function findMin(node):
    while node.left is not null:
        node = node.left
    return node

// Helpers for rotations and balance omitted for brevity
This recursive implementation handles all three cases and propagates rebalancing naturally up the call stack. The overall time complexity for deletion is O(log n), matching search and insertion, since the search path contains O(log n) nodes and each requires only constant-time height updates and rotations.

Traversal

Traversal methods in AVL trees follow the standard approaches for binary search trees, visiting all nodes while preserving the tree's structural properties. The inorder traversal processes nodes in left-root-right order, producing the keys in non-decreasing sorted sequence due to the inherent binary search tree ordering maintained in AVL trees. This operation requires O(n) time complexity, where n is the number of nodes, as each node is visited exactly once. Preorder traversal visits the root first, followed by the left subtree and then the right subtree, which is particularly useful for tasks like serializing the tree structure or creating a prefix notation representation. Postorder traversal, conversely, processes the left subtree, then the right subtree, and finally the root, aiding in operations such as computing subtree sizes or deleting the tree by freeing nodes bottom-up. Both preorder and postorder traversals also achieve O(n) time complexity. Level-order traversal employs a breadth-first search strategy, using a queue to visit nodes level by level from the root downward, which facilitates applications like finding the minimum height or printing the tree layer-wise. This method similarly operates in O(n) time. The recursive implementation of inorder traversal can be expressed as follows:
function inorder(node):
    if node is not null:
        inorder(node.left)
        visit(node)
        inorder(node.right)
This pseudocode applies analogously to preorder and postorder by adjusting the visit order relative to the recursive calls. The self-balancing property of AVL trees limits the height to O(log n), ensuring that recursive traversals use at most O(log n) stack space and avoiding potential stack overflow issues encountered in unbalanced trees.
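Level-order traversal requires an explicit queue in place of recursion. A minimal Java-style sketch, assuming the Node class used throughout and the visit function from the pseudocode above:
import java.util.ArrayDeque;
import java.util.Queue;

// Breadth-first traversal: visits nodes level by level from the root.
static void levelOrder(Node root) {
    if (root == null) return;
    Queue<Node> queue = new ArrayDeque<>();
    queue.add(root);
    while (!queue.isEmpty()) {
        Node n = queue.remove();
        visit(n);                                // process in level order
        if (n.left != null) queue.add(n.left);
        if (n.right != null) queue.add(n.right);
    }
}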

Rebalancing Methods

Single Rotations

Single rotations in AVL trees serve as the primary mechanism to restore balance when an insertion or deletion results in a local imbalance where the heights of the subtrees differ by more than one, specifically in cases where the imbalance aligns with the direction of the disturbing operation, such as left-left (LL) or right-right (RR) configurations. These operations were introduced in the seminal work by Adelson-Velsky and Landis, who described them as transformations to maintain the height difference constraint at every node. Consider the LL case, where a node y has a left child x, and the insertion has increased the height of x's left subtree, leading to a balance factor of -2 at y (indicating the left subtree is taller by two levels). A right rotation is applied to y: x becomes the new subtree root, y is repositioned as x's right child, and x's original right subtree is set as y's new left child. This restructuring preserves the in-order traversal of the tree while reducing the height difference to at most one. The transformation can be visualized as follows: prior to the rotation, the subtree consists of y as root with left child x (whose left subtree is T1 and right subtree is T2), and y's right subtree is T3; after the rotation, x is the root with left subtree T1 and right child y (whose left subtree is now T2 and right subtree remains T3). The right rotation can be implemented with the following pseudocode, which assumes nodes store height information:
Node rightRotate(Node y) {
    Node x = y.left;
    Node T2 = x.right;
    x.right = y;
    y.left = T2;
    // Update heights
    y.height = 1 + max(height(y.left), height(y.right));
    x.height = 1 + max(height(x.left), height(x.right));
    return x;
}
This code performs the structural changes and updates the heights of y and x to reflect the new configuration, ensuring the balance factor is recalculated correctly. Symmetrically, for the RR case, where the balance factor is +2 at a node due to an overly tall right subtree, a left rotation mirrors the right rotation process, swapping left and right references accordingly to rebalance the tree. Single rotations are triggered only when the child's balance factor matches the sign of the parent's imbalance, confirming the same-direction violation.
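For completeness, the mirrored left rotation for the RR case can be sketched in the same style, with heights updated bottom-up (x before its new parent y):
Node leftRotate(Node x) {
    Node y = x.right;
    Node T2 = y.left;
    y.left = x;
    x.right = T2;
    // Update heights: x first, since x is now y's child
    x.height = 1 + max(height(x.left), height(x.right));
    y.height = 1 + max(height(y.left), height(y.right));
    return y;
}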

Double Rotations

Double rotations, also known as left-right (LR) or right-left (RL) rotations, are rebalancing operations in AVL trees employed when an insertion or deletion results in a zig-zag imbalance, where the heavier subtree of the unbalanced node contains a subtree with opposite weighting relative to its parent. These cases arise when the balance factor of the unbalanced node is +2 or -2, and the balance factor of the intervening child node indicates an opposite tilt; for instance, in an LR imbalance, the left child of the unbalanced node has a balance factor of +1 (right-heavy). Consider the LR case, where node x is unbalanced with a left-heavy tilt (balance factor -2), its left child y is right-heavy (balance factor +1), and the insertion occurred in the right subtree of y (node z). The initial configuration forms a zig-zag pattern: x leans left to y, but y leans right to z. To resolve this, a double rotation proceeds in two steps. First, perform a left rotation on y: the right child of y (which is z) becomes the new root of this subtree, y becomes the left child of z, and the original left child of z (if any) becomes the right child of y. This intermediate tree now has z as the left child of x, with y as z's left child and the original right subtree of y attached accordingly. Second, perform a right rotation on x: z becomes the new root, y remains as z's left child, x becomes the right child of z, and z's original right subtree attaches as x's left child. The resulting tree restores the balance factor to 0 at z, with heights of subtrees differing by at most 1. The RL case is the symmetric mirror: for a right-heavy unbalanced node x (balance factor +2) with a left-heavy right child y (balance factor -1), first right-rotate on y to promote its left child z, then left-rotate on x to elevate z. This double rotation ensures balance restoration with minimal structural change, reducing the height of the unbalanced subtree by exactly 1 while preserving the in-order traversal and binary search tree properties. Pseudocode for the LR double rotation, building on single rotation primitives, is as follows:
Node* leftRightRotate(Node* x) {
    x->left = leftRotate(x->left);
    return rightRotate(x);
}
Here, leftRotate and rightRotate are the single rotation functions applied sequentially. The RL variant mirrors this:
Node* rightLeftRotate(Node* x) {
    x->right = rightRotate(x->right);
    return leftRotate(x);
}
These operations are invoked only when the child's balance factor confirms the opposite imbalance, preventing unnecessary rotations and guaranteeing O(1) time per rebalancing step.

Advanced Features

Set Operations

AVL trees support set operations such as union, intersection, and difference by exploiting their balanced properties, which maintain ordered keys and logarithmic access times. These operations treat the trees as representations of finite sets without duplicates, enabling efficient manipulation through searches, insertions, and ordered traversals. The union of two AVL trees of sizes n and m can be computed by creating a copy of one tree and inserting elements from the other, using searches to check for and avoid duplicates. Each search and potential insertion requires O(\log k) time, where k is the size of the target tree at that step, leading to an overall time complexity of O(n \log m + m \log n) for the searches across both trees in a symmetric implementation. For the intersection, both trees are traversed in inorder to produce sorted sequences of keys. The traversal advances the cursor in the tree with the smaller current key; when keys match, the common element is collected. This process identifies all shared elements in O(n + m) time, after which a new AVL tree can be constructed from the sorted result in O(k) time, where k is the number of shared elements, using a bottom-up building approach. The difference operation follows a similar merged inorder traversal but excludes matching elements from the first set. Pointers into the inorder sequences of both trees are advanced by comparing keys: advance the smaller, skip both on a match (omitting it from the result), and collect the key from the first tree when it is smaller. The result is then used to construct a new AVL tree from the sorted differing elements in O(k) time. These operations require inorder iterators or merged traversal mechanisms to provide sequential access to ordered elements, ensuring efficient interaction with the tree's structure without unnecessary intermediate storage. A sketch of the intersection procedure appears below.
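The following Java-style sketch illustrates the intersection step, assuming the Node class used throughout; it produces the sorted list of common keys in O(n + m), from which a balanced tree can be built in O(k) using the median-split builder sketched under Bulk Operations:
import java.util.ArrayList;
import java.util.List;

// Flatten a tree to its sorted key sequence via inorder traversal.
static void inorderKeys(Node n, List<Integer> out) {
    if (n == null) return;
    inorderKeys(n.left, out);
    out.add(n.key);
    inorderKeys(n.right, out);
}

// Merge the two sorted sequences, keeping only keys present in both.
static List<Integer> intersectionKeys(Node a, Node b) {
    List<Integer> ka = new ArrayList<>(), kb = new ArrayList<>();
    inorderKeys(a, ka);
    inorderKeys(b, kb);
    List<Integer> common = new ArrayList<>();
    int i = 0, j = 0;
    while (i < ka.size() && j < kb.size()) {
        if (ka.get(i) < kb.get(j)) i++;            // advance smaller cursor
        else if (ka.get(i) > kb.get(j)) j++;
        else { common.add(ka.get(i)); i++; j++; }  // shared key
    }
    return common;
}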

Bulk Operations

Bulk operations in AVL trees enable the efficient processing of multiple insertions or deletions as a single batch, which is particularly useful for amortizing the cost of rebalancing across numerous updates. Unlike performing individual operations sequentially, bulk methods defer or batch rebalancing to maintain the tree's balance invariant while reducing overall overhead. This approach leverages the underlying mechanics of single insertions and deletions but optimizes them for mass modifications within the same tree. For bulk insertion of k unsorted keys into an existing n-node AVL tree, the keys are first sorted in O(k \log k) time, then a balanced AVL subtree is constructed from the sorted list in O(k) time using recursive splitting: the middle element becomes the root, and the process recurses on the left and right halves. This subtree is then merged into the existing tree, typically by individual insertions with deferred rebalancing, leading to an overall time complexity of O(k \log n + k \log k). A bottom-up traversal can then update balance factors and apply rotations to restore the AVL property. For unsorted input without sorting, or for scattered insertions, the standard approach is to perform k individual insertions, each O(\log n), for O(k \log n) total time, with rebalancing included. Bulk deletion follows a similar strategy for a set of k keys to remove. The keys are located and removed individually without full rebalancing during the process, potentially using lazy deletion by marking nodes as deleted to avoid immediate structural changes, for O(k \log n) time. For range-based deletions, the affected subtrees are identified and rebuilt from the remaining elements using the recursive splitting method described for insertions in O(r + \log n) time, where r is the range size. A subsequent rebalancing traversal then corrects any imbalances. Amortized analysis demonstrates that batching rebalances allows a sequence of bulk operations totaling n updates to complete in O(n \log n) time overall, matching the cost of individual operations but with improved practical efficiency due to reduced rebalancing frequency. AVL-specific techniques, such as periodic rebalancing after bulk changes, ensure the height difference constraint (|h_\mathrm{left} - h_\mathrm{right}| \leq 1) is preserved without excessive local adjustments. These operations find applications in database indexing systems for batch loading records and in file systems handling large-scale updates, where maintaining balance under heavy write loads is critical.
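The recursive-splitting construction admits a compact sketch (assuming the Node class and the cached-height helper from Node Organization; the name buildBalanced is illustrative). Choosing the middle key as the root guarantees the two halves differ in size by at most one, so subtree heights differ by at most one at every node:
import java.util.List;

// Builds a height-balanced AVL subtree from sorted keys in O(k) time.
static Node buildBalanced(List<Integer> keys, int lo, int hi) {
    if (lo > hi) return null;
    int mid = (lo + hi) >>> 1;            // middle key becomes the root
    Node root = new Node(keys.get(mid));  // constructor sets height 1
    root.left = buildBalanced(keys, lo, mid - 1);
    root.right = buildBalanced(keys, mid + 1, hi);
    root.height = 1 + Math.max(height(root.left), height(root.right));
    return root;
}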

Performance and Comparisons

Time and Space Complexity

The time complexity of fundamental operations in an AVL tree (search, insertion, and deletion) is O(log n) in the worst case, where n is the number of nodes, owing to the tree's height being bounded by approximately 1.44 log₂(n + 2), which ensures a logarithmic bound. This height restriction arises from the balance factor constraint, preventing degeneration into a linear structure. Rotations used in rebalancing after insertions or deletions each take O(1) time, and since at most O(log n) rebalancing steps may occur along the path from the modified leaf to the root, the overall cost remains O(log n). In contrast to an unbalanced binary search tree (BST), where insertions in sorted order can lead to a height of O(n) and thus O(n) worst-case time for operations, the AVL tree's self-balancing mechanism avoids this pathology, providing consistent logarithmic performance. For traversal operations, such as inorder traversal, the time complexity is O(n), as every node must be visited exactly once, regardless of the tree's balance. Recursive implementations of traversal incur O(log n) additional space due to the recursion stack depth matching the tree height. The space complexity of an AVL tree is O(n) in total, as it stores n nodes, each requiring constant extra space for pointers, keys, values, and balance information. During operations like search or insertion, the auxiliary space is O(log n) for recursive implementations, corresponding to the path length from the root to a leaf, or O(1) for iterative variants.
Operation | Average Time | Worst-Case Time | Auxiliary Space
Search | O(log n) | O(log n) | O(log n)
Insertion | O(log n) | O(log n) | O(log n)
Deletion | O(log n) | O(log n) | O(log n)
Traversal (e.g., inorder) | O(n) | O(n) | O(log n)

Comparison to Other Balanced Trees

AVL trees maintain a strict balance condition where the height difference between the left and right subtrees of any node, known as the balance factor, is at most 1 in absolute value. In contrast, red-black trees enforce a looser balance through color invariants: no two red nodes are adjacent, every path from the root to a leaf contains the same number of black nodes (the black-height), the root is black, and the nil leaves are considered black. This allows red-black trees to have a height of up to approximately 2 log₂(n+1), roughly twice that of a perfectly balanced tree, compared to the tighter bound of about 1.44 log₂(n+2) for AVL trees. As a result, AVL trees require more frequent rotations during insertions and deletions to preserve their strict balance, but each rebalancing operation typically involves fewer rotations; red-black trees, however, perform rotations less often but with more complex color adjustments. Deletion in red-black trees is generally simpler to implement than in AVL trees, as it avoids as many rebalancing steps, though both achieve O(log n) worst-case time. Compared to splay trees, AVL trees provide guaranteed worst-case O(log n) time for search, insertion, and deletion operations due to their fixed height bound. Splay trees, introduced by Sleator and Tarjan, achieve amortized O(log n) time per operation through splaying, which rotates recently accessed nodes to the root; this can speed up sequences of operations on frequently accessed elements but offers no worst-case guarantee and can degrade to O(n) for a single operation. While splay trees adapt to access patterns without explicit balancing, AVL trees prioritize consistent performance across all operations, making them preferable when worst-case predictability is essential. B-trees differ fundamentally from AVL trees in design and application: AVL trees are binary search trees optimized for in-memory operations, where random access to nodes is cheap. B-trees, by contrast, support high fanout (often hundreds of children per node) to minimize disk I/O in storage systems, as each node access may involve a costly block read. This makes B-trees more space-efficient for large data sets, with leaves holding the data and internal nodes comprising a small fraction of the total, whereas AVL trees spread pointer overhead evenly across all nodes. AVL trees suit main-memory scenarios such as main-memory database systems, while B-trees dominate database indexing for disk-based storage.
Aspect | AVL Trees | Red-Black Trees | Splay Trees | B-Trees
Balance Strictness | Height difference ≤ 1 per node | Black-height equal on all root-leaf paths; height ≤ 2 log₂(n+1) | No fixed balance; adapts via splaying | High fanout; balanced height across subtrees
Rotation Frequency | More frequent, simpler per rebalance | Less frequent, more complex (colors) | Frequent during splay, amortized | Infrequent; splits/merges on overflow/underflow
Implementation Complexity | Moderate; height tracking required | Higher; color invariants and many cases | Lower; no balance invariants | Higher; multi-child handling
Worst-Case Time | O(log n) guaranteed | O(log n) guaranteed | O(n) per operation | O(log n) with fanout factor
AVL trees are favored in real-time systems requiring predictable worst-case response times, such as embedded or safety-critical applications, due to their bounded height. Red-black trees offer a balance of simplicity and performance for general-purpose libraries, like Java's TreeMap. Splay trees excel in scenarios with skewed access patterns, such as caching, but may introduce variability unsuitable for hard real-time constraints. B-trees and variants like B+-trees are standard for database systems handling persistent, disk-resident data.

References

  1. [1]
    AVL Trees
    AVL trees keep themselves balanced by performing rotations. A tree rotation is a way of rearranging the nodes of a BST that will change the height, but will NOT ...
  2. [2]
    Adelson-Velsky and Landis trees (AVL) - University of Vermont
    Jan 5, 2025 · AVL trees are a kind of balanced binary search tree, invented in 1962 by Adelson-Velsky and Landis. Balancing avoids pathological structures and keeps ...
  3. [3]
    [PDF] An algorithm for the organization of information by GM Adel'son-Vel ...
    An algorithm for the organization of information by G. M. Adel'son-Vel'skii and E. M. Landis. Soviet Mathematics Doklady, 3, 1259-1263, 1962.
  4. [4]
    [PDF] Lecture 16 AVL Trees
    Binary search trees are an excellent data structure to implement associative arrays, maps, sets, and similar interfaces. The main difficulty is that they.
  5. [5]
    [PDF] AVL TREES
    Inserting a node into an AVL tree involves performing an expandExternal(w) on T, which changes the heights of some of the nodes in T.
  6. [6]
    [PDF] Lecture 16 AVL Trees
    In this lecture we use AVL trees, which is a simple and efficient data structure to maintain balance, and is also the first that has been proposed. It is named ...
  7. [7]
    [PDF] NOTES ON AVL TREES by Vassos Hadzilacos
    AVL trees are a compromise between arbitrary and ideally height-balanced binary search trees, named after Adelson-Velski and Landis. They store balance factors ...
  8. [8]
    [PDF] CSE373: Data Structures & Algorithms Lecture 7: AVL Trees
    An AVL tree is a self-balancing binary search tree where the height difference between subtrees at each node is at most 1, with a balance between -1 and 1.
  9. [9]
    [PDF] Georgy Maksimovich Adelson-Velsky
    Oct 30, 2014 · Originally intended to organize a rapid search for repeated posi- tions in games, AVL trees became an example in computer science world-wide of.
  10. [10]
    Generic Red-Black Tree and its C# Implementation
    Rudolf Bayer invented this important tree structure in 1972, about 10 years after the introduction of AVL trees. He is also credited with the invention of ...
  11. [11]
    Balanced binary trees (AVL trees) - CS 2112/ENGRD 2112 Fall 2015
    AVL trees, named after its inventors, Adelson-Velsky and Landis, were the original balanced binary search trees. They strengthen the usual BST invariant with ...
  12. [12]
    [PDF] CMSC 420: Lecture 5 AVL Trees - UMD Computer Science
    AVL Tree: A binary search tree that satisfies the AVL balance condition. For any node v of the tree, let height(v) denote the height of the subtree rooted ...
  13. [13]
    [PDF] Performance Analysis of BSTs in System Software - Ben Pfaff
    Some data structures require extra memory on top of that required by ordinary BSTs: AVL trees require 2 extra bits per node and red-black trees require 1 extra ...
  14. [14]
    [PDF] AVL Trees - CMSC 420 - UMD MATH
    Feb 22, 2023 · At this point we reiterate that an AVL tree is a binary search tree at heart, meaning search is exactly the same. 3.2 Worst-Case Time Complexity.
  15. [15]
    Balanced Search Trees
    Nov 6, 2023 · An AVL tree (Adelson-Velskii and Landis) is a binary search tree for which each node's children differ in height by at most 1. Guarantees that ...
  16. [16]
    Self-balancing BST - Temple CIS
    An AVL tree is maintained by making some changes to the algorithms for BST insertion and deletion. We can visualize the change as updating the balance factor ...
  17. [17]
    26.2. The AVL Tree - OpenDSA
    The key to making the AVL tree work is to alter the insert and delete routines so as to maintain the balance property.
  18. [18]
    AVL Trees - cs.wisc.edu
    An AVL tree is a binary search tree that is "almost" balanced. Recall that the height of a tree is the number of nodes on the longest path from the root to a ...
  19. [19]
    Insert operations on AVL trees
    Notations: w = the newly inserted node in the AVL tree. x = the first node in the (previously) AVL tree on the path from w to the root that is imbalanced.
  20. [20]
    A09: AVL Trees - cs.wisc.edu
    AVL stands for Adelson-Velsky and Landis (the people who created it), and is a self balancing binary tree. Utilizes balance factor, or bf; bf = height of left ...
  21. [21]
    [PDF] AVL Trees - cs.wisc.edu
    balance factor = height(right) - height(left). - left sub-tree increases, balance factor decreases by 1. - right sub-tree increases, balance factor increases by ...
  22. [22]
    AVL Deletion - UCSD Math
    Below we show the basic schemes for rebalancing an AVL tree after a deletion. The basic algorithm for rebalancing after deletion works similarly to the ...
  23. [23]
    Delete operations on AVL trees - Emory CS
    Delete operations on AVL trees · Review: deleting an entry from a binary search tree · Deletion in an AVL tree can also cause imbalance · Re-balancing the AVL tree ...
  24. [24]
    [PDF] Rebalancing operations for deletions in AVL-trees - Numdam
    In this paper we consider only deletions in an arbitrary AVL-tree with n leaves and we show that the number of rebalancing opérations ( = balance and structural ...
  25. [25]
    [PDF] AVL Tree Examples
    Deletion from an AVL Tree. First we will do a normal binary search tree delete. Note that structurally speaking, all deletes from a binary search tree delete ...
  26. [26]
    CS 312 Lecture ?: AVL Trees
    When a tree satisfies the BST invariant, an in-order traversal of the tree nodes will visit the nodes in ascending order of their contained values. So it's ...
  27. [27]
    CSCI 103L – Tree Traversal, BST and AVL Trees
    An AVL tree is a type of balanced Binary Search Tree that uses the height of substrees and rotations to maintain balance.
  28. [28]
    [PDF] 6.006 Introduction to Algorithms, Lecture 7: Binary Trees, Part 2: AVL
    Proof: Repeatedly perform last possible right rotation in traversal order; resulting tree is a canonical chain. Each rotation increases depth of the last node ...
  29. [29]
    [PDF] CMSC 420: Lecture 5 AVL Trees - UMD Computer Science
    AVL balance condition: For every node in the tree, the absolute difference in the heights of its left and right subtrees is at most 1. For any node v of the ...
  30. [30]
    [PDF] The AVL Tree Rotations Tutorial - UF CISE
    A double right rotation, or right-left rotation, or simply RL, is a rotation that must be performed when attempting to balance a tree which has a left subtree, ...
  31. [31]
    [PDF] Lecture 16 AVL Trees - Carnegie Mellon University
    To describe AVL trees we need the concept of tree height, which we de- fine as the maximal length of a path from the root to a leaf. So the empty tree has ...
  32. [32]
    CSCB63 – Design and Analysis of Data Structures
    So, did we do better than our first try? • Best union / intersection / difference algorithm for balanced trees (including AVL and red-black trees) is Θ(n log(m.
  33. [33]
    [PDF] Introduction
    Union, Intersection, and Difference. The function union takes two BSTs and returns a BST that contains the union of the keys in the two BSTs.
  34. [34]
    [PDF] Chapter 1: The Study of Data Structures
    Notice that an inorder traversal of a BST will list the elements in sorted order. ... the naive implementations of set union, intersection and difference are each ...
  35. [35]
    Amortized Complexity of Bulk Updates in AVL-Trees - SpringerLink
    A bulk insertion for a given set of keys inserts all keys in the set into a leaf-oriented AVL-tree. Similarly, a bulk deletion deletes them all.
  36. [36]
    [PDF] AVL Trees - csail
    Feb 11, 2011 · In this case, h = n, so the runtime of these binary search tree operations are O(n). ... Rotations are O(1) time operations. AVL Insertion.
  37. [37]
    CS 312 Lecture 14: AVL Trees
    An AVL tree is a balanced binary search tree where the height difference between its left and right children is at most 1.
  38. [38]
    [PDF] Balanced Binary Trees - GMU CS Department
    AVL Trees. • AVL trees keep track of the difference in height between the right and left sub-trees for each node. • This difference is called the balance factor.
  39. [39]
    Balanced Search Trees
    Jul 21, 2025 · The height of a red-black tree is no more than twice the height of the equivalent 2-3-4 tree. And we have already noted that the height of B- ...
  40. [40]
    EECS 311: Balanced Binary Search Trees
    Red-black trees​​ They differ in what that invariant is and what bookkeeping information is kept. Instead of two numbers on each node representing the heights of ...
  41. [41]
    [PDF] Balanced Binary Search Trees - UF CISE
    This is an AVL tree. The height of an AVL tree that has n nodes is at most 1.44 log2 (n+2). The height of every n node binary tree is at least log2 (n+1).
  42. [42]
    BalancedTrees
    Because deletion is simpler in red-black trees than in AVL trees, and because operations on red-black trees tend to have slightly smaller constants than ...
  43. [43]
    [PDF] AVL Insertion, Deletion Other Trees and their representations
    ◦ Red-Black and AA-trees are simpler to implement than AVL trees, but harder to understand why they work. ◦ Used for disk-based searches, and for database ...
  44. [44]
    Splay Trees 6.854 Notes #3 - csail
    Delivers O ( log ⁡ n ) time per operation performance too. But only in amortized (as opposed to worst-case) sense! Is actually optimal in a much more ...
  45. [45]
    [PDF] Splay Trees. 1. Invented by Daniel Sleator and Robert Tarjan. Self ...
    Balanced search trees (AVL, weight balanced, B-Trees etc) have O(log n) worst case search, but are either complicated to implement, or require extra space ...
  46. [46]
    Alternatives to splay trees with O(log n) worst-case access times
    Splay trees have a O(n) worst case performance per operation. There are many search tree data struc- tures with O(logn) worst case times, such as AVL trees. [1] ...
  47. [47]
    [PDF] Splay Trees
    Balanced BSTs guarantee tree operations run in worst- case time O(log n). ○. Claim: If the elements in the tree aren't accessed uniformly, then a balanced ...
  48. [48]
    [PDF] Implementation Techniques for Main Memory Database Systems
    We evaluate AVL vs. B+-tree access methods for main memory databases, hash-based query pro cessing strategies vs. sort-merge, and study recovery ...
  49. [49]
    [PDF] Cache-oblivious B-Trees - Erik Demaine
    Typically, an L1 cache access is two orders of magnitude faster than a main memory access and six orders of magnitude faster than a disk access [27]. Thus, it ...
  50. [50]
    [PDF] A Study of Index Structures for Main Memory Database Management ...
    The B-tree is much more space efficient than the AVL tree because the leaves contain only data, and the leaves comprise the majority of the nodes in the tree. ( ...
  51. [51]
    [PDF] Copyright © 1984, by the author(s). All rights reserved. Permission to ...
    ... and theworst case access time may besignificantly poorer thaninthe case ofa B-tree. ... AVL-Tree is resident in main memory and there are no disk accesses. ... use ...
  52. [52]
    [PDF] Data Structure And Algorithm Analysis In C data structure and ...
    For instance, real-time systems demand predictable ... trees (binary, AVL, B-trees) and ... Dynamic data structures such as self-balancing trees (AVL, Red-Black ...
  53. [53]
    [PDF] Data Structures for Data-Intensive Applications: Tradeoffs and ...
    AVL-Trees (Adelson-Velsky and Landis, 1962) and Red-Black-Trees. (Bayer, 1972) are two notable examples. New designs of self-rotating self-balancing binary ...