
Amortized analysis

Amortized analysis is a method for analyzing the computational complexity of algorithms, particularly data structures, by averaging the cost of a sequence of operations over the entire sequence rather than examining the worst-case cost of individual operations. This approach provides a more realistic bound on performance when operations vary in cost, such as in dynamic resizing of arrays or union-find structures, where occasional expensive operations are offset by many cheaper ones. The technique was formally introduced by Robert E. Tarjan in his 1985 paper "Amortized Computational Complexity," which addressed the limitations of traditional worst-case analysis for incremental data structures and algorithms. Tarjan's work built on earlier informal uses of averaging in algorithm analysis, emphasizing its robustness for sequences of operations in practice. Amortized analysis differs from average-case analysis by focusing on the worst-case sequence of operations while still averaging costs, making it suitable for proving guarantees in adversarial settings. There are three primary methods for performing amortized analysis: the aggregate method, which bounds the total cost of a sequence directly; the accounting method, which assigns credits or debits to operations to cover future costs; and the potential method, which uses a potential function to track the "stored" work in the data structure. These methods are widely applied to analyze structures like stacks backed by dynamic arrays (achieving O(1) amortized time for push and pop), binary search trees with splay operations, and Fibonacci heaps in graph algorithms. By providing tighter bounds than worst-case analysis alone, amortized analysis has become a standard tool in the design and theoretical study of efficient algorithms.

Fundamentals

Definition

Amortized analysis is a deterministic technique in computer science for bounding the average cost per operation over a worst-case sequence of operations on a data structure, providing a more realistic performance measure than isolated worst-case analysis for individual operations. This approach assumes a sequence of operations, such as insertions or deletions, and evaluates the total resource usage, typically time or space, across the entire sequence rather than each step in isolation. Central to amortized analysis are two key concepts: the actual cost, which denotes the true cost of an individual operation (e.g., the number of element copies during a resize), and the amortized cost, which serves as an upper bound on the average cost per operation over the sequence. The amortized cost \hat{c} is derived from the formula \hat{c} = \frac{\sum_{i=1}^m c_i}{m}, where c_i is the actual cost of the i-th operation and m is the number of operations in the sequence; the analysis ensures that this average is bounded by a constant or slowly growing function, even if some c_i are large. The motivation behind amortized analysis stems from an analogy to financial amortization, in which sporadic high-cost operations are offset by the accumulated "savings" from numerous low-cost operations, distributing the burden evenly across the sequence. This technique, formally introduced by Robert E. Tarjan, addresses the limitations of traditional worst-case analysis by revealing efficient average behavior in data structures subjected to varied operation sequences.
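As a hypothetical worked example of the formula (illustrative numbers, not from any specific source): in a sequence of m = 8 operations where seven operations each have actual cost c_i = 1 and one operation has actual cost c_i = 8, the total cost is \sum_{i=1}^8 c_i = 15, so the amortized cost is \hat{c} = 15/8 < 2, even though the single expensive operation costs 8 on its own.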

Comparison to Other Analysis Methods

Amortized analysis differs from worst-case analysis in that it evaluates the average cost over a sequence of operations rather than the maximum cost of any individual operation. In worst-case analysis, the focus is on the highest possible expense for a single step, which can lead to overly pessimistic bounds for algorithms with occasional costly operations. For instance, a resize in a dynamic array might incur O(n) time in the worst case, but amortized analysis reveals an O(1) average cost per insertion across many operations. In contrast to average-case analysis, amortized analysis is deterministic and does not depend on probabilistic assumptions about input distributions; instead, it guarantees bounds over the worst-case sequence of operations. Average-case analysis computes expected costs under random inputs, which may not hold for adversarial scenarios, whereas amortized analysis ensures performance without such randomness. Amortized analysis is best applied to algorithms and data structures exhibiting variable operation costs, where the total expense over a long sequence remains low despite sporadic high-cost events. It provides tighter, more practical bounds that reflect real-world efficiency in structures like dynamic arrays, particularly in resizing scenarios, but it falls short in guaranteeing bounds for isolated operations, where worst-case analysis is preferable.

History

Early Uses

The concept of amortization originated in finance, where it refers to the process of spreading the repayment of a debt, such as a loan or mortgage, over a period of time through regular smaller payments, rather than a single large outlay. This financial metaphor influenced the terminology in computer science, where "amortized" describes averaging costs across a sequence of operations to account for occasional expensive steps offset by frequent inexpensive ones. In early literature, informal applications of amortized ideas appeared through aggregate analysis, which summed total costs over a sequence of operations and divided by the number to estimate average performance. Knuth's 1973 volume of The Art of Computer Programming on sorting and searching employed such techniques, particularly in evaluating average probe sequences for hash tables, without explicitly naming the approach. Similarly, pre-1980s examples included informal assessments of stack operations, such as multipop commands that empty portions of a stack in one step; analysts computed total costs for a sequence of pushes, pops, and multipops, then derived per-operation averages to bound efficiency. During the 1970s, these ideas gained recognition in evaluating data structure efficiency, notably in hashing schemes where collision resolutions were aggregated over insertions and searches, as detailed in Knuth's work, and in tree balancing mechanisms like 2-3 trees, where analyses by Tarjan and collaborators averaged reorganization costs over sequences of insertions and deletions. A pivotal early application occurred in Tarjan's 1975 analysis of union-find structures, where he implicitly amortized path compression and linking costs over a sequence of operations to achieve nearly linear total time. The key insight driving these early analyses was that worst-case cost spikes, such as tree rotations or hash rehashings, were infrequent and compensated by long runs of low-cost operations, yielding practical average bounds without formal frameworks. This laid groundwork for Tarjan's formalization of amortized analysis as a general method.

Formal Development

The formal development of amortized analysis as a rigorous technique in computer science began with Robert Tarjan's 1985 paper "Amortized Computational Complexity," which introduced a framework for analyzing the complexity of data structures and algorithms over sequences of operations rather than individual ones. Tarjan emphasized the limitations of traditional worst-case or average-case analyses for incremental data structures, where operations vary in cost, and proposed amortization to bound the average time per operation in the worst-case sequence, providing tighter and more realistic performance guarantees. This approach addressed the need for sequence-based bounds, enabling the study of self-adjusting and incremental data structures that perform poorly in isolation but efficiently overall. Tarjan's paper also introduced the banker's (credit-based) and physicist's (potential-based) views of amortization, corresponding to the accounting and potential methods. Sleator and Tarjan subsequently applied this method in their analysis of splay trees, a class of self-adjusting search trees, establishing amortized O(log n) time per operation despite worst-case O(n) costs. This technique proved instrumental for analyzing adaptive structures, where rotations and adjustments complicate direct analysis. In the mid-1980s, amortized analysis evolved further through its application to advanced priority queues, notably in the work of Michael Fredman and Robert Tarjan on Fibonacci heaps from 1984 to 1986. Their 1987 publication (initially presented in 1984) used amortized techniques, including a refined potential function, to achieve O(1) amortized time for insert and decrease-key operations, with O(log n) for extract-min, significantly improving shortest-path and minimum-spanning-tree algorithms. This demonstrated amortized analysis's power in optimizing complex data structures. By the late 1980s and 1990s, amortized analysis became a standard tool, integrated into influential textbooks such as the first edition of Introduction to Algorithms by Thomas Cormen, Charles Leiserson, and Ronald Rivest in 1990, which dedicated a chapter to the topic and its methods. This formalization shifted algorithm design from focusing solely on per-operation worst-case costs to holistic sequence analysis, profoundly influencing the development of efficient data structures like union-find and heaps.

Methods of Amortized Analysis

Aggregate Analysis

Aggregate analysis is the simplest method of amortized analysis, focusing on bounding the total cost of a sequence of operations rather than individual costs. In this approach, the total actual cost C(m) incurred by performing m operations on a data structure is computed, and the amortized cost per operation \hat{c} is then obtained by dividing the total by the number of operations: \hat{c} = \frac{C(m)}{m}. This method assumes that the operations are of similar types and that the sequence length m is sufficiently large to provide a meaningful average, ensuring the analysis captures worst-case behavior over the entire sequence. To apply aggregate analysis, one first identifies a worst-case sequence of m operations and derives an upper bound on C(m). If this bound is O(m), for instance, the amortized cost per operation is O(1). More formally, if C(m) \leq c \cdot m for some constant c, then the amortized cost of each operation is at most c. The proof typically involves showing that the total cost is linear in m through direct summation of operation costs or by induction on the sequence length, accounting for any dependencies between operations. This method is particularly straightforward when operations have uniform characteristics, as it avoids the need for per-operation bookkeeping. However, it is less effective for sequences with highly varying operation costs, where the global average may obscure significant differences in individual performance. Aggregate analysis has been applied to structures like dynamic arrays, where total resizing costs over insertions remain linear.
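As an illustrative check (not part of the original analysis), the following sketch counts actual costs for m appends to a doubling array, assuming each element write or copy costs one unit, and confirms empirically that C(m) stays below 3m:

```python
# Minimal sketch: empirically check the aggregate bound C(m) <= 3m for
# m appends to a doubling dynamic array. Cost model (an assumption of this
# sketch): writing an element costs 1 unit and copying an element during a
# resize costs 1 unit.

def total_append_cost(m: int) -> int:
    capacity, size, total_cost = 1, 0, 0
    for _ in range(m):
        if size == capacity:          # array full: double and copy all elements
            total_cost += size        # pay 1 unit per copied element
            capacity *= 2
        total_cost += 1               # pay 1 unit for the append itself
        size += 1
    return total_cost

if __name__ == "__main__":
    for m in (1, 10, 1_000, 1_000_000):
        c = total_append_cost(m)
        print(f"m={m:>9}  C(m)={c:>9}  C(m)/m={c / m:.3f}")  # ratio stays below 3
```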

Accounting Method

The accounting method, also known as the banker's method, is a technique in amortized analysis where each operation i is assigned an amortized cost \hat{c}_i; when \hat{c}_i exceeds the actual cost c_i, the excess \hat{c}_i - c_i is deposited as fictional credits to cover the costs of future expensive operations. These credits are stored in a metaphorical bank account associated with the data structure, ensuring that the total credit balance remains non-negative after every operation. To apply the method, a fixed amortized cost \hat{c}_i (typically O(1)) is chosen for each type of operation in advance. The analysis proceeds by verifying that, for every operation, the credits deposited (\hat{c}_i - c_i when positive) suffice to pay for any credits spent (c_i - \hat{c}_i when the actual cost exceeds the amortized cost), maintaining a non-negative balance throughout a sequence of n operations. High-cost operations draw from the accumulated credits prepaid by prior low-cost operations, allowing the amortized cost to bound the average performance. The relationship between costs is captured by the equation \hat{c}_i = c_i + (\text{credits deposited by operation } i) - (\text{credits spent by operation } i), ensuring that the total amortized cost over n operations satisfies \sum_{i=1}^n \hat{c}_i \geq \sum_{i=1}^n c_i. This holds because the total actual cost equals the total amortized cost minus the final credit balance (assuming zero initial credits), and the non-negative balance implies the inequality. The proof relies on induction: assuming the non-negative balance invariant holds up to operation i-1, the choice of \hat{c}_i ensures it holds after operation i, bounding the total actual cost by the total amortized cost. For intuition, consider insertions into a dynamic array that doubles in size upon filling: each insertion is charged an amortized cost of 3 units, with 1 unit paying for the insertion and 2 units deposited as credits. When resizing from capacity n to 2n, the cost of copying the n existing elements is covered by spending 1 credit per copied element from the pool deposited by the appends performed since the previous resize, maintaining the invariant without a negative balance. Here, cheaper non-resizing insertions overpay to subsidize the occasional high-cost resize. The accounting method offers an intuitive way to handle data structures with uneven operation costs by treating credits as prepaid resources, making it easier to reason about long-term efficiency compared to worst-case analysis alone. However, it requires careful selection and scaling of the fixed amortized costs to ensure the credit invariant holds for all possible operation sequences.
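The credit bookkeeping can be simulated directly; the sketch below is a hedged illustration using the 3-unit charge from the text and a one-unit-per-write cost model, and it asserts that the bank balance never goes negative over a long run of appends:

```python
# Minimal sketch of the accounting method for dynamic-array appends.
# Each append is charged an amortized cost of 3 units: 1 unit pays for the
# write itself and 2 units are banked as credits; a resize spends 1 banked
# credit per copied element. The assertion checks the invariant that the
# credit balance never goes negative.

def simulate_accounting(m: int) -> None:
    capacity, size, credits = 1, 0, 0
    for _ in range(m):
        credits += 3                      # amortized charge for this append
        if size == capacity:              # resize: copy `size` elements
            credits -= size               # spend 1 credit per copied element
            capacity *= 2
        credits -= 1                      # pay for writing the new element
        size += 1
        assert credits >= 0, "credit balance went negative"

if __name__ == "__main__":
    simulate_accounting(1_000_000)
    print("credit balance stayed non-negative for 1,000,000 appends")
```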

Potential Method

The potential method in amortized analysis employs a potential function \Phi that depends on the state D_i of the data structure after the i-th operation, representing the amount of "stored work" or potential energy accumulated for future operations. The amortized cost \hat{c}_i of the i-th operation is then defined as the actual cost c_i plus the change in potential: \hat{c}_i = c_i + \Phi(D_i) - \Phi(D_{i-1}). To apply the method, one selects a potential function with \Phi(D_i) \geq 0 for all states and \Phi(D_0) = 0 for the initial state; the total amortized cost over m operations then equals the total actual cost plus \Phi(D_m) - \Phi(D_0), so the total amortized cost is an upper bound on the total actual cost. If each \hat{c}_i can be bounded by a constant or slowly growing function, this yields a tight bound on average performance. The key formula establishing this relationship is \sum_{i=1}^m \hat{c}_i = \sum_{i=1}^m c_i + \Phi(D_m) - \Phi(D_0), which telescopes the changes in potential across operations. Designing an appropriate \Phi typically involves making it proportional to a measure of "disorder" in the data structure or its size, such as twice the number of elements minus the capacity in a dynamic array, to capture how operations build up or release stored work. This approach offers the advantage of handling complex data structures where state changes vary significantly between operations, providing a more flexible framework than fixed-charge methods. However, a primary disadvantage is the difficulty in identifying a suitable \Phi that satisfies the required properties for all operations. To prove amortized bounds using the potential method, one demonstrates that \hat{c}_i \leq k for some constant k (or other desired bound) and all i, by leveraging the properties of \Phi to ensure that increases in potential during cheap operations offset decreases during expensive ones.
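For a concrete illustration, consider the multipop stack mentioned under Early Uses with the common choice \Phi = number of elements on the stack. The sketch below (an illustrative cost model, one unit per push or pop) checks that every operation's amortized cost stays at most 2:

```python
# Minimal sketch of the potential method on a stack with multipop.
# Assumptions of this sketch: Phi(stack) = number of elements, each push or
# pop costs 1 unit, and multipop(k) pops min(k, size) elements. The per-operation
# amortized cost  actual + Phi_after - Phi_before  should never exceed 2
# under this potential.

import random

def run(num_ops: int = 10_000) -> None:
    stack = []
    for _ in range(num_ops):
        phi_before = len(stack)
        op = random.choice(["push", "pop", "multipop"])
        if op == "push":
            stack.append(0)
            actual = 1
        elif op == "pop":
            actual = 1 if stack else 0
            if stack:
                stack.pop()
        else:                                   # multipop(k)
            k = random.randint(1, 10)
            popped = min(k, len(stack))
            del stack[len(stack) - popped:]
            actual = popped
        amortized = actual + len(stack) - phi_before
        assert amortized <= 2, "amortized cost exceeded the claimed bound"

if __name__ == "__main__":
    run()
    print("every operation had amortized cost at most 2")
```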

Examples

Dynamic Array

A dynamic array, also known as a resizable array, is a data structure that supports efficient append operations by automatically increasing its capacity when it becomes full. Typically, when the array is full, its size is doubled, requiring the copying of all existing elements to a new larger array, which incurs an O(n) cost where n is the current number of elements. Most append operations run in O(1) time by simply adding an element to the next available slot, but occasional resizing operations create cost spikes. Amortized analysis demonstrates that the average cost per append remains O(1) over a sequence of m operations. Using the aggregate method, the total cost for m appends starting from an empty array is O(m). Resizing occurs when the array reaches sizes that are powers of two (1, 2, 4, ..., up to at most m), and the copying costs form a geometric series summing to less than 2m; adding the m individual append costs yields a total under 3m, so the amortized cost is O(1) per operation. The accounting method provides an alternative perspective by assigning credits to operations. Each append is charged 3 units: 1 unit covers the actual insertion cost, and 2 units are saved as credits for future resizes. When resizing from capacity n to 2n, the cost of copying the n existing elements is covered by the 2 × (n/2) = n credits deposited by the n/2 appends performed since the previous resize. This ensures no operation exceeds its amortized budget of 3 units. The potential method formalizes this using a potential function Φ defined as Φ = 2 × (number of elements) - capacity, which is zero for an empty array of zero capacity and remains non-negative because the doubling policy keeps the array at least half full. The amortized cost â of an operation is the actual cost c plus the change in potential ΔΦ. For a non-resizing append, c = 1 and ΔΦ = 2, so â = 3. For a resizing append from capacity n (full) to 2n, c ≈ n (copying n elements plus writing the new one) and ΔΦ = 2 - n, so â ≈ 3. Thus, every append has an amortized cost of O(1). A resizing factor greater than 1 ensures that the number of resizes over m operations is O(log m), keeping the total cost linear in m and the amortized cost per operation O(1); a factor of 2 is commonly used for its simplicity and its balance between resize frequency and copying overhead. To handle deletions efficiently and prevent excessive memory usage, dynamic arrays often include contraction: when the number of elements falls to a fraction (e.g., one-quarter) of the capacity, the array is halved by copying the elements to a smaller array, incurring O(n) cost. Amortized analysis, using a potential function like Φ = 2n - L if the array is at least half full or L/2 - n if it is less than half full (where n is the number of elements and L is the capacity), shows that both insertions and deletions maintain O(1) amortized time, keeping utilization between 25% and 100%.
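A minimal sketch of such a dynamic array, with doubling on append and halving when occupancy falls to one quarter, appears below; the class and method names and the use of a plain Python list as backing storage are illustrative choices, not any standard library's implementation:

```python
# Minimal sketch of a dynamic array with growth and contraction as described
# above: capacity doubles when full and halves when occupancy drops to one
# quarter, so utilization stays between 25% and 100% and both append and
# pop run in O(1) amortized time.

class DynamicArray:
    def __init__(self) -> None:
        self._capacity = 1
        self._size = 0
        self._data = [None] * self._capacity   # backing storage

    def _resize(self, new_capacity: int) -> None:
        new_data = [None] * new_capacity       # O(n) copy: the "expensive" step
        for i in range(self._size):
            new_data[i] = self._data[i]
        self._data = new_data
        self._capacity = new_capacity

    def append(self, value) -> None:
        if self._size == self._capacity:       # full: double the capacity
            self._resize(2 * self._capacity)
        self._data[self._size] = value
        self._size += 1

    def pop(self):
        if self._size == 0:
            raise IndexError("pop from empty array")
        self._size -= 1
        value = self._data[self._size]
        self._data[self._size] = None
        # quarter full: halve the capacity (never below 1)
        if 0 < self._size <= self._capacity // 4:
            self._resize(max(1, self._capacity // 2))
        return value

    def __len__(self) -> int:
        return self._size
```

Under the potential functions described above, each append and pop of this sketch has O(1) amortized cost despite the occasional O(n) resize.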

Two-Stack Queue

A queue can be implemented using two stacks, known as the input stack and the output stack, to simulate first-in, first-out (FIFO) behavior with amortized constant-time operations. The enqueue operation simply pushes the new element onto the input stack, which takes constant time, O(1). The dequeue operation pops the top element from the output stack if it is not empty, also in O(1) time; however, if the output stack is empty, all elements are popped from the input stack and pushed onto the output stack in reverse order (taking linear time, O(n), where n is the number of elements being transferred), after which the top element is popped from the output stack. This reversal ensures the correct FIFO order, as the earliest elements pushed onto the input stack end up on top of the output stack and are dequeued first. Each element is moved at most twice across the stacks during its lifetime: once when enqueued to the input stack and once when transferred to the output stack. In aggregate analysis, consider a sequence of m enqueue and dequeue operations on the queue. Each element is enqueued once (one push to the input stack), transferred at most once (one pop from the input stack and one push to the output stack), and dequeued at most once (one pop from the output stack); with e enqueues and d dequeues (e + d = m), this gives at most 3e + d ≤ 3m stack operations in total. Thus, the total cost is O(m), yielding an amortized cost of O(1) per operation. This holds even for unbalanced sequences of enqueues and dequeues, as long as the queue is not dequeued when empty, because each element contributes a bounded number of stack operations independently of the interleaving. The accounting method assigns credits to operations to cover future costs. Charge three credits for each enqueue: one to pay for the immediate push to the input stack and two stored with the element for its later transfer. A standard dequeue (from a non-empty output stack) is charged one credit, which pays for the pop. When the output stack is empty and a transfer occurs for k elements, the 2k credits stored with those elements pay for the k pops and k pushes, while the current dequeue pays its own final pop with one credit. This ensures no operation runs a deficit, confirming an amortized cost of O(1) per operation. For the potential method, define the potential function \Phi as twice the number of elements in the input stack, \Phi = 2 \times |\text{input stack}|. The amortized cost \hat{c} for an operation is the actual cost c plus the change in potential \Delta \Phi.
  • For enqueue: actual cost c = 1 (push), \Delta \Phi = +2, so \hat{c} = 1 + 2 = 3.
  • For dequeue from a non-empty output stack: actual cost c = 1 (pop), \Delta \Phi = 0, so \hat{c} = 1.
  • For dequeue triggering a transfer of k elements: actual cost c = 2k + 1 (k pops + k pushes + 1 final pop), \Delta \Phi = -2k (the input stack empties), so \hat{c} = 2k + 1 - 2k = 1.
The initial potential is zero and the potential is always non-negative, so the total actual cost over m operations is at most the total amortized cost, which is at most 3m (or less, depending on the mix), yielding O(1) amortized time per operation. This two-stack approach is a fundamental technique in data structure design, often used in educational contexts and in software implementations where stacks are the available primitive, achieving amortized O(1) time for both enqueue and dequeue while using constant extra space beyond the stacks themselves.
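A minimal sketch of the two-stack queue follows, assuming Python lists serve as the stacks (append as push, pop as pop); the class and method names are illustrative:

```python
# Minimal sketch of a FIFO queue built from two stacks. Enqueue pushes onto
# the input stack; dequeue pops from the output stack, reversing the input
# stack into it only when the output stack is empty. Each element is pushed
# and popped at most twice, giving O(1) amortized time per operation.

class TwoStackQueue:
    def __init__(self) -> None:
        self._input = []    # receives newly enqueued elements
        self._output = []   # holds elements in dequeue (FIFO) order

    def enqueue(self, value) -> None:
        self._input.append(value)           # O(1) push

    def dequeue(self):
        if not self._output:
            if not self._input:
                raise IndexError("dequeue from empty queue")
            while self._input:              # O(n) transfer, paid for by
                self._output.append(self._input.pop())  # earlier enqueues
        return self._output.pop()           # O(1) pop

    def __len__(self) -> int:
        return len(self._input) + len(self._output)
```

For example, enqueueing 1, 2, 3 and then calling dequeue three times returns 1, 2, 3 in that order, with the single O(n) transfer amortized over the preceding enqueues.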

Applications

Self-Adjusting Data Structures

Self-adjusting data structures are designed to improve performance by dynamically reorganizing themselves based on the sequence of operations performed, leveraging amortized analysis to guarantee efficient behavior averaged over a series of accesses. Unlike static structures, these adapt to access patterns, such as frequent item requests, by performing adjustments like rotations or rearrangements that may be costly individually but yield overall savings. Amortized analysis, particularly the potential method, proves that the total cost remains low, enabling practical efficiency in scenarios with non-uniform access patterns. A prominent example is the splay tree, a self-adjusting binary search tree where each access, insertion, or deletion involves a splaying operation that rotates the accessed node to the root through a series of single or double rotations. This self-adjustment amortizes the cost of searches by bringing recently accessed nodes closer to the root, reducing future access times for related elements. Using the potential method, Sleator and Tarjan showed that the amortized time for each operation is O(log n), where n is the number of nodes, by defining a potential function based on node ranks (logarithms of subtree weights). The potential captures the "balance" improvement from rotations, ensuring that expensive splays are offset by cheaper subsequent operations. Another classic self-adjusting structure is the move-to-front (MTF) heuristic applied to linear lists for self-organizing search. Upon accessing an item, MTF moves it to the front of the list, potentially shifting other elements. Using a potential function based on the number of inversions between MTF's list and an optimal offline ordering, amortized analysis shows that MTF's total cost over a request sequence is at most about twice that of the optimal offline strategy, plus a term bounded by the initial list length; for sequences with strong access locality, this yields low amortized cost per search. In general, the potential method for analyzing self-adjusting structures often relies on functions measuring rank or depth differences, which increase during cheap operations to "pay for" costly adjustments later. This technique, pioneered by Sleator and Tarjan in their work on splay trees and list update rules, provides amortized bounds that enable "pay as you go" reasoning without requiring worst-case per-operation guarantees. Recent applications of amortized analysis to self-adjusting data structures extend to caching mechanisms, such as variants of LRU that incorporate frequency or recency adaptation, where MTF-like rules maintain eviction lists with competitive amortized costs against optimal offline policies. Post-2010 developments include automated tools for verifying the amortized complexity of self-adjusting structures such as splay trees, splay heaps, and pairing heaps. These structures offer practical speedups in search-heavy scenarios, such as dictionary implementations or real-time query systems, by exploiting access locality to achieve near-logarithmic or constant amortized times, often empirically competitive with balanced trees.
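A minimal sketch of the move-to-front heuristic on a plain list (an illustration, not a tuned implementation) shows how repeated accesses migrate items toward the head and become cheaper; the function name and cost model (1-based search position) are assumptions of the sketch:

```python
# Minimal sketch of the move-to-front (MTF) self-organizing list heuristic.
# Accessing an item costs its 1-based position (linear search); the item is
# then moved to the front so that future accesses to it are cheaper.

def mtf_access(items: list, key) -> int:
    position = items.index(key) + 1            # actual cost: 1-based search position
    items.insert(0, items.pop(position - 1))   # move the accessed item to the front
    return position

if __name__ == "__main__":
    lst = ["a", "b", "c", "d", "e"]
    # A locality-heavy request sequence: repeated accesses become cheap.
    total = sum(mtf_access(lst, k) for k in ["e", "e", "e", "d", "d", "a"])
    print(lst, "total access cost:", total)
```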

Union-Find and Priority Queues

The union-find data structure, also known as disjoint-set union, maintains a partition of elements into disjoint subsets and supports two primary operations: union, which merges two subsets, and find, which determines the representative (root) of the subset containing a given element. Without optimizations, naive implementations of these operations can take O(n) time per operation in the worst case, leading to quadratic total time for m operations on n elements. However, combining union by rank (which links trees by attaching the root of lower rank beneath the root of higher rank) with path compression (which flattens paths during find by pointing traversed nodes directly at the root) achieves an amortized time of O(α(n)) per operation, where α(n) is the inverse Ackermann function, which grows so slowly that it is effectively constant for all practical n. This bound ensures that a sequence of m operations runs in nearly linear total time, specifically O(m α(n) + n). The amortized analysis for union-find employs the potential method, defining a potential function based on the structure of the forest of trees representing the subsets, such as the sum of ranks or levels along paths to roots. Path compression reduces the potential by shortening paths, while union by rank limits increases in potential during merges, ensuring that the amortized cost remains bounded by the inverse Ackermann growth. Recent optimizations extend this to parallel settings; for instance, work-efficient parallel union-find algorithms developed since 2000 achieve similar amortized bounds while exposing parallelism, enabling efficient processing on multicore systems. GPU implementations further accelerate connected-components labeling by leveraging massive parallelism in union-find traversals, maintaining amortized efficiency through optimized pointer jumping and atomic operations. In priority queues, Fibonacci heaps provide an advanced implementation supporting insert, decrease-key, extract-min, and related operations with superior amortized bounds compared to binary heaps. Insert and decrease-key run in O(1) amortized time, the latter through lazy cutting of subtrees without immediate restructuring, while extract-min achieves O(log n) amortized time via delayed consolidation of trees during extraction. The potential method analyzes these operations by defining a potential equal to the number of trees in the root list plus twice the number of marked nodes, capturing the "laziness" in the structure; consolidations and cascading cuts pay off potential accumulated by prior cheap operations like insert and decrease-key. Naive priority queues without such amortization can degrade to O(n) per operation, but Fibonacci heaps ensure that the total time for m operations, including n extract-min operations, is O(m + n log n), making them ideal for graph algorithms. Union-find and Fibonacci heaps find common application in graph algorithms: union-find efficiently handles connectivity queries and merges during Kruskal's algorithm, performing the connectivity work in nearly linear time beyond the cost of sorting edges, while Fibonacci heaps optimize Dijkstra's shortest paths and Prim's minimum spanning tree by supporting fast decrease-key for edge relaxations.
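A minimal sketch of union-find with union by rank and path compression follows; the array-based representation, class name, and method names are illustrative choices:

```python
# Minimal sketch of a disjoint-set (union-find) structure with union by rank
# and path compression. A sequence of m operations on n elements runs in
# O(m * alpha(n)) amortized time, where alpha is the inverse Ackermann function.

class UnionFind:
    def __init__(self, n: int) -> None:
        self.parent = list(range(n))   # each element starts as its own root
        self.rank = [0] * n            # upper bound on tree height

    def find(self, x: int) -> int:
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:  # path compression: point nodes at the root
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x: int, y: int) -> bool:
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False               # already in the same subset
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx            # attach the lower-rank tree to the higher
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True
```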
