
Potential method

The potential method is a fundamental technique in amortized analysis within computer science, employed to assess the average time and space complexity of a sequence of operations on data structures. It was popularized by Daniel Sleator and Robert Tarjan in 1985, through their analysis of splay trees and Tarjan's survey of amortized computational complexity. It defines a potential function \Phi that maps each configuration of the data structure to a non-negative real number, interpreting this value as "stored credits" or potential energy accumulated from prior operations to offset future expensive ones. The amortized cost of an individual operation is calculated as the actual cost plus the change in potential \Delta \Phi = \Phi(\text{final state}) - \Phi(\text{initial state}), yielding the relation \sum c_i = \sum \hat{c}_i + \Phi(\text{initial}) - \Phi(\text{final}); under the typical conditions \Phi(\text{initial}) = 0 and \Phi(\text{final}) \geq 0, this bounds the total actual cost by \sum c_i \leq \sum \hat{c}_i.

This approach contrasts with worst-case analysis by averaging costs over multiple operations: cheap operations increase the potential (effectively overcharging to build reserves) and expensive ones decrease it (drawing on those reserves). Typically, \Phi is set to zero for the empty initial state and designed to remain non-negative. If every amortized cost is bounded by a constant, the average actual cost per operation is also O(1).

The method's flexibility allows tailoring \Phi to specific structures, such as using the number of one bits in a binary counter to amortize increment costs to O(1), or the potential \Phi = 2n - m (the number of elements beyond half capacity) in dynamic arrays to handle resizing efficiently. Common applications include analyzing self-adjusting data structures like splay trees, union-find with path compression, and resizable hash tables, where sporadic high costs (e.g., node splits or doublings) are smoothed out. By prioritizing a well-chosen \Phi that closely tracks the structure's "health" or discrepancy from an ideal state, the potential method provides tight bounds that reveal the practical efficiency of algorithms beyond isolated worst-case scenarios.

Fundamentals

Definition and Purpose

The potential method is a technique within amortized analysis that assigns a real-valued potential to the states of a data structure or algorithm, representing a form of stored "credit" or "energy" that can offset the costs of future operations. This approach smooths out the variability in operation costs by treating excess work performed during inexpensive steps as accumulated potential, which is later drawn upon to cover the expenses of more costly operations. By defining the potential in this way, the method enables a more realistic assessment of average performance over a sequence of operations, rather than focusing solely on isolated worst-case scenarios.

The primary purpose of the potential method is to establish upper bounds on the amortized cost of operations in systems where individual steps exhibit significant cost fluctuations, such as resizing in dynamic arrays or balancing in self-adjusting trees. It is particularly valuable when traditional worst-case analysis per operation yields overly pessimistic bounds, as it allows analysts to prove that the average cost remains low across any sequence of inputs, thereby justifying the efficiency of algorithms that occasionally incur high expenses. This technique supports the design and analysis of structures that adapt dynamically, ensuring their practical performance aligns with theoretical guarantees.

The potential method was formally introduced by Robert Tarjan in 1985 as a key component of amortized analysis frameworks, building on earlier implicit uses in data structure research. Tarjan's work emphasized its role in providing robust measures of complexity for amortized settings, influencing subsequent developments in algorithm analysis. At its core, the intuition behind the method is that cheaper operations "prepay" for expensive ones by building up potential, much like charging a battery during low-demand periods to power high-demand surges later.

Potential Function Basics

The potential function, denoted \Phi, is a function that maps each state S of a data structure to a non-negative real number \Phi(S). It is typically defined such that \Phi(S_0) = 0 for the initial state S_0. To facilitate analysis, \Phi must satisfy specific requirements: it is bounded below by zero, ensuring \Phi(S_i) \geq 0 for all reachable states S_i, and it must be straightforward to compute from the current state, often via simple metrics like element counts or structural differences. The potential function is structured to represent stored work within the system, accumulating potential during inexpensive sequences of operations to offset future expenses. A fundamental property links \Phi to operation costs: the amortized cost \hat{c}_i of the i-th operation is \hat{c}_i = c_i + \Delta\Phi_i, with actual cost c_i and change in potential \Delta\Phi_i = \Phi(S_i) - \Phi(S_{i-1}). The selection of \Phi follows a guiding principle: it is engineered to rise during low-cost operations to build reserves and fall during high-cost ones to draw on those reserves, thereby smoothing cost distribution across operations.
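
As a concrete illustration, the following Python sketch (a minimal toy; the helper name and the example potential are assumptions for illustration, not a standard API) computes amortized costs \hat{c}_i = c_i + \Phi(S_i) - \Phi(S_{i-1}) from a recorded trace of states and actual costs:

```python
# A minimal sketch of potential-based bookkeeping (the names here are
# illustrative, not from any library). Given a potential function phi and
# a trace of (state_before, state_after, actual_cost) triples, compute
# each operation's amortized cost  c_hat = c + phi(after) - phi(before).

def amortized_costs(phi, trace):
    """Return the amortized cost of each operation in the trace."""
    return [c + phi(after) - phi(before) for before, after, c in trace]

# Toy example: a counter whose potential is simply its current value.
phi = lambda state: state
trace = [(0, 1, 1), (1, 2, 1), (2, 0, 2)]  # two cheap ops, one costly reset
print(amortized_costs(phi, trace))         # -> [2, 2, 0]
```

Note how the expensive reset has amortized cost 0: its actual cost is fully paid by the potential deposited during the two cheap increments.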

Core Concepts

Amortized versus Actual Cost

In the potential method of amortized analysis, the actual cost c_i represents the precise measure of resources, such as time or space, consumed by the i-th operation in a sequence performed on a data structure. This cost captures the immediate, unaveraged resource usage without considering the broader context of the operation sequence. The amortized cost \hat{c}_i, in contrast, provides an averaged perspective by incorporating the change in the potential function \Phi, defined as
\hat{c}_i = c_i + \Phi(D_i) - \Phi(D_{i-1}),
where D_i denotes the data structure's state following the i-th operation and D_{i-1} the state preceding it. This definition adjusts the actual cost by the potential difference, allowing cheap operations to subsidize expensive ones through stored potential.
Over a sequence of n operations, the total actual cost \sum_{i=1}^n c_i relates to the total amortized cost \sum_{i=1}^n \hat{c}_i via the telescoping identity
\sum_{i=1}^n c_i = \sum_{i=1}^n \hat{c}_i - \Phi(D_n) + \Phi(D_0).
If \Phi is non-negative and \Phi(D_0) = 0, this yields \sum_{i=1}^n c_i \leq \sum_{i=1}^n \hat{c}_i, since \Phi(D_n) \geq 0, ensuring the total actual cost is bounded above by the total amortized cost.
This relationship interprets the potential function as a conceptual "bank account" for the data structure, where positive changes in \Phi during low-cost operations deposit credits, and negative changes during high-cost operations withdraw them to cover excess actual costs. Thus, the amortized cost reflects the long-term resource efficiency across the sequence, even if individual actual costs vary significantly.
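
The following sketch verifies the telescoping identity numerically for a toy structure with cheap appends and an occasional expensive clear, taking \Phi to be the buffer's length (the scenario and names are illustrative assumptions):

```python
# Minimal numerical check of the telescoping identity
#   sum(c_i) = sum(c_hat_i) - Phi(D_n) + Phi(D_0)
# for a toy buffer with cheap appends (cost 1) and an occasional
# expensive clear (cost = current size), using Phi = buffer length.

import random

buf_size = 0                      # Phi(D_0) = 0 for the empty buffer
total_actual = total_amortized = 0
for _ in range(10_000):
    if random.random() < 0.9:     # append: actual cost 1, Phi rises by 1
        buf_size += 1
        cost, delta_phi = 1, 1
    else:                         # clear: actual cost = size, Phi drops to 0
        cost, delta_phi = buf_size, -buf_size
        buf_size = 0
    total_actual += cost
    total_amortized += cost + delta_phi

assert total_actual == total_amortized - buf_size   # Phi(D_n) = buf_size
```

Every append has amortized cost 2 and every clear has amortized cost 0, so the amortized total bounds the actual total regardless of where the clears fall.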

Aggregate Analysis Connection

Aggregate analysis provides a foundational approach to amortized analysis by bounding the total cost of a sequence of n operations and dividing by n to obtain the amortized cost per operation. In this method, if the total cost C_n over n operations satisfies C_n = O(n), the amortized cost per operation is O(1), even if individual operations exhibit higher worst-case costs. This technique directly computes or estimates the sum of actual costs without per-operation adjustments, making it suitable for uniform operation sequences where costs can be aggregated globally.

The potential method connects to aggregate analysis by offering a mechanism to derive similar total cost bounds through a potential function \Phi, which tracks the "stored work" in the data structure's state. Specifically, the amortized cost \hat{c}_i for the i-th operation is defined as \hat{c}_i = c_i + \Phi(D_i) - \Phi(D_{i-1}), where c_i is the actual cost and D_i is the state after the operation. Summing over n operations yields \sum_{i=1}^n \hat{c}_i = \sum_{i=1}^n c_i + \Phi(D_n) - \Phi(D_0); if \Phi is non-negative and bounded (e.g., 0 \leq \Phi(D_i) \leq k for some constant k) and each \hat{c}_i = O(1), then \sum c_i = O(n), mirroring aggregate bounds and proving O(1) amortized time despite occasional high-cost operations. The potential method thus refines aggregate-style proofs by accounting for state-dependent cost distribution.

Unlike direct aggregate analysis, which relies on summing actual costs uniformly, the potential method handles varying operation costs more flexibly by temporally distributing excess charges across the sequence via changes in potential. Aggregate analysis may require case-by-case total cost estimation for different sequences, whereas potentials provide a systematic way to "prepay" for future expensive operations through incremental potential increases. A key result linking the methods is that if the amortized costs satisfy \hat{c}_i \leq f(i) for all i, then the average actual cost is bounded by \frac{1}{n} \sum_{i=1}^n c_i \leq \frac{1}{n} \sum_{i=1}^n f(i) + \frac{\Phi(D_0) - \Phi(D_n)}{n} = O(\max_i f(i)), assuming bounded \Phi. This demonstrates how potential-based amortized bounds imply aggregate-style average costs, specializing to O(1) when f(i) is constant.
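
The binary counter, analyzed in detail later, also admits a purely aggregate argument; the sketch below (a standard textbook-style count, written here as an illustration) sums the flips of each bit directly and confirms the total stays below 2m over m increments:

```python
# Aggregate-style count: total bit flips over m binary counter increments,
# summed directly with no potential function. Bit i flips floor(m / 2^i)
# times, so the total is below 2m, i.e. O(1) flips per increment on average.

def total_flips(m):
    flips, i = 0, 0
    while (1 << i) <= m:
        flips += m >> i      # bit i flips floor(m / 2^i) times
        i += 1
    return flips

m = 1 << 16
print(total_flips(m), "<", 2 * m)   # 131071 < 131072
```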

Analysis Methods

Credit Assignment via Potentials

In the potential method of amortized analysis, credit assignment operates through a potential function Φ that quantifies stored "credit" or prepaid work within the data structure's state. Cheap operations, where the actual cost c_i is less than the assigned amortized cost â_i, generate excess credit by increasing the potential (ΔΦ_i > 0), effectively paying extra to build a reserve for future expenses. Conversely, expensive operations, with c_i > â_i, consume this credit by decreasing the potential (ΔΦ_i < 0), drawing on the accumulated reserve to cover their higher actual costs without exceeding the amortized bound. Formally, each unit of potential represents one unit of computational work prepaid by prior operations, ensuring that the amortized cost â_i = c_i + ΔΦ_i covers c_i even during temporary spikes where c_i exceeds â_i. This mechanism allows the total actual cost over a sequence of n operations to be bounded by the sum of amortized costs, as the potential adjustments account for fluctuations in individual operation expenses. The potential function Φ is typically defined as a non-negative value based on the data structure's configuration, such as the number of certain elements or inversions, to reflect the available credit.

To prove amortized bounds, the potential function is chosen such that c_i + ΔΦ_i ≤ â_i for each operation i, where â_i is the desired amortized bound (often a constant). Equivalently, ΔΦ_i ≤ â_i - c_i, ensuring the amortized cost remains within the desired limit while the potential tracks credit flow. Over a sequence of operations, this leads to a telescoping sum: the total actual cost \sum_{i=1}^n c_i = \sum_{i=1}^n â_i - (Φ(D_n) - Φ(D_0)), where D_k denotes the data structure state after k operations. Since Φ is non-negative and usually starts at zero (Φ(D_0) = 0), with Φ(D_n) ≥ 0, the total actual cost is at most the sum of the amortized costs, establishing the bound.

A common pitfall in applying the potential method is failing to ensure the potential remains non-negative throughout the analysis; if the final potential goes negative, the telescoping sum adds that deficit back, so the total actual cost can exceed the claimed amortized total and the bound no longer holds. Analysts must verify that Φ(D_i) ≥ 0 for all states D_i encountered, often by constructing Φ to naturally count structural features like unused capacity or imbalances that cannot drop below zero. This requirement underscores the method's reliance on a carefully crafted potential function to maintain the integrity of credit assignment.
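
A simple way to guard against this pitfall is to test a proposed Φ mechanically on concrete runs; the following sketch (an illustrative helper, not a library API) asserts both non-negativity and the claimed per-operation amortized bound:

```python
# Sketch of an invariant checker for a proposed potential function.
# Along one concrete run it verifies that Phi stays non-negative and
# that every amortized cost stays within the claimed bound a_hat.

def check_potential(phi, states_and_costs, a_hat, phi0=0):
    prev = phi0
    for state, cost in states_and_costs:
        assert phi(state) >= 0, "potential went negative"
        assert cost + phi(state) - prev <= a_hat, "amortized bound exceeded"
        prev = phi(state)
    return True

# Toy stack with Phi = number of items and claimed amortized bound 2:
run = [("a", 1), ("ab", 1), ("", 2)]        # push, push, pop both items
print(check_potential(len, run, a_hat=2))   # -> True
```

Such a check cannot prove a bound, but a single failing run refutes a candidate Φ immediately.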

Worst-Case Amortized Bounds

In amortized analysis, worst-case input sequences pose a significant challenge because individual operations can exhibit high actual costs, such as O(log n) or worse, while the overall sequence maintains a lower average performance. The potential method mitigates this by defining a potential function Φ that captures "stored work" or credit accumulated during low-cost operations to offset future high-cost ones, thereby smoothing costs across the sequence even under adversarial inputs. This approach ensures that the amortized cost per operation remains bounded, providing a worst-case guarantee for the entire sequence rather than isolated steps.

To derive worst-case amortized bounds, the potential function Φ is constructed to increase gradually along the most expensive paths in the computation, ensuring that the amortized cost \hat{c}_i = c_i + \Phi(D_i) - \Phi(D_{i-1}) for the i-th operation satisfies \hat{c}_i = O(1) (or a desired bound) in the worst case, where c_i is the actual cost and D_i denotes the data structure state after the i-th operation. Typically, Φ is non-negative, with Φ(D_0) = 0 at the initial state, and it is chosen such that changes in potential align with the structure's "imbalance" or unused capacity, preventing unbounded growth. This construction relies on the credit assignment principle, where excess amortized costs from cheap operations build potential to cover deficits in expensive ones. By bounding the maximum \hat{c}_i over all possible sequences, the method guarantees that no operation's amortized cost exceeds the target, even in adversarial scenarios.

The proof technique for establishing these bounds involves aggregating over a sequence of n operations: the total actual cost satisfies \sum_{i=1}^n c_i = \sum_{i=1}^n \hat{c}_i + \Phi(D_0) - \Phi(D_n). If each \hat{c}_i = O(1) in the worst case and \Phi(D_0) - \Phi(D_n) is bounded above (as it is when Φ(D_0) = 0 and Φ is non-negative), then \sum_{i=1}^n c_i \leq n \cdot \max_i \hat{c}_i + O(1), yielding an O(1) amortized bound per operation for the sequence. This holds under worst-case assumptions, as the potential's design ensures the difference \Phi(D_0) - \Phi(D_n) remains controlled regardless of input ordering. For instance, in analyzing binary counter increments, the potential function is defined as the number of 1 bits in the counter, which increases by at most 1 per operation but drops sharply during costly carry propagations, keeping the amortized cost O(1) despite increments that flip many bits in the worst case.

Illustrative Examples

Dynamic Array Operations

A dynamic array, also known as a resizable array, maintains a contiguous block of memory with a current capacity m that can grow to accommodate insertions. When the number of elements n reaches m, the array doubles its capacity to 2m by allocating a new block and copying all n elements into it, incurring an actual cost of \Theta(n) for the copy operation plus \Theta(1) for the insertion itself.

To analyze the amortized cost of insertions using the potential method, define the potential function as \Phi = 2n - m, where n is the current number of elements and m is the current capacity. Under the doubling strategy this function is non-negative once the array is non-empty, since the array stays at least half full: immediately after a resize at n = m, the new state has n = m + 1 elements and capacity 2m, and ordinary insertions only increase n.

For a typical insertion when the array is not full (n < m), the actual cost is c_i = 1, and the change in potential is \Delta \Phi = 2, since n increases by 1 while m remains fixed. The amortized cost is thus \hat{c}_i = c_i + \Delta \Phi = 1 + 2 = 3 = O(1). When an insertion triggers a resize (at n = m), the actual cost is c_i = m + 1 due to copying m elements plus the insertion. Here, \Delta \Phi = (2(m+1) - 2m) - (2m - m) = 2 - m, so the amortized cost is \hat{c}_i = (m + 1) + (2 - m) = 3 = O(1), spreading the resizing expense across prior insertions via the drop in potential.

Over a sequence of n insertions starting from empty, the total amortized cost is \sum \hat{c}_i = 3n = O(n), and the total actual cost satisfies \sum c_i = \sum \hat{c}_i - \Phi_{\text{final}} + \Phi_{\text{initial}} \leq 3n = O(n), since \Phi_{\text{initial}} = 0 and the final potential \Phi_{\text{final}} = 2n - m \geq 0; thus, the amortized cost per insertion is O(1).
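
The sketch below instruments a doubling array with this potential, using \Phi = \max(0, 2n - m) so the empty initial state is handled (the class and cost model are illustrative assumptions), and checks that every append's amortized cost is at most 3:

```python
# Illustrative instrumented dynamic array with capacity doubling. It
# tracks the modelled actual cost (copies plus the write itself) and the
# potential Phi = max(0, 2n - m); the max(0, ...) only matters for the
# empty initial state. Each append's amortized cost is at most 3.

class DynArray:
    def __init__(self):
        self.buf = [None]          # initial capacity m = 1
        self.n = 0                 # number of stored elements

    def phi(self):
        return max(0, 2 * self.n - len(self.buf))

    def append(self, x):
        cost = 0
        if self.n == len(self.buf):            # full: double and copy
            self.buf = self.buf + [None] * len(self.buf)
            cost += self.n                     # modelled cost of n copies
        self.buf[self.n] = x
        self.n += 1
        cost += 1                              # the insertion itself
        return cost

a = DynArray()
prev_phi = a.phi()
for i in range(1000):
    c = a.append(i)
    assert c + a.phi() - prev_phi <= 3         # amortized cost <= 3
    prev_phi = a.phi()
```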

Multi-Pop Stack Behavior

The multi-pop stack is a data structure that extends the standard stack by supporting an additional operation to remove multiple elements at once, allowing for analysis of operations with varying costs under the potential method. The operations include push, which adds an element to the top of the stack at an actual cost of 1; single pop, which removes the top element at an actual cost of 1; and multi-pop of k items, which removes up to k elements from the top (or fewer if the stack has fewer than k elements) at an actual cost of k, assuming k does not exceed the stack size for simplicity in the analysis.

To apply the potential method, the potential function Φ is defined as the number of items currently in the stack, which starts at 0 for an empty stack and remains nonnegative throughout any sequence of operations. Each push operation increases Φ by 1, as it adds one item, while a single pop decreases Φ by 1, and a multi-pop of k items decreases Φ by k. This potential captures the "stored work" associated with elements that may be removed later, enabling the method to account for the variability in pop costs.

The amortized costs derived from this potential function reveal how the extra cost charged to pushes subsidizes pops. Specifically, the amortized cost of each push is 2, as the actual cost of 1 is augmented by the increase in potential of 1, effectively paying for both the push itself and a future pop of that item. For a multi-pop of k items, the amortized cost is 0, since the actual cost of k is exactly offset by the decrease in potential of k; equivalently, each popped item is paid for by the credit deposited when it was pushed. A single pop follows similarly, with an amortized cost of 0. This assignment ensures that expensive multi-pops are covered by the overcharges on preceding pushes without undercharging any operation.

For a sequence consisting of n push operations interspersed with single pops and multi-pops, the total amortized cost is at most 2n, as only the pushes contribute to the amortized expense while pops and multi-pops contribute 0. Since the total actual cost is bounded above by the total amortized cost and the number of items popped cannot exceed n, this establishes an O(1) amortized cost per operation, demonstrating the efficiency of the data structure over long sequences despite occasional costly multi-pops.
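
The following sketch instruments such a stack with the potential Φ = number of items and checks the derived amortized costs of 2 per push and 0 per pop on a concrete sequence (the class is an illustrative assumption):

```python
# Illustrative multipop stack instrumented with the potential Phi =
# number of items on the stack; it checks the amortized costs derived
# above (2 per push, 0 per multipop) on a concrete sequence.

class MultiPopStack:
    def __init__(self):
        self.items = []

    def phi(self):
        return len(self.items)

    def push(self, x):
        self.items.append(x)
        return 1                          # actual cost

    def multipop(self, k):
        k = min(k, len(self.items))       # remove at most k items
        del self.items[len(self.items) - k:]
        return k                          # actual cost = items removed

s = MultiPopStack()
prev = s.phi()
ops = [("push", 1), ("push", 2), ("push", 3), ("multipop", 2), ("multipop", 5)]
for op, arg in ops:
    cost = s.push(arg) if op == "push" else s.multipop(arg)
    c_hat = cost + s.phi() - prev
    prev = s.phi()
    assert c_hat == (2 if op == "push" else 0)
```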

Binary Counter Increment

The binary counter increment operation serves as a classic example for applying the potential method in amortized analysis, where the goal is to show that the average cost per increment remains constant despite occasional long carry propagations. Consider an n-bit binary counter initialized to 0. Each increment operation adds 1 to the counter's value, which in the worst case requires flipping up to n bits due to a chain of carries; for instance, incrementing 111...1 (all 1s) to 1000...0 flips all n bits. The actual cost c_i of the i-th increment is the number of bit flips, which is \Theta(1 + t), where t is the number of trailing 1s in the binary representation before the increment. To analyze this using the potential method, define the potential function \Phi(D) as the number of 1 bits in the current binary representation D of the counter. This choice of \Phi captures the "stored work" associated with potential future carry costs, since every 1 bit must eventually be flipped back to 0 by some later carry. The initial potential is \Phi(D_0) = 0 for the all-zero counter. For each increment:
  • If the counter ends in 0 (so t = 0), the operation flips that bit to 1, costing 1 flip, and \Delta \Phi = +1 (one more 1 bit), yielding an amortized cost \hat{c}_i = c_i + \Delta \Phi = 1 + 1 = 2 = O(1).
  • If there are k \geq 1 trailing 1s, the operation flips those k bits to 0 and the next bit (a 0) to 1, costing k + 1 flips, while \Delta \Phi = 1 - k (k one-bits are cleared and one is set), so \hat{c}_i = (k + 1) + (1 - k) = 2 = O(1); the decrease in potential exactly offsets the extra flips.
Over a sequence of m = 2^n increments starting from 0 (traversing all possible n-bit values), the total actual cost equals the sum of amortized costs minus the net change in potential: \sum c_i = \sum \hat{c}_i - (\Phi(D_m) - \Phi(D_0)). Since each \hat{c}_i = 2, \sum \hat{c}_i = O(2^n), and |\Phi(D_m) - \Phi(D_0)| \leq n (as \Phi \leq n), the total number of bit flips is O(2^n), implying an amortized cost of O(1) per increment. A looser but easily verified upper bound on the total flips is O(n \cdot 2^n), obtained by noting that each of the n bits flips at most 2^n times; the potential method tightens this to O(2^n).
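
The analysis can be checked mechanically; the sketch below (illustrative, with the counter stored least significant bit first) asserts that every increment's amortized cost under \Phi = number of 1 bits is exactly 2:

```python
# Illustrative n-bit counter (least significant bit first). The actual
# cost of an increment is the number of bit flips; with the potential
# Phi = number of 1 bits, every increment's amortized cost is exactly 2.

def increment(bits):
    """Add 1 to the counter in place; return the number of bit flips."""
    flips, i = 0, 0
    while i < len(bits) and bits[i] == 1:   # clear the trailing 1s
        bits[i] = 0
        flips += 1
        i += 1
    if i < len(bits):                       # set the first 0 bit
        bits[i] = 1
        flips += 1
    return flips

bits = [0] * 16
prev_phi = 0
for _ in range(1000):                       # stays well below overflow
    cost = increment(bits)
    phi = sum(bits)                         # Phi = number of 1 bits
    assert cost + phi - prev_phi == 2       # amortized cost is exactly 2
    prev_phi = phi
```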

Broader Applications

In Dynamic Data Structures

The potential method plays a key role in analyzing the amortized efficiency of the union-find data structure, which maintains a partition of a set under union and find operations. Potentials are typically defined in terms of node ranks or set sizes to capture the compression costs during finds and the linking costs during unions, especially when using heuristics like union by rank and path compression. This yields an amortized time of nearly O(1) per operation, precisely O(α(n)) where α(n) is the extremely slow-growing inverse Ackermann function, ensuring near-linear total time for m operations on n elements.

In Fibonacci heaps, a priority queue supporting fast decrease-key operations, Fredman and Tarjan employed a potential function based on the number of trees in the root list and the number of marked nodes. This potential bounds the amortized costs of insertions and decrease-keys at O(1), while extract-min remains O(log n), enabling improved algorithms for shortest paths and minimum spanning trees.

Splay trees, self-adjusting binary search trees, rely on the potential method for their amortized analysis, as developed by Sleator and Tarjan. The potential function measures the entropy-like sum of logarithms of subtree sizes, particularly along the access path during splaying rotations. This establishes O(log n) amortized time for searches, insertions, and deletions, reflecting how frequent accesses move nodes closer to the root over time.

A primary benefit of applying the potential method to these dynamic data structures is its ability to amortize the irregular costs of restructuring (path compression in union-find, lazy merging in Fibonacci heaps, rotations in splay trees) across operation sequences that may exhibit adversarial patterns, thereby revealing competitive performance despite varying per-operation overheads.
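
For reference, the following sketch shows the union-by-rank and path-compression heuristics that the O(α(n)) potential-based analysis applies to (an illustrative implementation of the well-known structure; the proof itself is not reproduced here):

```python
# Union-find with union by rank and path compression: the structure
# whose near-O(1) amortized bound is proved with rank-based potentials.
# This code shows the heuristics only, not the potential argument.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:               # union by rank
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

uf = UnionFind(10)
uf.union(1, 2); uf.union(2, 3)
print(uf.find(3) == uf.find(1))   # -> True
```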

Limitations and Extensions

One primary limitation of the potential method lies in the selection of an appropriate potential function Φ, which often requires significant intuition and trial and error, making it ad hoc and frequently the most challenging aspect of amortized analysis. Different choices of Φ can yield varying amortized bounds, and finding one that provides tight results demands deep understanding of the underlying data structure's dynamics. Additionally, the method assumes a non-negative potential function (Φ(D_i) ≥ 0 for all states D_i) to ensure that the total amortized cost upper-bounds the total actual cost, which can restrict its use in contexts where negative potentials might otherwise simplify the analysis. While the potential method excels at establishing upper bounds on amortized costs, it does not inherently provide tight lower bounds, often necessitating complementary techniques such as adversary arguments or separate lower-bound proofs for a complete picture.

The potential method can be extended by integrating it with other amortized techniques, such as the accounting method, to handle scenarios where direct potential assignment is cumbersome; for instance, credits from an accounting-style argument can inform the design of Φ in hybrid analyses. Modern developments since 2000 have broadened the method's scope, including polynomial potentials for finer-grained resource analysis in functional programs, enabling amortized bounds beyond linear costs. In cache-oblivious algorithms, potential-based analysis facilitates I/O-efficient bounds without machine-specific parameters, as seen in cache-oblivious priority queues such as the buffer heap, whose operations are supported within efficient amortized I/O bounds. The "quantum physicist's method" extends the approach using quantum-inspired techniques, such as superposition-like "worldviews" and a tunneling analogy, to enable automatic amortized resource analysis of classical functional programs. For scenarios demanding strict worst-case per-operation guarantees, such as real-time systems, the potential method should be avoided in favor of direct worst-case analysis, as its amortized averages may mask occasional high costs.

References

  1. Lecture 5: Amortization. MIT OpenCourseWare.
  2. Amortized Analysis. Carnegie Mellon University.
  3. Lecture 20: Amortized Analysis. Cornell University, Department of Computer Science.
  4. Tarjan, R. E. Amortized Computational Complexity. SIAM Journal on Algebraic and Discrete Methods.
  5. Sleator, D. D.; Tarjan, R. E. Amortized Efficiency of List Update and Paging Rules.
  6. Amortized Analysis. Princeton University, Department of Computer Science, May 1, 2022.
  7. Erickson, J. Amortized Analysis. Algorithms, lecture notes (Chapter 9).
  8. Tarjan, R. E. Amortized Computational Complexity. Princeton University, Department of Computer Science.
  9. Amortized Analysis. Lecture notes.
  10. Lecture 11: Amortized Analysis. MIT OpenCourseWare.
  11. 130A: Amortized Analysis. UCSB Computer Science, February 17, 2015.
  12. Amortized Analysis. Lecture notes on the potential method and binary counters.
  13. Tarjan, R. E. On the Efficiency of a Good but Not Linear Set Union Algorithm. TR 72-148, Cornell University, November 1972.
  14. Fredman, M. L.; Tarjan, R. E. Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms.
  15. Sleator, D. D.; Tarjan, R. E. Self-Adjusting Binary Search Trees.
  16. Fiebrink, R. Amortized Analysis Explained. IME-USP.
  17. Automated Amortised Resource Analysis for Term Rewrite Systems. November 26, 2018.
  18. Amortized Resource Analysis with Polynomial Potential.
  19. Cache-Oblivious Shortest Paths in Graphs Using Buffer Heap.
  20. Automatic Amortized Resource Analysis with the Quantum Physicist's Method.