
Least frequently used

The Least Frequently Used (LFU) policy is a cache eviction algorithm employed in computer systems to manage limited memory by removing the items accessed the fewest times when capacity is exceeded. The policy tracks the frequency of access for each cached item, prioritizing the retention of more popular data to optimize hit rates in scenarios with skewed access patterns. In operation, LFU maintains a counter for every item upon insertion or access, incrementing the counter with each use to reflect ongoing demand. When eviction is required, the algorithm selects and discards the item(s) with the lowest counter value; in implementations handling ties, secondary mechanisms such as recency (e.g., least recently used ordering) may break them. Efficient realizations, such as those using probabilistic counters with logarithmic scaling and decay factors, achieve near-constant time for insertions, updates, and evictions, addressing traditional LFU's logarithmic overhead.

LFU excels in environments with stable, repetitive access distributions, such as content delivery networks or databases with persistent "hot" items, by retaining frequently requested data longer than recency-based alternatives like LRU. However, it is susceptible to cache pollution, where transiently popular items accumulate high counts and resist eviction despite later irrelevance, and it adapts more slowly to shifting patterns than LRU. Implementation complexity is also higher due to counter management, though modern variants mitigate this.

Notable extensions include the Least Recently/Frequently Used (LRFU) policy, which blends frequency and recency via a tunable weighting parameter to form a spectrum subsuming both pure LFU and pure LRU, improving adaptability across workloads. Hybrid approaches, such as combining LFU with LRU in segmented caches, further enhance performance by leveraging frequency for long-term popularity and recency for short-term trends. These developments underscore LFU's role in advancing cache efficiency for diverse applications, from web proxies to in-memory stores.

Fundamentals

Definition and Purpose

The Least Frequently Used (LFU) cache replacement policy is an eviction algorithm that removes the cached item with the lowest access frequency count, approximating the optimal offline Belady's algorithm by using frequency as a prediction of future reuse under stationary request patterns. The core purpose of LFU in memory management and caching is to improve cache hit rates by retaining high-frequency items that are expected to be accessed repeatedly, making it particularly suitable for workloads with skewed or heavy-tailed access distributions, such as web proxies where popular content dominates requests or databases handling recurring queries on hot data. LFU was introduced in the context of paging research in the late 1960s and early 1970s, with formal evaluation techniques covering frequency-based policies like LFU described by Mattson et al. in 1970. The policy rests on the fundamental assumption that item access frequencies are stable or slowly varying over time, so that frequency metrics serve as reliable proxies for long-term reuse probability. Unlike the recency-focused Least Recently Used (LRU) policy, LFU emphasizes cumulative access counts to address scenarios where item popularity persists beyond immediate temporal locality.

Core Mechanism

In the Least Frequently Used (LFU) replacement policy, each cached item maintains a dedicated counter to track its access frequency, which is incremented by 1 each time the item is accessed or referenced. Upon insertion of a new item—typically triggered by a cache miss—its counter is initialized to 1 to reflect the initial access. This counter-based tracking prioritizes retaining items that demonstrate higher usage rates over time.

The eviction process in standard LFU occurs when the cache reaches its capacity and a new item must be added. In its basic form, the algorithm identifies the item(s) with the minimum counter value by scanning the cache contents and selects one for removal to make space. While auxiliary data structures can optimize this selection for efficiency in larger caches, the core rule remains eviction of the least frequently accessed item.

When multiple items share the same minimum count, a tie-breaking rule is applied to resolve the selection. A common approach integrates recency by evicting the least recently used item among the tied candidates, effectively combining LFU's frequency focus with an LRU-like tiebreaker for fairness and to mitigate issues like cache pollution from infrequent but recent insertions. A basic outline of frequency updates and eviction selection in pseudocode illustrates these rules:
function access(key):
    if key in cache:
        cache[key].frequency += 1
        update_recency(key)  // Optional for tie-breaking
    else:
        if cache is full:
            min_freq = minimum frequency in cache
            candidates = items with frequency == min_freq
            evict_key = least recently used among candidates
            remove(evict_key)
        insert(key, frequency=1)
        update_recency(key)
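For concreteness, the following is a minimal runnable sketch of this basic form in Python (the function name, the per-item [frequency, last_access] layout, and the tick counter are illustrative choices, not a standard API); it performs the same linear scan and LRU tie-breaking as the pseudocode above:

import itertools

_tick = itertools.count()  # monotonically increasing counter, used only for recency

def lfu_access(cache, key, capacity):
    # cache maps key -> [frequency, last_access_tick]
    if key in cache:
        cache[key][0] += 1              # increment frequency on a hit
        cache[key][1] = next(_tick)     # record recency for tie-breaking
        return
    if len(cache) >= capacity:
        min_freq = min(freq for freq, _ in cache.values())
        candidates = [k for k, (freq, _) in cache.items() if freq == min_freq]
        evict_key = min(candidates, key=lambda k: cache[k][1])  # least recently used
        del cache[evict_key]
    cache[key] = [1, next(_tick)]       # insert new item with frequency 1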

Implementation

Required Data Structures

To implement the Least Frequently Used (LFU) cache efficiently, a hash map is employed for constant-time (O(1)) lookup from keys to corresponding item nodes, with each node storing the associated value and metadata such as a frequency counter. Items are grouped by their access frequency using a secondary hash map that associates each frequency value with a doubly linked list (or equivalent structure) containing all items at that level; this enables rapid identification and eviction of the minimum-frequency group in O(1) time. When multiple items share the same minimum frequency, tie-breaking favors recency using least recently used (LRU) ordering within the frequency group, achieved by maintaining the list such that accessed items are moved to the tail (most recent position), with evictions from the head (least recently used position). The overall space overhead is O(c), where c denotes the cache capacity, as each of the c items requires O(1) additional space for its counter, pointers, and optional metadata.
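A compact Python sketch of this layout follows (names such as Node and LFUStructures are illustrative; an OrderedDict stands in for the per-frequency doubly linked list, since it preserves insertion order and supports O(1) removal from either end):

from collections import defaultdict, OrderedDict

class Node:
    __slots__ = ("key", "value", "frequency")
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.frequency = 1                       # new items start at frequency 1

class LFUStructures:
    def __init__(self, capacity):
        self.capacity = capacity
        self.nodes = {}                          # key -> Node (O(1) lookup)
        self.buckets = defaultdict(OrderedDict)  # frequency -> keys in LRU order
        self.min_freq = 0                        # lowest frequency currently present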

Eviction Algorithm

The eviction algorithm in Least Frequently Used (LFU) caching operates by maintaining a count of access frequencies for each item and prioritizing the removal of those with the lowest counts when the cache reaches capacity. Upon a cache hit, the frequency of the accessed item is incremented, and the item is repositioned within the frequency-ordered structure to reflect its updated priority. This ensures that more frequently accessed items are protected from eviction. For a cache miss, a new item is inserted with an initial frequency of 1. If the cache is full, the algorithm identifies and evicts the item with the minimum frequency before adding the new one; in case of ties among minimum-frequency items, a secondary policy such as least recently used (LRU) is commonly applied within the frequency group. These operations leverage data structures like frequency-sorted doubly linked lists and hash maps for efficient tracking, as detailed in efficient implementations. The full eviction cycle can be outlined in pseudocode as follows, assuming a hash map for key-to-node mapping, a min-frequency doubly linked list for quick access to the lowest frequency bucket, and per-frequency doubly linked lists for items:
function access(key):
    if key not in cache:
        return miss  // handled by insert
    node = hash_map[key]
    old_freq = node.frequency
    remove node from old_freq list
    node.frequency += 1
    new_freq = node.frequency
    add node to new_freq list
    if old_freq list is empty:
        remove old_freq from min_freq structure

function insert(key, value):
    if cache size == capacity:
        evict()
    node = new Node(key, value, frequency=1)
    add node to frequency 1 list
    hash_map[key] = node
    if frequency 1 list was empty:
        add frequency 1 to min_freq structure

function evict():
    min_freq_bucket = get_min_freq_bucket()  // O(1) via head of min_freq list
    node_to_evict = min_freq_bucket.head  // least recently used in min bucket
    remove node_to_evict from its frequency list
    delete hash_map[node_to_evict.key]
    free node_to_evict
    if min_freq_bucket is now empty:
        remove min_freq_bucket from min_freq structure
        update min_freq to next lowest if needed
This pseudocode represents an amortized O(1) implementation using doubly linked lists per frequency level and a separate structure to track the minimum frequency, avoiding full scans. In a naive implementation without optimized structures, finding the minimum frequency item requires scanning all entries, resulting in O(n) time complexity for eviction, where n is the cache size.
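A runnable Python version of this amortized O(1) scheme is sketched below (class and field names are illustrative; OrderedDict buckets replace the per-frequency doubly linked lists, and popitem(last=False) removes the least recently used key of the minimum-frequency bucket):

from collections import defaultdict, OrderedDict

class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.values = {}                              # key -> value
        self.freq_of = {}                             # key -> current frequency
        self.buckets = defaultdict(OrderedDict)       # frequency -> keys in LRU order
        self.min_freq = 0                             # lowest frequency present

    def _promote(self, key):
        # Move key from its current frequency bucket to the next higher one.
        freq = self.freq_of[key]
        del self.buckets[freq][key]
        if not self.buckets[freq]:
            del self.buckets[freq]
            if self.min_freq == freq:                 # old minimum bucket emptied
                self.min_freq = freq + 1
        self.freq_of[key] = freq + 1
        self.buckets[freq + 1][key] = None            # most recent position in new bucket

    def get(self, key):
        if key not in self.values:
            return None                               # cache miss
        self._promote(key)
        return self.values[key]

    def put(self, key, value):
        if self.capacity <= 0:
            return
        if key in self.values:                        # update existing entry
            self.values[key] = value
            self._promote(key)
            return
        if len(self.values) >= self.capacity:
            # Evict the least recently used key within the minimum-frequency bucket.
            victim, _ = self.buckets[self.min_freq].popitem(last=False)
            if not self.buckets[self.min_freq]:
                del self.buckets[self.min_freq]
            del self.values[victim]
            del self.freq_of[victim]
        self.values[key] = value
        self.freq_of[key] = 1
        self.buckets[1][key] = None
        self.min_freq = 1                             # a new item always starts at frequency 1

For example, with capacity 2, the sequence put("a", 1), put("b", 2), get("a"), put("c", 3) evicts "b", since "a" has been promoted to a higher frequency and "b" remains in the minimum-frequency bucket.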

Variants and Extensions

Adaptive LFU

Adaptive LFU variants incorporate mechanisms to dynamically adjust access frequencies, enabling the policy to better handle evolving workloads where access patterns shift over time. These adaptations address limitations in standard LFU, such as cache pollution from items that were once popular but are no longer relevant, by introducing temporal decay or recency-based adjustments.

A common adaptation is the aging mechanism, which periodically reduces frequency counts to forget outdated accesses and prevent stale items with inflated counts from persisting indefinitely. In the LFU with Dynamic Aging (LFU-DA) policy, frequency counts are halved whenever the average frequency in the cache exceeds a predefined threshold, effectively scaling all counts down to prioritize recent activity. This process occurs dynamically based on workload characteristics, with an additional parameter capping maximum frequencies to avoid dominance by exceptionally popular items. Similar variants may decrement frequencies by a fixed value, such as 1, at regular intervals, ensuring that even infrequently accessed items gradually lose priority without abrupt resets.

Promotion policies further enhance adaptability by boosting frequencies for recently accessed items, integrating recency to balance long-term frequency with short-term relevance. The Least Recently/Frequently Used (LRFU) policy exemplifies this by computing a combined score for each item as R(i) = \lambda \cdot F(i) + (1 - \lambda) \cdot recency(i), where F(i) is the access frequency, recency(i) measures the time since the last access (often exponentially decayed), and \lambda (between 0 and 1) tunes the emphasis on frequency versus recency; upon access, both components are updated to promote the item. This integration allows LFU to respond more quickly to emerging hot items in dynamic environments. Another prominent adaptive approach is the Adaptive Replacement Cache (ARC), which maintains two lists to dynamically balance recency (LRU-like) and frequency (LFU-like) without fixed parameters, adapting to workload changes by adjusting list sizes based on past evictions.

These adaptations prove particularly beneficial for non-stationary workloads, where access distributions change frequently, as aging and promotion prevent stagnation and improve hit rates over plain LFU on various traces. Parameter choices, such as the decay rate \alpha in exponential aging (updating freq_{new} = freq_{old} \times \alpha per time unit, with 0 < \alpha < 1), allow fine-tuning for workload volatility; a lower \alpha accelerates forgetting for rapid changes, while higher values retain stability in steadier patterns.
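The aging step can be illustrated with a short Python sketch (the threshold, cap, and function name are assumptions for illustration, not the exact LFU-DA formulation):

def age_frequencies(frequencies, avg_threshold=10.0, max_freq=255):
    # frequencies: dict mapping key -> access count, mutated in place.
    if not frequencies:
        return
    average = sum(frequencies.values()) / len(frequencies)
    if average > avg_threshold:
        for key in frequencies:
            frequencies[key] //= 2        # halve counts: forget older history
    for key, count in frequencies.items():
        if count > max_freq:
            frequencies[key] = max_freq   # cap counters so no item dominates forever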

LFU with Windowing

LFU with windowing addresses the high maintenance overhead of pure LFU by approximating frequency counts over a limited observation window of recent accesses, thereby reducing space and computational costs while preserving much of the frequency-based eviction benefit. In a fixed-size window approach, frequencies are tracked only over the last k accesses, where k is a tunable parameter much smaller than the full access history, allowing the policy to approximate long-term popularity without storing exhaustive records. This method, often termed Window LFU (WLFU), slides the window forward with each new access, updating counts for items within it and discarding older data to focus on recent trends.

The sliding window LFU (SW-LFU) variant refines this by maintaining a FIFO queue of recent items as the window, counting hits exclusively within that bounded set to adjust frequencies dynamically. Upon an access or cache miss, the window advances by adding the new item and removing the oldest if full, ensuring only recent interactions influence counts; eviction then selects the cached item with the minimum adjusted frequency. This recency-limited sampling helps mitigate the popularity skew from one-time bursts but relies on the window size to balance responsiveness and stability.

Implementation typically employs a double-ended queue (deque) to manage the FIFO window efficiently, allowing O(1) insertions and deletions at both ends, paired with a hash table to store and update per-item frequency counters within the current window. For eviction, the policy computes an adjusted score as the sum of the window-specific frequency and a historic frequency estimate (often from a compact sketch such as a Count-Min Sketch for scalability), selecting the minimum across candidates to remove. This structure supports amortized O(1) operations per access, making it suitable for high-throughput systems.

Compared to pure LFU, windowing variants require significantly less space—for instance, TinyLFU implementations use about 5 bytes per cache entry versus nearly 100 bytes for exact window tracking—but may introduce inaccuracies in workloads with sudden bursts, where short-term spikes can overshadow stable long-term frequencies. Such approximations have been adopted in production caches like the Caffeine library for Java, where a small probationary window (e.g., 1% of capacity) filters admissions before frequency-based promotion, demonstrating improved hit rates over traditional LFU on diverse traces.
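The window bookkeeping can be sketched in Python as follows (class and method names are illustrative, not the exact SW-LFU formulation; the deque holds the last window_size accesses and a Counter tracks per-item frequencies within it):

from collections import Counter, deque

class WindowFrequency:
    def __init__(self, window_size):
        self.window = deque()           # FIFO of the most recent accesses
        self.window_size = window_size
        self.counts = Counter()         # per-key frequency within the window

    def record(self, key):
        self.window.append(key)
        self.counts[key] += 1
        if len(self.window) > self.window_size:
            old = self.window.popleft()     # oldest access slides out of the window
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]

    def frequency(self, key):
        return self.counts.get(key, 0)      # score used when choosing an eviction victim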

Performance Analysis

Theoretical Guarantees

The Least Frequently Used (LFU) policy approximates Belady's MIN algorithm, the optimal offline eviction strategy that removes the item whose next access occurs farthest in the future, by instead evicting the item with the lowest historical access frequency, which serves as a proxy for low future demand under certain assumptions. This approximation enables online analysis of competitive ratios, defined as the ratio of LFU's cache misses to the minimum achievable by Belady's MIN, though LFU lacks a constant-factor guarantee in the general paging model, unlike LRU's k-competitive bound, where k is the cache size.

Under the Independent Reference Model (IRM), where accesses to items are independent and follow fixed probabilities p_i, LFU minimizes the expected number of misses by retaining the c items with the highest probabilities, achieving the optimal steady-state hit probability of P_{\text{hit}} = \sum_{i=1}^{c} p_i, where the p_i are sorted in decreasing order. Here, observed frequencies serve as maximum-likelihood estimates of the p_i, so LFU's eviction aligns with selecting the lowest-probability candidate. Stack distance (the number of distinct items accessed between two references to the same item) and reuse distance (similar, but counting all intervening accesses including repeats) metrics further support this: under IRM, LFU effectively minimizes the expected reuse distance across the cache by prioritizing high-frequency items, whose geometric inter-arrival times yield shorter average distances than those of low-frequency ones.

In worst-case scenarios, LFU's competitive ratio is unbounded, as demonstrated by request sequences with many unique low-frequency items that pollute the cache, forcing repeated evictions of high-demand items; however, in models assuming bounded frequencies or access patterns, variants achieve an O(\log n) ratio relative to Belady's MIN, where n is the number of items. Unlike LRU, which offers a constant-factor guarantee but performs poorly on frequency-skewed workloads, LFU has no such universal bound but excels in probabilistic settings.

A proof sketch for LFU's near-optimality under stationary IRM proceeds as follows: assume item probabilities p_1 \geq p_2 \geq \cdots \geq p_m > 0 with \sum p_i = 1, and cache size c < m. In steady state, the probability that an access hits the cache is the sum of the probabilities of the retained items. LFU, by evicting the lowest-frequency item (an estimate of the lowest p_i), converges to retaining the top-c probability items, matching Belady's MIN under IRM since future accesses are identically distributed to past ones. For Zipf distributions, common in practice, where p_i \propto 1/i^\alpha for \alpha > 0, this yields near-optimal performance, with LFU's hit rate approaching the optimum as frequencies stabilize, often within 5-10% of optimal on such traces.
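As a small numerical illustration of the IRM result above, the following Python snippet (parameter values are arbitrary examples) computes P_hit = \sum_{i=1}^{c} p_i for a Zipf popularity distribution, the steady-state hit probability that an exact-count LFU converges to:

def zipf_probabilities(num_items, alpha):
    weights = [1.0 / (i ** alpha) for i in range(1, num_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]       # already sorted in decreasing order

def irm_optimal_hit_rate(num_items, cache_size, alpha):
    p = zipf_probabilities(num_items, alpha)
    return sum(p[:cache_size])                # P_hit = sum of the top-c probabilities

# e.g. 10,000 items, a cache holding 1% of them, alpha = 0.9
print(round(irm_optimal_hit_rate(10_000, 100, 0.9), 3))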

Empirical Evaluations

Empirical evaluations of the Least Frequently Used (LFU) eviction policy have demonstrated its effectiveness in workloads exhibiting strong frequency-based locality, though it often incurs higher computational overhead compared to simpler policies like Least Recently Used (LRU). Early studies using 1990s Digital Equipment Corporation (DEC) traces showed LFU achieving slightly higher hit ratios than LRU in scenarios with moderate skew, such as the DEC trace where LFU reached a 36.0% hit ratio versus LRU's 35.4% at a 1 GB cache size. In more skewed access patterns modeled by Zipf distributions, LFU has been found to outperform LRU by 10-20% in absolute hit rate terms, particularly when LRU hit rates range from 10-50%, as LFU better captures long-term popularity in independent reference model (IRM) simulations.

Key metrics in these evaluations include cache hit ratio, miss rate, and eviction overhead. For instance, in trace-driven simulations on database and storage workloads, LFU exhibited lower miss rates than LRU in frequency-dominant scenarios but higher eviction costs due to maintaining and updating counters, often requiring O(log n) operations per access in naive implementations. The Adaptive Replacement Cache (ARC) study on workstation disk traces (e.g., P4, P8, P12) and OLTP workloads revealed LFU's hit ratios lagging behind adaptive policies, with LFU at 56.22% for a 15,000-entry cache on OLTP versus ARC's 65.40%, highlighting LFU's vulnerability to evolving access patterns where frequency alone fails to adapt. Eviction overhead for LFU is typically 2-5 times higher than LRU in these benchmarks due to counter maintenance, though optimized variants mitigate this to near O(1) amortized time.

LFU excels in read-heavy workloads following power-law distributions, such as web caches and content delivery networks with Zipf-like access skew (α ≈ 0.8-1.1), where it sustains 15-20% higher hit ratios than LRU by prioritizing persistently popular items. However, it degrades in sequential scan-heavy traces, such as SPC1-like benchmarks, where LFU's static frequency focus leads to pollution from one-time accesses, resulting in miss rates up to 44% higher than LRU or adaptive policies.

Post-2010 evaluations in cloud caching environments, including approximations like TinyLFU integrated with LRU, confirm LFU's advantages in dynamic, skewed workloads such as YouTube video requests and Wikipedia queries. On real traces from these domains, TinyLFU-augmented policies achieved 5-15% hit ratio improvements over pure LRU, with windowed variants matching or exceeding state-of-the-art policies in hit rates of up to 50% at 20x cache sampling. Recent studies as of 2025, such as hybrid LRU-LFU strategies, show continued improvements in cache efficiency for modern applications, outperforming pure LFU by 10-20% in thermal-aware and ML-optimized scenarios. These results underscore LFU's edge in Redis-like setups for frequency-skewed applications, though hybrid approaches are often needed to balance overhead and adaptability.
Workload Type | Example Trace | LFU Hit Ratio Improvement over LRU | Source
Skewed (Zipf) | IRM simulations | 10-20% absolute | Hasslinger et al. (2017)
File System | DEC traces | ~0.6% (at 1 GB) | Cao and Irani (2000)
Scan-Heavy | SPC1-like | Significantly worse (higher miss rates) | Megiddo & Modha (2003)
Cloud (Video/Search) | YouTube/Wikipedia | 5-15% | Einziger et al. (2017)

Applications and Comparisons

Real-World Usage

In software systems, the Least Frequently Used (LFU) policy is deployed for efficient caching in high-traffic environments. The Varnish HTTP accelerator employs LFU within its Massive Storage Engine (MSE) to handle terabyte-scale content, where a segmented LFU variant replaces the basic implementation to boost hit rates by prioritizing frequently accessed objects in disk-backed storage. Similarly, distributed key-value stores inspired by Memcached, such as Redis (since version 4.0 in 2017), integrate approximated LFU eviction to retain high-frequency items, enhancing performance in read-heavy workloads. In Java applications, the Caffeine library adopts Window TinyLFU, an optimized LFU approximation, for in-memory caching since its initial release, enabling near-optimal hit rates with low overhead through frequency sketching.

Hardware designs also incorporate LFU elements. Research on Translation Lookaside Buffers (TLBs) has proposed frequency-based policies like Random-Least-Frequently-Used (RLFU) to select eviction victims, particularly for workloads with large instruction footprints, reducing miss rates by favoring recurrent translations. Operating system kernels, such as historical versions of Linux (e.g., kernel 2.0), incorporated LFU-like page replacement to approximate evicting the least-used pages. For database systems, PostgreSQL's buffer manager can benefit from LFU as an alternative eviction policy in extensions or custom setups, though studies often favor adaptive policies like ARC over both LFU and LRU for query caches handling skewed access patterns.

Configuration of LFU in production involves tuning parameters to suit workload dynamics. A maximum frequency cap limits counter overflow in high-access scenarios, preventing skewed eviction, while the window size in variants like Window TinyLFU—typically set to 1% of total cache capacity—balances recency and frequency; increasing it to 20-40% adapts better to bursty traffic, as seen in OLTP traces. In content delivery networks (CDNs), LFU configurations have yielded measurable gains; a study on streaming media CDNs found that LFU-SIZE, a size-aware LFU variant, improved hit ratios over traditional LFU and LRU, reducing bandwidth usage and response times in resource-constrained setups.

Despite these advantages, LFU faces challenges in production due to computational overhead from maintaining access counters, which can degrade performance in highly dynamic environments with rapidly shifting patterns. This often leads to hybrid deployments, such as combining LFU frequency estimation with LRU recency in policies like Adaptive Replacement Cache (ARC) or Window TinyLFU, to mitigate overhead while preserving strong hit rates in systems like Redis and Caffeine. As of 2025, LFU approximations continue to be used in cloud services such as AWS ElastiCache for managed Redis instances.
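As a concrete illustration of such tuning, the snippet below uses redis-py to enable Redis's approximated LFU eviction and adjust its counter parameters (host, memory limit, and parameter values are placeholders, not recommendations):

import redis

r = redis.Redis(host="localhost", port=6379)

r.config_set("maxmemory", "256mb")                # bound memory so eviction can trigger
r.config_set("maxmemory-policy", "allkeys-lfu")   # evict by approximated access frequency
r.config_set("lfu-log-factor", 10)                # controls how slowly the probabilistic counter saturates
r.config_set("lfu-decay-time", 1)                 # minutes of inactivity before counters decay (aging)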

Comparison to Other Policies

The Least Frequently Used (LFU) policy contrasts with the Least Recently Used (LRU) policy primarily in its emphasis on access frequency over recency. LFU excels in workloads exhibiting frequency skew, such as those following a Zipf distribution where a small portion of items (e.g., 20% of objects accounting for 80% of accesses) dominate requests, as it prioritizes retaining high-frequency items regardless of their recent usage. In contrast, LRU performs better in scenarios with recency bursts, where temporal locality drives sudden spikes in access to recently introduced items, as LFU may retain outdated popular items during shifts. For instance, in stable frequency-based patterns like content recommendation systems, LFU achieves higher hit ratios than LRU, but it can degrade by up to 40% relative to LRU during workload transitions due to poor adaptability.
Workload Type | Cache Size | LFU Hit Ratio | LRU Hit Ratio | Source
Zipf 0.9 (synthetic, frequency-skewed) | 1,000 items | ~45% (approximated via TinyLFU) | ~35% | TinyLFU paper
Wikipedia trace (real, mixed) | 1,000 items | ~60% (via a TinyLFU variant) | ~50% | TinyLFU paper
OLTP trace (real, recency-heavy) | 1,000 pages | 27.98% | 32.83% | ARC paper
Compared to the First-In First-Out (FIFO) policy, LFU avoids evicting frequently accessed items that enter the cache early and remain popular, particularly in looping or cyclic workloads where FIFO rigidly removes the oldest entry regardless of usage patterns. This leads to substantial improvements in hit rates for frequency-dominated scenarios; for example, LFU variants can outperform FIFO by over 30% in simulations involving repeated access loops over a fixed set of objects, as FIFO repeatedly discards viable candidates stuck at the head of the queue. FIFO, with its low overhead, suits streaming or uniform access patterns but underperforms in any environment with persistent hot items.

As an online approximation to Belady's optimal offline policy (which evicts the item with the furthest next access), LFU provides competitive performance under the independent reference model by targeting low-frequency items, but it incurs linear regret in dynamic settings due to its inability to foresee future accesses. Theoretical analyses show LFU's competitive ratio approaches optimality for stationary distributions, with regret bounds scaling with cache size and request horizon, though it remains sub-optimal compared to clairvoyant strategies in adversarial traces.

Hybrid approaches, such as combining LFU with LRU elements (e.g., Window-LFU), are recommended for workloads with static popularity distributions, where LFU's frequency focus maximizes long-term hits, whereas pure LRU or adaptive policies like ARC suit highly dynamic environments with frequent shifts in access patterns. Selection depends on workload characteristics: LFU for predictable skew, others for volatility.

References

  1. An Improved Cache Eviction Strategy: Combining Least Recently Used and Least Frequently Used Policies
  2. LFU vs. LRU: How to choose the right cache eviction policy – Redis (Jul 23, 2025)
  3. An O(1) algorithm for implementing the LFU cache eviction scheme (Oct 22, 2021)
  4. LRFU: a spectrum of policies that subsumes the least recently used and least frequently used policies – IEEE Transactions on Computers, Vol. 50
  5. Optimized Caching Strategy: A Hybrid of Least Recently Used and ... (Jun 26, 2025)
  6. Cache Replacement Policy (PDF) – Washington
  7. TinyLFU: A Highly Efficient Cache Admission Policy (PDF) – arXiv (Dec 3, 2015)
  8. LRFU: a spectrum of policies that subsumes the least recently used and least frequently used policies (PDF)
  9. An O(1) algorithm for implementing the LFU cache eviction scheme (PDF) (Oct 22, 2021)
  10. Learning Cache Replacement with Cacheus (PDF) – USENIX (Feb 25, 2021)
  11. Evaluating content management techniques for Web proxy caches – Martin Arlitt, Hewlett-Packard Laboratories
  12. [12]
  13. LIRS: an efficient low inter-reference recency set replacement policy ...
  14. sw-LFU (Sliding Window LFU) – https://www.usenix.org/events/usits99/full_papers/bahn/bahn.pdf
  15. Caching (PDF) – UCSB Computer Science (Jan 30, 2020)
  16. Maximizing Cache Performance Under Uncertainty (PDF)
  17. Outperforming LRU with an adaptive replacement cache algorithm (PDF)
  18. Popularity-Aware GreedyDual-Size Web Proxy Caching Algorithms (PDF)
  19. Performance Evaluation for New Web Caching Strategies ... (PDF)
  20. Introducing Varnish Massive Storage Engine (Nov 19, 2014)
  21. ben-manes/caffeine: A high performance caching library for Java
  22. Morrigan: A Composite Instruction TLB Prefetcher (PDF) (Oct 18, 2021)
  23. Page replacement in Linux 2.4 memory management – USENIX
  24. Selection of Cache Replacement Algorithm for PostgreSQL (Apr 7, 2023)
  25. A Study on Cache Strategy of CDN Stream Media – Semantic Scholar
  26. Comparative Analysis of Distributed Caching Algorithms – arXiv (Apr 3, 2025)
  27. Comparative Analysis of Distributed Caching Algorithms (PDF) – arXiv (Apr 3, 2025)
  28. Cache Algorithms: FIFO vs. LRU vs. LFU – A Comprehensive Guide
  29. Optimum caching versus LRU and LFU: Comparison and combined ...
  30. LeadCache: Regret-Optimal Caching in Networks (PDF) (Oct 12, 2021)
  31. LeadCache: Regret-Optimal Caching in Networks (PDF) – arXiv (Oct 26, 2021)