
CoDel

CoDel, or Controlled Delay, is an active queue management (AQM) algorithm designed to combat bufferbloat in computer networks by directly controlling the delay packets experience in queues. Developed by Kathleen Nichols of Pollere Inc. and Van Jacobson of PARC, it was first introduced in a 2012 publication and later standardized by the IETF in RFC 8289 as an experimental protocol in 2018. Unlike traditional queue management methods that rely on buffer occupancy, CoDel focuses on the actual sojourn time of packets—the duration from enqueue to dequeue—to distinguish between beneficial transient queues and harmful persistent ones that cause excessive latency. Bufferbloat, the primary problem CoDel addresses, arises from over-buffered networks where large queues lead to high latency and poor responsiveness, particularly affecting applications like VoIP and gaming, even under low load conditions. CoDel's core mechanism involves tracking the minimum sojourn time of packets over a sliding window of 100 milliseconds (the default interval parameter); if this minimum exceeds a target delay of 5 milliseconds for at least that interval, it begins dropping packets to signal congestion. Drops are spaced using a control law based on the square root of the number of drops, which increases drop frequency as needed while avoiding unnecessary throughput loss, and the algorithm refrains from dropping if the buffer holds fewer than one maximum transmission unit (MTU) of data to prevent starvation. This approach makes CoDel effectively parameterless for typical use, adapting automatically to varying link speeds and traffic patterns without manual tuning. Key innovations in CoDel include its use of the local minimum queue delay as the congestion signal, which is robust to bursty traffic and independent of round-trip time (RTT) variations, and its efficiency: it keeps only a small amount of per-queue state and operates at dequeue time without locks or complex averaging. These features enable low computational overhead, making it suitable for deployment in software routers, embedded devices, and even hardware implementations.
Evaluations show CoDel maintaining median delays around 5 milliseconds while achieving near-full link utilization, outperforming drop-tail in reducing latency without sacrificing throughput. CoDel has seen widespread adoption since its inception: it has been integrated into the Linux kernel's traffic control subsystem (tc) as the codel qdisc since 2012, and is often paired with fair queuing in the fq_codel variant for better per-flow fairness. It is the default AQM in distributions such as OpenWrt and has influenced related algorithms, such as PIE (Proportional Integral controller Enhanced), while continuing to be studied and adapted for emerging networks including 5G, software-defined networking (SDN), and data centers as of 2025.

Background and Motivation

Bufferbloat

Bufferbloat refers to the phenomenon where excess buffers in network devices, such as routers and switches, lead to excessively high latency and jitter, degrading overall network responsiveness. This occurs because packets accumulate in queues during periods of congestion, delaying their transmission and violating the timeliness assumptions underlying protocols like TCP, which rely on prompt feedback from packet loss to adjust sending rates. The primary cause of bufferbloat is the deployment of overly large, fixed-size buffers designed to absorb bursty traffic and prevent packet drops, a practice rooted in earlier networking assumptions that more buffering equates to better performance under load. These buffers, often found in consumer-grade equipment like home routers, cable modems, and even operating system transmit queues, can hold hundreds of milliseconds or more of data, turning minor congestion events into prolonged delays. As high-speed broadband became widespread, these oversized buffers—intended to maximize throughput—exposed the issue by allowing queues to grow unchecked without effective management mechanisms. The impacts of bufferbloat are particularly severe for latency-sensitive applications, such as VoIP calls and online gaming, where even brief spikes in delay (e.g., 1-2 seconds or more) can cause stuttering, audio dropouts, or unresponsive interactions. Bulk data transfers, like file downloads, also suffer due to delayed feedback in the congestion control loop, prolonging overall transfer times despite high instantaneous throughput. Real-world scenarios amplify the problem: in home networks shared among multiple devices, a single high-bandwidth stream can bloat queues and stall everything else; similarly, ISP bottlenecks during peak hours exacerbate delays across entire neighborhoods. Bufferbloat emerged as a prominent concern in the early 2010s, coinciding with the proliferation of high-speed broadband that highlighted the limitations of unmanaged buffering in modern environments.
This issue gained prominence through the bufferbloat project, started by Jim Gettys and others in 2010, which demonstrated the problem via real-world measurements and spurred community efforts to address it. It also prompted the development of active queue management (AQM) techniques as a broad category of solutions to detect and mitigate excess delay proactively.

Traditional Queue Management Challenges

Traditional queue management in packet-switched networks has long relied on simple mechanisms like drop-tail queuing combined with First-In-First-Out (FIFO) scheduling. In drop-tail queuing, incoming packets are accepted until the buffer reaches capacity, at which point subsequent packets are discarded from the tail of the queue. This approach leads to persistently full buffers during congestion, creating "standing queues" that impose excessive delays on all traffic without improving throughput, as the buffer size often exceeds the bandwidth-delay product of the link. Moreover, drop-tail exacerbates global synchronization among flows: when the buffer overflows, multiple flows experience simultaneous packet losses, prompting them to reduce their congestion windows in unison, which results in inefficient underutilization of the link followed by synchronized retransmissions. FIFO scheduling, the default discipline in these queues, treats all packets equally regardless of their flow, allowing a small number of aggressive or high-bandwidth flows to dominate the buffer space and monopolize available bandwidth. This lack of fairness inflates latency for delay-sensitive and smaller flows, as even brief bursts from dominant flows can fill the queue, delaying packets from other sources for extended periods. Such unfairness stems from FIFO's inability to isolate flows, leading to scenarios where non-responsive or misbehaving applications capture disproportionate shares of network resources, further amplifying delays across the system. Early attempts at active queue management (AQM), such as Random Early Detection (RED) introduced in 1993, aimed to mitigate these issues by probabilistically dropping packets before the buffer fills, based on the average queue length estimated via an exponentially weighted moving average. RED sought to prevent global synchronization by randomizing drops and to maintain lower average queue sizes, allowing bursts while signaling congestion to endpoints.
However, RED's performance proved highly sensitive to its parameters, including the minimum and maximum thresholds (min_th and max_th), the maximum drop probability (max_p), and the averaging weight (w_q), often resulting in under-dropping (persistent queues) or over-dropping (low utilization) depending on traffic conditions. Despite recommendations for widespread adoption in the late 1990s and implementations in some routers, RED saw limited practical success due to the difficulty of tuning these parameters across diverse link speeds and traffic mixes, with little robust guidance for configuration. These challenges in early AQM contributed to the persistence of bufferbloat, where unmanaged queues amplify latency in modern networks.
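RED's averaging and drop decision can be sketched in a few lines. The threshold and weight values below are illustrative placeholders in the spirit of the original paper, not recommended settings:

```python
def red_drop_probability(avg_q, min_th=5, max_th=15, max_p=0.1):
    """RED's linear drop probability between min_th and max_th (in packets)."""
    if avg_q < min_th:
        return 0.0      # below min_th: never drop
    if avg_q >= max_th:
        return 1.0      # above max_th: drop everything
    return max_p * (avg_q - min_th) / (max_th - min_th)

def update_avg(avg_q, instantaneous_q, w_q=0.002):
    """Exponentially weighted moving average of queue length."""
    return (1 - w_q) * avg_q + w_q * instantaneous_q

# Halfway between the thresholds the drop probability is max_p / 2:
p = red_drop_probability(10)  # 0.05
```

The sensitivity problem is visible even in this sketch: all four constants must be matched to the link's speed and traffic mix, or the probability curve drops too little or too much.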

Theoretical Foundations

Good and Bad Queues

In packet-switched networks, queues serve as buffers to manage temporary mismatches between packet arrival and service rates, but their behavior determines whether they enhance or degrade performance. Good queues are short-term buffers that absorb natural bursts in traffic, typically lasting 10-100 milliseconds, without imposing significant additional delay on packets. These queues allow links to maintain high utilization by smoothing out statistical variations in arrival rates, such as those caused by bursty sources, while ensuring that packets drain within roughly one round-trip time (RTT). For instance, a queue holding 20 packets at full utilization dissipates quickly, acting as a shock absorber without compromising responsiveness. In contrast, bad queues are persistent and prolonged, often exceeding 100 milliseconds, signaling underlying congestion rather than transient fluctuations. These queues arise when the arrival rate consistently outpaces the service rate over multiple RTTs, leading to standing delays that do not improve throughput but instead cause bufferbloat—the extreme manifestation where excessive buffering inflates latency across the network. A classic example occurs when large, sustained "elephant" flows dominate the buffer, unfairly delaying smaller, latency-sensitive "mouse" flows, such as web requests or VoIP packets, thereby harming overall responsiveness. Theoretically, queues function as key indicators of load, where short queues reflect healthy operation and long ones reveal inefficiencies in congestion control, such as TCP's window adjustments. An ideal target for standing queue delay is around 5-10 milliseconds—roughly 5-10% of a typical RTT of 100 milliseconds—to balance low latency with sufficient buffering for bursts, ensuring responsiveness for interactive applications. However, large buffers create a lock-in effect by masking true congestion signals from endpoints, preventing timely rate reductions and perpetuating high delays as senders misinterpret buffered packets as successful transmission. This dynamic underscores the need for queue management that distinguishes arrival-service imbalances to avoid hidden overloads.
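The 20-packet burst example can be made concrete with a small calculation; the 1500-byte packet size and 10 Mbps link rate here are illustrative assumptions:

```python
def queue_drain_time_ms(packets, packet_bytes, link_bps):
    """Time for a queue of `packets` packets to drain at the link's service rate."""
    return packets * packet_bytes * 8 / link_bps * 1000

# A 20-packet burst of 1500-byte packets on a 10 Mbps link:
drain = queue_drain_time_ms(20, 1500, 10_000_000)  # 24.0 ms
```

A 24 ms burst drains well within a 100 ms RTT, so it is a good queue; the same 24 ms as a *standing* delay would far exceed a 5 ms target and count as a bad queue.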

Control Principles in CoDel

CoDel represents a shift in queue management by using sojourn time—the time a packet spends in the queue—as the primary congestion signal for maintaining low delay, rather than traditional metrics like queue length in bytes or packets. This approach directly targets the delay experienced by packets, enabling the algorithm to distinguish between transient bursts and persistent congestion without relying on absolute queue-length thresholds. By focusing on sojourn time, CoDel addresses the challenge of "bad queues" described above, where excessive buffering leads to high latency variance. At its core, CoDel functions as a proportional-integral-like controller that minimizes delay variance without requiring explicit feedback configuration or complex tuning. It tracks the local minimum sojourn time observed over a recent interval, using this as a signal to detect when delays exceed acceptable levels, and responds by increasing the rate of packet drops. This control strategy adjusts drop frequency dynamically, ensuring the queue remains responsive to traffic variations while keeping average delays low. Central to CoDel's operation are two key parameters that serve as setpoints for its logic: a target delay of 5 milliseconds, which represents an optimal balance for typical round-trip times of around 100 milliseconds, and an interval of 100 milliseconds, which paces drops to prevent synchronization across flows and accommodates a wide range of round-trip times from 10 milliseconds to 1 second. The target delay acts as the desired setpoint for sojourn times, triggering action when exceeded persistently, while the interval defines the observation window for the minimum delay and the initial spacing between drops. Unlike earlier algorithms such as RED, which depend on queue length thresholds that require manual configuration and perform poorly across varying link speeds and buffer sizes, CoDel is inherently insensitive to these factors due to its delay-based feedback.
Similarly, while PIE also uses latency-based control with a proportional-integral mechanism to target an average queue delay (default 15 milliseconds), CoDel's per-packet sojourn time measurement provides finer-grained adaptation without needing periodic queue delay sampling or explicit drop probability updates, enabling automatic responsiveness to diverse loads. Both CoDel and PIE achieve robustness to link rates and buffer provisioning by prioritizing delay over queue length, but CoDel's design avoids the parameter sensitivities seen in RED and offers simpler deployment for high-variability environments.

Algorithm Description

Core Mechanics

CoDel operates through a two-phase mechanism designed to detect and mitigate excessive queue delays without relying on explicit congestion notifications or complex parameter tuning. In the initial monitoring phase, the algorithm passively observes packet sojourn times—the duration each packet spends in the queue from enqueue to dequeue—without dropping any packets. This phase allows CoDel to establish a baseline for queue behavior under normal conditions. Upon entering the dropping phase, CoDel actively discards packets to reduce latency, transitioning back to monitoring once delays subside. This phased approach ensures responsiveness to bufferbloat while accommodating temporary traffic bursts. Sojourn time is calculated for each dequeued packet by subtracting its enqueue timestamp from the current dequeue time, providing a direct measure of the delay experienced by that individual packet. CoDel maintains a running minimum of these sojourn times over a sliding window defined by the interval. The minimum is used rather than an average to robustly detect persistent high delay, as it ignores outliers from bursty flows and focuses on the lowest observed delay in the recent period. If this minimum sojourn time exceeds the target delay and remains so for the full duration of the interval (indicating sustained congestion), CoDel triggers the transition to the dropping phase. During queue idleness—when no packets are present after a dequeue—the algorithm resets its control state, clearing the first_above_time tracker and dropping flag to prevent erroneous reactivation upon the next arrival. In the dropping phase, CoDel drops packets at dequeue time (head drop) to signal congestion, starting with the first drop immediately upon entering the state if the queue exceeds a minimum threshold (typically one maximum transmission unit). Subsequent drops are scheduled deterministically using a control law that increases drop frequency over time.
The time until the next drop is set to the base interval divided by the square root of the drop count, where count begins at 1 and increments with each drop:

    drop_next = t + interval / sqrt(count)

This square-root scaling yields an initial drop spacing equal to the full base interval, followed by progressively shorter spacings (e.g., interval / √2 ≈ 0.707 × interval for the second drop), accelerating the response as congestion persists without overwhelming the link. Drops continue until the minimum sojourn time falls below the target or the queue empties, at which point CoDel exits the dropping phase and resets the count. This logic applies only when the queue holds sufficient data (above one MTU), avoiding unnecessary drops during low-load periods.
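The two-phase logic and square-root control law can be sketched in Python. This is a simplified illustration of the published pseudocode, not a production implementation: timer handling and several refinements (such as count hysteresis when re-entering the dropping state) are omitted:

```python
import math
from collections import deque
from time import monotonic

TARGET = 0.005     # 5 ms acceptable standing delay
INTERVAL = 0.100   # 100 ms observation window / initial drop spacing
MTU = 1500         # never drop when less than one MTU is queued

class CoDelQueue:
    """Simplified sketch of CoDel's dequeue-side state machine."""

    def __init__(self):
        self.q = deque()            # (enqueue_time, packet) pairs
        self.first_above_time = 0.0 # when sojourn first stayed above TARGET
        self.drop_next = 0.0        # time of the next scheduled drop
        self.count = 0              # drops since entering the dropping state
        self.dropping = False

    def enqueue(self, packet):
        self.q.append((monotonic(), packet))

    def _control_law(self, t):
        # Next drop comes sooner as the drop count grows.
        return t + INTERVAL / math.sqrt(self.count)

    def dequeue(self):
        now = monotonic()
        while self.q:
            enq_time, packet = self.q.popleft()
            sojourn = now - enq_time
            bytes_queued = sum(len(p) for _, p in self.q)
            if sojourn < TARGET or bytes_queued < MTU:
                # Delay acceptable (or queue too small to drop from):
                # leave the dropping state and deliver the packet.
                self.first_above_time = 0.0
                self.dropping = False
                return packet
            if not self.dropping:
                if self.first_above_time == 0.0:
                    # Start the clock: only act if this persists an INTERVAL.
                    self.first_above_time = now + INTERVAL
                    return packet
                if now < self.first_above_time:
                    return packet
                # Delay has persisted a full interval: start dropping.
                self.dropping = True
                self.count = 1
                self.drop_next = self._control_law(now)
                continue  # drop this packet, try the next one
            if now >= self.drop_next:
                self.count += 1
                self.drop_next = self._control_law(now)
                continue  # scheduled drop; keep dequeuing
            return packet
        # Queue went idle: reset control state.
        self.first_above_time = 0.0
        self.dropping = False
        return None
```

The key property visible here is that all decisions are made from timestamps at dequeue time, with no averaging and no queue-length thresholds.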

Parameters and Tuning

CoDel features two primary configurable parameters: the target delay and the interval, designed to minimize manual tuning while adapting to varying network conditions. The target delay represents the maximum acceptable persistent queue delay, beyond which CoDel begins dropping packets to control latency. Its default value is 5 milliseconds, selected to balance high link utilization (typically above 95%) with low delay for typical Internet round-trip times, as this threshold allows queues to absorb bursts without excessive buildup. For environments with interactive applications, such as VoIP or gaming, lowering the target to 3-5 milliseconds can further reduce perceived delay, though values below 5 milliseconds risk underutilization on slower links by triggering premature drops. The interval parameter defines the time window over which CoDel measures the minimum queue delay and spaces subsequent packet drops, ensuring that short-term bursts do not trigger unnecessary action while persistent congestion is still addressed. The default is 100 milliseconds, calibrated for round-trip times (RTTs) ranging from 10 milliseconds to 1 second on terrestrial Internet paths, as it aligns with the timescale of TCP's congestion response and prevents the delay estimator from becoming stale during idle periods. This setting effectively limits drop frequency to avoid bursty behavior, with tuning recommended to match the expected maximum RTT—for instance, for data-center applications with RTTs of 100 μs or less, the interval should be set to 5-10 ms and the target to 100-500 μs to respond faster to microbursts. Unlike earlier algorithms such as RED, CoDel has no queue-length thresholds to configure, relying instead solely on sojourn time measurements to decouple drop decisions from buffer size variations across links. This "no-knobs" approach simplifies deployment, as operators need not estimate or configure thresholds based on buffer depths, which often led to misconfiguration with RED.
For asymmetric links, such as those in cable or DSL connections with disparate downstream and upstream speeds (e.g., 100 Mbps down / 10 Mbps up), tuning involves applying CoDel independently to the upstream and downstream queues, setting the target and interval per direction to account for differing burst tolerances and RTT asymmetries without altering the core parameters. In advanced configurations, CoDel's control law—where drop frequency escalates with the square root of the number of drops since entering the dropping state—can be varied for specialized scenarios, such as linear increases for smoother control on high-speed links, though the default square-root law remains robust across bandwidths. Over-tuning, such as reducing the interval to 10 milliseconds, can lead to overdropping and link underutilization (e.g., dropping below 80% of capacity on 10 Mbps links), as the algorithm reacts too aggressively to transient bursts; similarly, setting the target below 2 milliseconds on variable-rate links may starve bursts, emphasizing the need to adhere to the defaults unless environment-specific testing justifies changes.
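The effect of shrinking the interval can be checked directly from the control law: the spacing between successive drops is interval / sqrt(count), so an aggressive interval compresses the entire drop schedule proportionally. A small computation illustrates this:

```python
import math

def drop_spacings(interval_s, n_drops):
    """Gaps between successive CoDel drops under the square-root control law."""
    return [interval_s / math.sqrt(count) for count in range(1, n_drops + 1)]

# Default 100 ms interval: gaps shrink gradually as congestion persists.
default = drop_spacings(0.100, 4)     # [0.100, ~0.0707, ~0.0577, 0.050]

# An aggressive 10 ms interval compresses every gap tenfold, which is
# how over-tuning produces overdropping on merely transient bursts.
aggressive = drop_spacings(0.010, 4)  # [0.010, ~0.0071, ~0.0058, 0.005]
```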

Performance Evaluation

Simulation Studies

Early simulations of CoDel, as detailed in the 2012 IETF draft and accompanying analyses, demonstrated substantial latency reductions in bufferbloat scenarios compared to traditional drop-tail queuing. In ns-2-based experiments with dynamic link capacities ranging from 64 Kbps to 100 Mbps and mixed traffic including flows with varying RTTs (10-500 ms), drop-tail queues often built up to produce delays exceeding 100 ms and up to 10 seconds under congestion, while CoDel maintained median delays around 2.7 ms and 75th-percentile delays at 5 ms, achieving 10-100x lower latency without sacrificing utilization. These tests highlighted CoDel's ability to adapt to bandwidth changes and traffic bursts, preventing the synchronization artifacts common with drop-tail, where flows globally synchronize after losses. Further evaluations using ns-3 confirmed CoDel's stability under varying loads and showed improved fairness and throughput compared to drop-tail or RED in scenarios with multiple flows. Comparisons to RED showed CoDel to be more stable, as RED's average queue-length-based dropping often led to oscillatory behavior, whereas CoDel's sojourn-time metric provided responsive yet non-punitive control. Despite these promising results, simulation studies have limitations, as they typically assume idealized conditions such as uniform packet sizes, error-free links, and controlled topologies, which introduce less variability than real-world networks with heterogeneous hardware, intermittent errors, and diverse application behaviors. These controlled environments validate CoDel's core mechanics but may overestimate performance in highly dynamic or asymmetric settings.

Real-World Deployments and Benchmarks

CoDel has demonstrated substantial latency reductions in real-world home network deployments, particularly through A/B comparisons conducted on bufferbloat.net. In tests using Comcast's "Blast" 50/10 Mbps service with a WNDR3800 router running CeroWRT, baseline buffer-induced latency under full load reached approximately 350 ms due to excessive buffering at the cable modem. Implementing CoDel with ingress and egress rate limiting dropped this latency to around 5 ms while maintaining full throughput of 58 Mbps downlink and 10 Mbps uplink, highlighting its effectiveness in mitigating bufferbloat without sacrificing bandwidth. Similar RRUL benchmarks on bufferbloat.net across cable, DSL, and fiber connections showed CoDel reducing latency under load by orders of magnitude, often from hundreds of milliseconds to under 10 ms in shaped home setups. As of 2025, integrations of CoDel in firewall distributions such as pfSense and OPNsense have yielded improved bufferbloat scores in user benchmarks. Configurations using FQ-CoDel limiters, tested with online bufferbloat tests, consistently achieved A+ grades with latencies of 5-10 ms under load, compared to C grades (50-100 ms spikes) without shaping. In recent releases, CoDel-based traffic shaping similarly elevated scores to A or higher in most home and small-office environments, provided bandwidth limits are tuned to observed speeds rather than advertised rates. RouterOS v7 implementations of FQ-CoDel hybrids, as reported in community benchmarks, enhanced test results by reducing latency spikes during multi-stream saturation, though optimal performance required disabling fast-path features on high-throughput links exceeding 450 Mbps. Case studies illustrate CoDel's role in larger-scale ISP trials and emerging networks. Comcast, an early adopter, funded AQM research including CoDel and deployed related techniques to address bufferbloat, achieving up to 90% reductions in working latency across congested cable segments by 2021, with ongoing refinements in DOCSIS systems.
In mobile contexts, CoDel has been evaluated for 5G edge buffering to support low-latency services; a study on non-terrestrial 5G networks positioned CoDel at various protocol layers, demonstrating improved queue management and reduced end-to-end delays in dynamic radio environments without over-bloating radio link control buffers. These deployments confirm simulation baselines in which CoDel stabilized delays during capacity fluctuations, translating to real networks with variable 5G handovers. Despite these gains, CoDel introduces challenges in resource-constrained environments. On low-end hardware such as entry-level routers (e.g., the WNDR3800), FQ-CoDel activation can impose CPU overhead, reducing achievable throughput by 20-50% on links over 500 Mbps due to per-flow queuing computations, necessitating hardware upgrades for gigabit speeds. Interactions with congestion control variants like BBR require careful tuning; while BBR pairs effectively with FQ-CoDel to mitigate unfairness in mixed flows, aggressive BBR pacing can exacerbate queue drops on shallow buffers, leading to 10-20% throughput variability at lossy edges unless ECN is enabled.

Implementations and Adoption

Kernel and Router Integrations

CoDel was integrated into the Linux kernel starting with version 3.5, released in July 2012, as a queue discipline (qdisc) within the traffic control (tc) subsystem, enabling its use for active queue management on network interfaces. This implementation, contributed by Eric Dumazet, resides in the net/sched/sch_codel.c module and supports CoDel's core mechanics for controlling buffer delay without manual tuning. In BSD variants, CoDel has been adopted through firewall and shaping frameworks. FreeBSD-based systems like pfSense utilize CoDel limiters to mitigate bufferbloat by applying delay-based packet dropping in traffic-shaper configurations. Similarly, OPNsense integrates CoDel within its shaper modules, particularly through pipes and queues that support FQ-CoDel for upload and download traffic management. For router firmware, OpenWrt has included CoDel as a default option in its Smart Queue Management (SQM) scripts since version 14.07 in 2014, allowing easy deployment on consumer and small-business routers to address bufferbloat. The standardization of CoDel occurred in 2018 via RFC 8289, which defines its framework for controlling bufferbloat-induced delays across diverse networking environments, facilitating broader adoption in kernels and devices. Additionally, the ns-3 network simulator maintains an updated CoDel queue disc model as of October 2025, supporting research evaluations of its performance in simulated topologies.

Open-Source and Commercial Use

CoDel has been prominently featured in open-source networking projects, particularly those under the bufferbloat.net umbrella, which focus on alleviating excessive buffering in routers and home networks. These efforts include integrations in firmware like OpenWrt and CeroWrt, where CoDel serves as a default active queue management (AQM) mechanism to maintain low latency without manual tuning. In commercial deployments, CoDel and its extension FQ-CoDel have been adopted by internet service providers (ISPs) and router manufacturers to enhance the end-user experience. For example, the French ISP Free.fr incorporated FQ-CoDel into its Freebox Revolution (V6) starting in 2012, enabling widespread bufferbloat mitigation across its network. Consumer-grade devices, such as Eero mesh routers, utilize FQ-CoDel for internal traffic management within the mesh topology, while Google Wifi implements it to optimize queue discipline and reduce latency spikes during high-bandwidth usage. Adoption trends through 2025 reflect growing integration of CoDel in diverse ecosystems, building on its foundational role in kernels and router firmware. This has extended to tools like Flent, an open-source network testing suite developed by the bufferbloat community, which is commonly used to evaluate and fine-tune CoDel configurations in both development and deployment settings. However, challenges remain in broader commercialization, including vendor lock-in that restricts access to CoDel parameters in proprietary hardware from some manufacturers, potentially hindering optimal customization. Additionally, documentation for CoDel implementations on non-Linux platforms, such as BSD-based systems, often lacks completeness, complicating porting and maintenance efforts.

Extensions and Derivatives

FQ-CoDel

FQ-CoDel, or Flow Queue CoDel, is a hybrid packet scheduler and active queue management (AQM) algorithm that integrates fair queuing (FQ) with the CoDel AQM to enhance network fairness and latency control. By isolating individual network flows into separate queues, FQ-CoDel applies CoDel's delay-based packet dropping independently to each queue, preventing any single flow from monopolizing bandwidth and exacerbating bufferbloat. This design addresses limitations of standalone CoDel by ensuring equitable treatment across diverse traffic types, such as bursty high-volume streams and low-rate interactive flows like DNS queries or video calls. The mechanics of FQ-CoDel rely on hash-based flow classification of incoming packets. Packets are hashed on a 5-tuple key (source/destination addresses, ports, and protocol) with a Jenkins hash and a random perturbation, directing them to one of a configurable number of queues (1024 by default). Within this framework, bandwidth allocation employs a deficit round-robin (DRR) scheduler, where each queue receives a quantum of bytes (1514 by default) per round, promoting proportional sharing while preserving packet order per flow. CoDel operates on each queue to monitor sojourn times, dropping packets if the minimum delay exceeds the target threshold (default 5 ms) for a control interval (default 100 ms), thus maintaining low latency without explicit configuration. FQ-CoDel offers significant advantages in handling complex traffic scenarios. It excels at managing mixed traffic by interleaving packets from multiple flows, which improves link utilization and isolates low-rate traffic from high-bandwidth competitors. The per-flow queuing substantially reduces head-of-line (HOL) blocking, where bursty traffic in one flow would otherwise delay packets in others, leading to more consistent performance for latency-sensitive applications. Adoption of FQ-CoDel has been widespread, particularly in Linux-based systems, where it was introduced in kernel 3.5 in 2012 and has become the default queue discipline in many distributions and home routers, such as those using OpenWrt, starting around 2015.
It is the default qdisc in major Linux distributions (via systemd's default-qdisc setting) as well as in various router firmware. Recent evaluations in 5G networks highlight its role in AQM frameworks for disaggregated radio access, providing fair bandwidth distribution among concurrent flows in high-mobility environments. CoDel's emphasis on controlling queuing delay through sojourn time measurements has influenced several subsequent active queue management (AQM) algorithms, which adapt or contrast its approach for specific network environments. One prominent example is Proportional Integral controller Enhanced (PIE), developed by researchers at Cisco, which addresses bufferbloat by marking or dropping packets with a probability derived from the estimated queuing delay and its trend, rather than from direct per-packet delay measurements. PIE is designed to be lightweight and parameter-light, making it particularly suitable for resource-constrained devices and mobile networks, where it achieves low latency with minimal computational overhead compared to CoDel's dequeue-time control. Its proportional-integral control loop adjusts drop probability dynamically to stabilize queuing latency, and it has been standardized for use in DOCSIS 3.1 cable networks via RFC 8034. Another related algorithm is CAKE (Common Applications Kept Enhanced), which extends FQ-CoDel's principles with enhanced AQM features, including differentiated services (DiffServ) awareness for prioritizing traffic classes while mitigating bufferbloat. CAKE incorporates a variant of CoDel's control logic but adds bandwidth shaping and per-host fairness, making it effective for home routers and asymmetric links; by 2025, it has become a default choice in distributions such as OpenWrt for its comprehensive handling of diverse application traffic.
In comparisons, CoDel's focus on standing queue delay enables robust latency control across varying loads but can be more computationally intensive, whereas PIE's reliance on an estimated drop probability offers simpler implementation and better integration with loss-based congestion controls, though it may underperform in high-variability wireless scenarios. These algorithms have evolved alongside congestion control mechanisms like BBR, which models available bandwidth and RTT to complement AQM by reducing self-induced congestion, leading to hybrid deployments that improve throughput and fairness in modern networks. IETF discussions, such as those in RFC 9743 (published March 2025), provide guidelines for evaluating new congestion control algorithms, including testing their interactions with AQMs like CoDel and PIE to ensure fairness, stability, and low latency across diverse topologies.
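The flow-classification and DRR steps described for FQ-CoDel can be sketched as follows. This is a simplified illustration: CRC32 stands in for the Jenkins hash used by the Linux implementation, the perturbation value is arbitrary, and the per-queue CoDel stage is omitted:

```python
import zlib

NUM_QUEUES = 1024  # fq_codel's default flow-queue count
QUANTUM = 1514     # default DRR quantum in bytes (one Ethernet frame)

def flow_index(src, dst, sport, dport, proto, perturb=0x9747B28C):
    """Hash the 5-tuple (plus a perturbation) to a flow-queue index.

    CRC32 is used here purely for illustration in place of the Jenkins hash.
    """
    key = f"{src}|{dst}|{sport}|{dport}|{proto}|{perturb}".encode()
    return zlib.crc32(key) % NUM_QUEUES

def drr_schedule(queues):
    """One deficit round-robin pass over per-flow byte-string queues."""
    sent = []
    deficits = {i: 0 for i in queues}
    for i, pkts in queues.items():
        deficits[i] += QUANTUM  # each queue earns one quantum per round
        while pkts and len(pkts[0]) <= deficits[i]:
            pkt = pkts.pop(0)
            deficits[i] -= len(pkt)
            sent.append((i, pkt))
    return sent
```

Because each queue can send at most roughly one quantum per round, a bulk flow cannot starve a sparse one: the sparse flow's packets are interleaved into the output every round regardless of how much the bulk flow has backlogged.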

References

  1. [1]
    RFC 8289 - Controlled Delay Active Queue Management
    This document describes CoDel (Controlled Delay) -- a general framework that controls bufferbloat-generated excess delay in modern networking environments.
  2. [2]
    Controlling Queue Delay
    May 6, 2012 · This article aims to provide part of the bufferbloat solution, proposing an innovative approach to AQM suitable for today's Internet called CoDel.
  3. [3]
    Active queue management in 5G and beyond cellular networks ...
    Apr 15, 2025 · This paper proposes a state-of-the-art framework for adapting Active Queue Management (AQM) in 5G and beyond cellular networks with disaggregated Radio Access ...
  4. [4]
    A Hybrid Active Queue Management Algorithm for Packet ...
    Sep 27, 2025 · This paper introduces a novel hybrid active queue management (HAQM) algorithm, which combines elements of both packet‐oriented and delay‐ ...
  5. [5]
    RFC 8033: Proportional Integral Controller Enhanced (PIE)
    ... Bufferbloat Problem Abstract Bufferbloat is a phenomenon in which excess buffers in the network cause high latency and latency variation. As more and more ...
  6. [6]
    [PDF] Bufferbloat - Dark Buffers in the Internet - IETF
    Jim Gettys. Bell Labs. March 24, 2011 james.gettys@alcatel-lucent.com, jg@freedesktop.org. Page 2. Bufferbloat, March 24, 2011. © Alcatel-Lucent 2010, 2011.Missing: definition impacts
  7. [7]
    Characterization Guidelines for Active Queue Management (AQM)
    Bufferbloat [BB2011] is the consequence of deploying large, unmanaged buffers on the Internet -- the buffering has often been measured to be ten times or a ...
  8. [8]
    Controlling queue delay | Communications of the ACM
    Jacobson, V., Nichols, K. and Poduri, K. RED in a different light, 1999; http://www.cnaf.infn.it/~ferrari/papers/ispn/red_light_9_30.pdf. ... CoDel.html ...
  9. [9]
    [PDF] Random Early Detection Gateways for Congestion Avoidance
    This paper presents Random Early Detection (RED) gateways for congestion avoidance in packet-switched networks. The gateway detects incipient congestion by ...
  10. [10]
    RFC 2309: Recommendations on Queue Management and ...
    ... traffic issues discussed in this memo. Preparation of this memo resulted ... Such applications can grab an unfair share of the network bandwidth. For ...
  11. [11]
    Machine Learning Approaches for Active Queue Management - arXiv
    Oct 3, 2024 · This section elaborates on the history and development of AQM ... RFC 2309 recommended RED as an effective AQM algorithm to solve the above ...
  12. [12]
    RFC 8289: Controlled Delay Active Queue Management
    This document describes CoDel (Controlled Delay) -- a general framework that controls bufferbloat-generated excess delay in modern networking environments.
  13. [13]
    [PDF] 1 Controlling Queue Delay - People @EECS
    This article aims to provide part of the bufferbloat solution, proposing an innovative approach to AQM suitable for today's Internet called CoDel (for ...
  14. [14]
    RFC 8033 - Proportional Integral Controller Enhanced (PIE)
    PIE is a lightweight active queue management design that controls average queuing latency to a target value, addressing bufferbloat.
  15. [15]
    Appendix: CoDel pseudocode
    CoDel pseudocode summary.
  16. [16]
    draft-nichols-tsvwg-codel-01 - IETF Datatracker
    This is an older version of an Internet-Draft whose latest revision state is "Replaced".
  17. [17]
    A Simulation-Based Comparative Study of Controlled Delay (CoDel ...
    Jul 17, 2023 · CoDel's effectiveness is evaluated by running simulations in ns-3 and comparing its results to that of Random Early Detection (RED), another promising network ...
  18. [18]
    Fixing Bufferbloat on Comcast's "Blast" 50/10Mbps Service
    Summary of CoDel performance and latency improvements on a Comcast setup.
  19. [19]
    RRUL Rogues Gallery - Bufferbloat.net
    Summary of CoDel/FQ-CoDel benchmark results.
  20. [20]
    FQ_Codel vs FQ_Pie - OPNsense Forum
    Mar 9, 2025 · The bufferbloat results in the Waveform bufferbloat test gave me most times an A+, but sometimes only an A grade with 5-10 ms more latency ...
  21. [21]
    Configuring CoDel Limiters for Bufferbloat | pfSense Documentation
    Aug 26, 2025 · In most cases, the new score should be an A or higher. If the score does not improve, or gets worse, there is likely a problem with the ...
  22. [22]
    fq_codel/CAKE stories? - General - MikroTik community forum
    Jan 6, 2025 · I'm always interested in how people are using cake and fq_codel. We are adding some new features to cake in particular of late.
  23. [23]
    [PDF] Improving Latency with Active Queue Management (AQM) During ...
    Jul 30, 2021 · Active Queue Management (AQM) was used to improve latency by addressing 'buffer bloat' and reducing initial buffering time for video ...
  24. [24]
    [PDF] Active Queue Management as Quality of Service Enabler for 5G ...
    5G networks has not been deeply studied before. In this paper, we study the use of CoDel within the 5G domain at different entities and layers in order to ...
  25. [25]
    [PDF] Dynamic Buffer Sizing and Pacing as Enablers of 5G Low-Latency ...
    Bufferbloat is extensively studied within the actual 5G QoS scenario, which presents multiple challenges inherited from the dynamic radio link nature and the ...
  26. [26]
    Which mikrotik for 1Gbps WAN, SOHO, and queue enabled (fqcodel ...
    Oct 13, 2022 · We now are using hEx and can provide full 500 Mbps traffic without any queue. Tried enabling fq-codel and the throughput dropped to around ...
  27. [27]
    An emulation-based evaluation of TCP BBRv2 Alpha for wired ...
    Sep 1, 2020 · Such unfairness can be reduced by using advanced AQM schemes such as FQ-CoDel and CAKE. Regarding fairness among BBRv2 flows, results show that ...
  28. [28]
    Can I use BBR with fq_codel? - Google Groups
    Yes, it is fine to use BBR with fq_codel on recent kernels. For kernels v4.20 and later, BBR will use the Linux TCP-layer pacing if the connection notices that ...
  29. [29]
    draft-ietf-aqm-codel-01
    Our code, intended for simulation experiments, is available at http://pollere.net/CoDel.html and being integrated into the ns-2 distribution. Andrew ...
  30. [30]
    Linux_3.5 - Linux Kernel Newbies
    Jul 21, 2012 · Linux 3.5 has been released on 21 Jul 2012. Summary: This release includes support for metadata checksums in ext4, userspace probes for ...
  31. [31]
    Fighting Bufferbloat with FQ_CoDel - OPNsense documentation
    Detailed FQ-CoDel Tuning. FQ_CoDel is designed to be a “no-knobs” algorithm. After you enter the Download and Upload bandwidth settings, the defaults for the ...
  32. [32]
    Codel Wiki - Bufferbloat.net
    Real-world deployment examples and benchmark results for CoDel.
  33. [33]
    [PDF] Realizing CoDel AQM for Programmable Switch ASIC - IIIT-Delhi
    Due to these limitations, previous research has been unsuccessful in implementing an RFC-compliant CoDel AQM on programmable switch ASICs. To solve this ...
  34. [34]
    32.8. CoDel queue disc — Model Library - ns-3
    Oct 18, 2025 · CoDel (Controlled Delay Management) is a queuing discipline that uses a packet's sojourn time (time in queue) to make decisions on packet drops.
  35. [35]
    [Bloat] eero using fq_codel and Cake
    "It's fq-codel inside the mesh, and ... gateway is a second generation eero. If your gateway is a first generation eero, then it's codel on the uplink too."
  36. [36]
  37. [37]
    Best practices for benchmarking Codel and FQ Codel - Bufferbloat.net
    May 26, 2014 · Best Practices for Benchmarking CoDel and FQ CoDel (and almost any other network subsystem!) Document version: 1.5, May 26, 2014.
  38. [38]
  39. [39]
    draft-ietf-aqm-pie-10
    Proportional Integral Controller Enhanced (PIE): A Lightweight Control Scheme to Address the Bufferbloat Problem · This is an older version of an Internet-Draft ...
  40. [40]
    FQ-PIE Queue Discipline in the Linux Kernel - IEEE Xplore
    Feb 17, 2020 · Abstract: Proportional Integral controller Enhanced (PIE) is an Active Queue Management (AQM) mechanism to address the bufferbloat problem.
  41. [41]
    Cake - Bufferbloat.net
    An integral shaper (that can be on or off or tuned dynamically). Is much “tighter” than htb - uses about 30% less cpu on low end hardware (don't take that as a ...
  42. [42]
    [OpenWrt Wiki] SQM (Smart Queue Management)
    Aug 1, 2025 · While Cake is the preferred discipline as it is excellent at mitigating bufferbloat, fq_codel is a faster, albeit less comprehensive option. One ...
  43. [43]
    Operating ranges, tunability and performance of CoDel and PIE
    May 1, 2017 · CoDel and PIE are two recent Active Queue Management (AQM) algorithms that have been proposed to address bufferbloat by reducing the queuing delay.