
Active queue management

Active queue management (AQM) is a proactive mechanism employed in routers and switches to regulate queue lengths or average queuing delay by selectively dropping or marking packets before buffers overflow. This approach signals endpoint devices, such as those using TCP congestion control, to reduce their transmission rates, thereby preventing bufferbloat—a phenomenon where excessive buffering leads to high latency and jitter—and maintaining efficient throughput. Unlike passive tail-drop policies, AQM algorithms aim to keep queues short enough for low-delay applications while allowing bursts of traffic, ultimately reducing latency and packet loss across diverse link capacities. The foundational AQM algorithm, Random Early Detection (RED), was introduced in 1993 by Sally Floyd and Van Jacobson to detect incipient congestion through an exponentially weighted moving average of queue size and probabilistically drop packets with increasing likelihood as the queue grows beyond a minimum threshold. RED sought to avoid global synchronization of flows and bias against bursty traffic, promoting fair bandwidth allocation in packet-switched networks. However, RED's sensitivity to parameter tuning and instability in certain scenarios limited its widespread adoption, prompting the IETF to recommend simpler, more robust alternatives in 2015. Subsequent advancements addressed these challenges, with Controlled Delay (CoDel), proposed in 2012 by Kathleen Nichols and Van Jacobson, focusing directly on controlling sojourn time (queuing delay plus service time) rather than queue length, using a target delay threshold to drop packets without manual configuration. Similarly, Proportional Integral controller Enhanced (PIE), developed by researchers at Cisco in late 2012 and standardized in RFC 8033, employs a feedback control loop based on recent queue delays to adjust drop probabilities, offering lightweight deployment suitable for high-speed links and integration with Explicit Congestion Notification (ECN). 
These modern AQMs have gained traction in combating bufferbloat, with the IETF endorsing their implementation in network devices to enhance latency-sensitive applications like video streaming and web browsing.

Introduction

Definition and purpose

In packet-switched networks, queues arise at routers and switches to manage variable traffic rates, absorbing short bursts of data and facilitating statistical multiplexing of flows. Buffers serve a critical role by temporarily storing packets during these bursts, preventing immediate drops and allowing the network to handle transient congestion without excessive loss. However, unmanaged queues can lead to prolonged delays if buffers grow too large under sustained load. Active queue management (AQM) refers to proactive algorithms in network devices that monitor queue lengths or mean packet sojourn times and intentionally drop or mark packets early to signal congestion before buffers reach full capacity. This approach enables devices to manage queue buildup dynamically, rather than relying solely on overflow conditions. The core purpose of AQM is to keep standing delays low, thereby reducing end-to-end latency, combating bufferbloat—where oversized buffers inflate delays without boosting throughput—and enhancing responsiveness for interactive applications. Unlike passive queue management, which reactively discards packets only at buffer limits, AQM prevents issues like TCP flow synchronization and lock-out, while improving fairness and throughput for congestion-responsive protocols. The Internet Engineering Task Force (IETF) in RFC 7567 designates AQM, encompassing both informed dropping and marking (such as with Explicit Congestion Notification), as a best current practice for widespread deployment in network infrastructure to address these challenges.

Historical background

In the early days of the Internet during the 1980s and 1990s, network routers predominantly employed drop-tail queue management, where incoming packets were accepted until the buffer filled, at which point subsequent packets were discarded. This approach, while simple, contributed to global TCP synchronization, a phenomenon where multiple flows simultaneously reduced their transmission rates upon detecting packet loss, leading to inefficient link utilization and increased delays across the network. The issue stemmed from TCP's additive-increase/multiplicative-decrease (AIMD) congestion control mechanism, which reacted uniformly to tail-drop losses, causing synchronized bursts and valleys in traffic. Active queue management (AQM) emerged in the mid-1990s as a response to these limitations, aiming to proactively signal congestion before buffers overflowed and to mitigate biases against short flows or bursty traffic. The seminal proposal was Random Early Detection (RED), introduced in a 1993 paper by Sally Floyd and Van Jacobson, which randomized packet drops at varying probabilities based on average queue length to desynchronize flows and prevent lock-out problems. This marked a shift from purely reactive drop-tail policies toward intelligent queue disciplines, influencing subsequent router designs. Key milestones in AQM's development included the IETF's formal recommendations in RFC 2309 (1998), which advocated for AQM techniques like RED to improve overall performance by avoiding global synchronization. Integration with Explicit Congestion Notification (ECN) followed in RFC 3168 (2001), allowing routers to mark packets instead of dropping them to convey congestion signals without loss, enhancing compatibility with TCP. By 2015, RFC 7567 updated these guidelines, emphasizing the need for renewed AQM deployment to combat bufferbloat—excessive buffering causing high latency—in light of evolving network conditions. 
The evolution of AQM was driven by the proliferation of broadband and wireless networks in the 2000s, where large buffers in home routers and access links amplified bufferbloat, making traditional queue management inadequate for latency-sensitive applications and prompting a resurgence in AQM research and standardization. Following RFC 7567, the IETF standardized additional robust AQM mechanisms, including the Proportional Integral controller Enhanced (PIE) in RFC 8033 (2017) and Flow Queue CoDel (FQ-CoDel) in RFC 8290 (2018), with further advancements in the Low Latency, Low Loss, Scalable throughput (L4S) architecture incorporating dual-queue coupled AQM as defined in RFC 9332 (2023).

Fundamentals of queue management

Passive queue management

Passive queue management encompasses traditional, reactive approaches to handling packet queues in network routers, primarily relying on first-in-first-out (FIFO) buffering without proactive intervention. In these methods, packets are accepted into the queue until the buffer reaches its maximum capacity, at which point incoming packets are discarded. The predominant policy is tail-drop, where the most recent arriving packet is dropped when the queue is full, often resulting in bursts of consecutive drops during periods of high traffic load. This simplicity made passive management the standard in early Internet routers, as it requires minimal computational resources. Several inherent behaviors undermine the effectiveness of passive queue management. When queues fill to capacity, tail-drop can induce global synchronization in TCP flows sharing the same bottleneck link; simultaneous packet losses across multiple connections trigger uniform backoff in transmission rates, leading to synchronized slowdowns and subsequent ramp-ups that oscillate network utilization inefficiently. In FIFO queues, head-of-line (HOL) blocking exacerbates delays, as a single delayed or problematic packet at the queue's front prevents all trailing packets from advancing, even if they could be processed independently. Furthermore, lock-out occurs when a small number of aggressive or bursty flows occupy the entire buffer, systematically excluding shorter or more restrained flows and promoting unfair resource allocation. Tail-drop served as the default queue discipline in pre-1990s routers due to its straightforward implementation, with no need for monitoring average queue lengths or probabilistic decisions. An alternative variant, head-drop, discards the packet at the front of the queue (the oldest one) upon overflow instead of the tail, aiming to alleviate lock-out by favoring newer arrivals; however, it remains fundamentally reactive and does not prevent full-queue conditions or bufferbloat. 
At its core, passive queue management operates on a basic threshold model: if the current queue length q has reached the buffer size B, the arriving packet is dropped; otherwise, it is enqueued. This deterministic rule, expressed as \text{drop if } q \geq B, lacks mechanisms for early warning or graduated responses, amplifying volatility compared to more sophisticated alternatives.
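The threshold rule above can be sketched in a few lines of Python; the class and parameter names here are illustrative, not taken from any real implementation:

```python
from collections import deque

class TailDropQueue:
    """Illustrative sketch of passive (tail-drop) queue management."""

    def __init__(self, capacity):
        self.capacity = capacity  # buffer size B, in packets
        self.queue = deque()

    def enqueue(self, packet):
        # Deterministic rule: drop the arrival once the buffer is full;
        # there is no early warning and no graduated response.
        if len(self.queue) >= self.capacity:
            return False  # packet dropped at the tail
        self.queue.append(packet)
        return True

q = TailDropQueue(capacity=3)
outcomes = [q.enqueue(n) for n in range(5)]
# the first three packets are accepted, the remaining arrivals dropped
```

Because acceptance depends only on the instantaneous occupancy, consecutive arrivals during overload are dropped in a burst, which is exactly the behavior that triggers the synchronization problems described above.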

Congestion signals and control

Network congestion occurs when the arrival rate of packets exceeds the processing or forwarding capacity of a resource, such as a router or link, resulting in queue buildup, increased latency, and eventual packet loss. This overload leads to a degradation in performance, as packets accumulate in buffers, causing delays and reducing overall throughput. In response to congestion, Transmission Control Protocol (TCP) employs congestion control mechanisms to adjust the sending rate dynamically. The core of TCP's approach is the Additive Increase Multiplicative Decrease (AIMD) algorithm, which balances efficiency and fairness among flows sharing a bottleneck link. During the congestion avoidance phase, the congestion window (cwnd), which limits the amount of unacknowledged data in flight, increases linearly by one maximum segment size (MSS) per round-trip time (RTT) to probe for available bandwidth: \text{cwnd}_{\text{new}} = \text{cwnd}_{\text{old}} + 1. Upon detecting congestion, cwnd is halved multiplicatively to quickly back off and alleviate the overload: \text{cwnd}_{\text{new}} = \frac{\text{cwnd}_{\text{old}}}{2}. This AIMD strategy ensures convergence toward an equitable bandwidth allocation while damping oscillations. TCP traditionally relies on implicit congestion signals, primarily packet loss detected via timeouts or duplicate acknowledgments, to trigger these adjustments. However, explicit signals, such as those provided by Explicit Congestion Notification (ECN), allow routers to mark packets with congestion information using bits in the IP header, enabling endpoints to react without dropping packets. Round-trip time (RTT) and throughput play key roles in congestion detection: as queues build, RTT increases due to queuing delay, indirectly signaling overload, while throughput—approximated as cwnd divided by RTT—drops when capacity is exceeded. 
Loss-based implicit signals in TCP often delay the response until buffers overflow and packets are dropped, allowing queues to grow excessively and exacerbate bufferbloat, thus motivating the need for earlier, proactive congestion notification in advanced queue management.
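The AIMD updates above can be sketched as a single per-RTT function (window in units of MSS; the function name is hypothetical):

```python
def aimd_update(cwnd, congested):
    """One congestion-avoidance update per RTT: additive increase of one
    MSS while the path appears uncongested, multiplicative decrease
    (halving) when a loss or ECN signal arrives."""
    return cwnd / 2.0 if congested else cwnd + 1.0

cwnd = 10.0
cwnd = aimd_update(cwnd, congested=False)  # 11.0: probing for bandwidth
cwnd = aimd_update(cwnd, congested=True)   # 5.5: backing off
```

Iterating this rule produces the familiar sawtooth window trajectory; when many flows see losses at the same instant, all their sawtooths align, which is the synchronization problem AQM is designed to break.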

Core AQM mechanisms

Packet dropping policies

Active queue management (AQM) employs packet dropping policies to proactively signal congestion by discarding packets before buffer overflow, thereby maintaining lower average queue lengths and reducing latency. These policies typically monitor the average queue length using an exponential weighted moving average (EWMA) to smooth out instantaneous variations and better reflect persistent trends. The EWMA is updated as \text{avg} = (1 - w_q) \cdot \text{avg} + w_q \cdot q, where \text{avg} is the estimated average queue size, q is the instantaneous queue size, and w_q is a small weighting factor (often around 0.002) that determines the responsiveness to recent changes. The core dropping policy calculates a drop probability p based on the average queue length relative to configured thresholds, aiming to mimic random loss patterns that encourage sources to reduce their sending rates without causing global synchronization across flows. In the seminal Random Early Detection (RED) approach, no packets are dropped if \text{avg} < \text{min}_{th} (minimum threshold); for \text{min}_{th} \leq \text{avg} < \text{max}_{th} (maximum threshold), the base drop probability is p_b = \max_p \cdot \frac{\text{avg} - \text{min}_{th}}{\text{max}_{th} - \text{min}_{th}}, where \max_p is the maximum marking probability (typically 0.02); and if \text{avg} \geq \text{max}_{th}, packets are dropped with probability 1 (full drop). To further desynchronize drops and ensure even distribution across flows, the actual probability p is adjusted as p = \frac{p_b}{1 - \text{count} \cdot p_b}, where \text{count} tracks the number of packets since the last drop. This probabilistic mechanism spreads losses fairly, preventing any single flow from being disproportionately affected and avoiding the lock-out issues of tail-drop queues. Variants of these policies address specific behaviors in different network conditions. 
The original RED uses a "hard drop" at \text{max}_{th}, immediately escalating to full dropping, which can lead to abrupt congestion responses. In contrast, the "gentle drop" variant linearly increases the drop probability from \max_p at \text{max}_{th} to 1 at twice \text{max}_{th}, providing a smoother transition and better control during heavy load without overly aggressive drops. Some AQM schemes incorporate deterministic drops, where packets are dropped based on fixed rules (e.g., queue position or flow identifiers) rather than probability, to achieve precise fairness or simplify implementation in resource-constrained environments. While dropping induces loss, alternatives like Explicit Congestion Notification (ECN) marking allow congestion signaling without packet discard.
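The EWMA update and the RED-style drop probability described above can be sketched as follows; the function names are illustrative rather than drawn from any particular implementation:

```python
def ewma_update(avg, q, w_q=0.002):
    """Smooth the instantaneous queue size q into the running average."""
    return (1.0 - w_q) * avg + w_q * q

def red_drop_probability(avg, min_th, max_th, max_p=0.02, count=0):
    """RED drop probability: zero below min_th, a linear ramp between the
    thresholds, and a certain drop at or above max_th; the count term
    spreads drops evenly between marked packets."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    p_b = max_p * (avg - min_th) / (max_th - min_th)
    return min(1.0, p_b / (1.0 - count * p_b))

# Halfway between thresholds of 5 and 15 packets, p_b = 0.01.
p = red_drop_probability(avg=10, min_th=5, max_th=15)
```

A router would compare a uniform random draw against this probability on each arrival; the small default w_q means avg moves only 0.2% of the way toward the instantaneous queue size per packet, which is what gives RED its burst tolerance.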

Packet marking techniques

Packet marking techniques in active queue management (AQM) utilize Explicit Congestion Notification (ECN) to signal congestion to endpoints without discarding packets, thereby preserving throughput while prompting rate adjustments. ECN, as defined in RFC 3168, incorporates a two-bit field in the IP header—whose codepoints include the ECN-Capable Transport (ECT) values—and additional flags in the TCP header to enable this notification mechanism. When a packet is marked as ECN-capable by setting one of the ECT codepoints (ECT(0) or ECT(1)), a congested router can set the Congestion Experienced (CE) codepoint in the IP header instead of dropping the packet, allowing the sender to receive feedback via acknowledgments and reduce its transmission rate accordingly. In AQM implementations, marking policies operate analogously to probabilistic dropping schemes but substitute packet loss with CE marking. For instance, when the queue length exceeds a predefined threshold, the router calculates a marking probability p based on the current occupancy, similar to the drop probability in traditional AQM algorithms, and applies the CE mark to eligible ECN-capable packets with that probability. Upon receiving acknowledgments indicating that ECT-marked packets arrived with the CE codepoint set, TCP senders interpret this as an equivalent signal to a dropped packet and invoke congestion control measures, such as halving the congestion window. This approach integrates seamlessly with AQM to provide early warnings, ensuring that marking decisions are made proactively to prevent buffer overflow. The primary advantages of ECN-based marking in AQM stem from its ability to avoid packet drops, which is particularly beneficial for real-time applications sensitive to loss, such as voice or video streaming, by eliminating the need for retransmissions and reducing associated delays. Marking preserves all packets in flight, leading to lower overall latency and improved throughput compared to drop-based methods, as it minimizes retransmissions in transport protocols and enhances resource utilization across the network path. 
Furthermore, the marking probability p, derived from queue length metrics, mirrors drop-based formulations to maintain stability, allowing AQM systems to achieve similar operating points in queue occupancy without incurring the overhead of lost packets. ECN marking requires mutual support from both endpoints and intermediate routers for full efficacy; if a sender does not negotiate ECN capability during connection setup or if non-supporting devices are encountered, the system falls back to traditional packet dropping to ensure reliable congestion signaling. This compatibility constraint limits widespread deployment in heterogeneous networks, though modern operating systems and transport protocols increasingly enable ECN by default to broaden its applicability.
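The mark-or-drop decision with its fallback behavior can be sketched as follows (a minimal illustration of the RFC 3168 semantics described above; the function name is hypothetical):

```python
import random

def congestion_action(ect_capable, p, rng=random.random):
    """Decide how to signal congestion for one packet: with probability p,
    CE-mark an ECN-capable packet, or drop a non-capable one as fallback;
    otherwise forward the packet unchanged."""
    if rng() >= p:
        return "forward"
    return "mark_ce" if ect_capable else "drop"

# Deterministic corner cases:
assert congestion_action(True, 1.0) == "mark_ce"   # ECT packet is marked
assert congestion_action(False, 1.0) == "drop"     # fallback to dropping
assert congestion_action(True, 0.0) == "forward"   # no congestion signal
```

The key point is that the probability calculation is shared with the drop-based policy; only the action taken on the selected packet differs, which is why ECN-capable and non-capable flows see equivalent congestion pressure.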

AQM algorithms

Random Early Detection (RED) and variants

Random Early Detection (RED) is a foundational active queue management algorithm that monitors the average queue length at a router to probabilistically drop packets and signal congestion to endpoints before the queue fills completely. The algorithm computes the average queue size using an exponential weighted moving average (EWMA) formula: \text{avg} \leftarrow (1 - w_q) \cdot \text{avg} + w_q \cdot q, where q is the instantaneous queue length and w_q is the queue weight parameter. When the average exceeds a minimum threshold \min_{th}, RED calculates a base drop probability p_b = \max_p \cdot \frac{\text{avg} - \min_{th}}{\max_{th} - \min_{th}}, where \max_p is the maximum drop probability and \max_{th} the maximum threshold; the final probability is adjusted as p_a = \frac{p_b}{1 - \text{count} \cdot p_b} to spread drops evenly, and drops become certain when the average reaches \max_{th}. This probabilistic early dropping aims to avoid global synchronization of flows and reduce bias against bursty traffic sources. Key parameters in RED include w_q (typically 0.002, which sets the averaging time constant), \min_{th} (e.g., 5 packets), \max_{th} (e.g., 15 packets, at least twice \min_{th}), and \max_p (e.g., 0.02). Tuning these is challenging because optimal values depend on link bandwidth, traffic mix, and round-trip times; a small w_q improves burst tolerance by filtering short-term variations but slows the response to persistent congestion, while a high \max_p can cause unnecessary drops during bursts. For instance, RED tolerates bursts up to \min_{th} without drops, but improper settings lead to queue oscillations or underutilization. Variants of RED address these tuning issues and enhance functionality. Adaptive RED (ARED) automatically adjusts \max_p using an additive-increase multiplicative-decrease (AIMD) mechanism every 500 ms to maintain the average queue length within \min_{th} and \max_{th}, with increments of at most 0.01 and decrements by a factor of 0.9, bounded between 0.01 and 0.5; it also sets w_q based on link speed for a 1-second time constant. 
Stabilized RED (SRED) incorporates a hash-based "zombie list" of up to 1000 recent flows to estimate the number of active connections N via hit rates (e.g., using a moving average with \alpha = 1/1000), scaling the drop probability with the square of the estimated flow count N alongside queue occupancy to stabilize the queue at a fraction of the buffer size (e.g., 1/3) independent of flow count. Early simulations of RED demonstrated reduced bias against bursty TCP traffic compared to tail-drop queues, achieving higher throughput (e.g., up to 95% link utilization) and avoiding synchronization, but results were sensitive to traffic mixes like web-like short flows, where queue lengths oscillated between 5 and 15 packets without careful parameter selection.
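The ARED adaptation quoted above (a 500 ms AIMD loop on \max_p, bounded in [0.01, 0.5]) can be sketched as follows; the 40-60% target band within [\min_{th}, \max_{th}] follows the published ARED description, and the function name is illustrative:

```python
def ared_adapt_max_p(max_p, avg, min_th, max_th):
    """Periodic (every 500 ms) AIMD adjustment of RED's max_p so that the
    average queue length settles inside a target band between the
    configured thresholds."""
    low = min_th + 0.4 * (max_th - min_th)   # lower edge of target band
    high = min_th + 0.6 * (max_th - min_th)  # upper edge of target band
    if avg > high:
        max_p = min(0.5, max_p + 0.01)       # additive increase, capped
    elif avg < low:
        max_p = max(0.01, max_p * 0.9)       # multiplicative decrease
    return max_p

# With thresholds of 5 and 15 packets, the target band is [9, 11]:
p1 = ared_adapt_max_p(0.02, avg=14, min_th=5, max_th=15)  # increased
p2 = ared_adapt_max_p(0.02, avg=6, min_th=5, max_th=15)   # decreased
```

Because the adjustment is slow and bounded, ARED trades a little responsiveness for robustness: a misconfigured \max_p drifts back into a workable range rather than being corrected instantly.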

Controlled Delay (CoDel) and derivatives

Controlled Delay (CoDel) is an active queue management algorithm that focuses on controlling the sojourn time, or queue delay, of packets rather than the queue length to mitigate bufferbloat. Unlike traditional approaches, CoDel does not monitor or use queue occupancy as a signal for congestion; instead, it tracks the minimum sojourn time observed over a rolling interval and initiates packet drops only when this minimum exceeds a predefined target delay, ensuring low latency without unnecessary throughput loss. The default target delay is 5 milliseconds, representing an acceptable standing queue delay that balances high link utilization with minimal added latency, while the default interval is 100 milliseconds, tuned to handle round-trip times (RTTs) from 10 to 300 milliseconds effectively. In operation, CoDel continuously measures the sojourn time of each dequeued packet and maintains a running minimum of these times over the interval. If the minimum sojourn time remains above the target for at least one full interval, CoDel enters a dropping state and drops the head-of-line packet during dequeue if its sojourn time exceeds the target, thereby targeting the oldest enqueued packet to signal congestion promptly. Subsequent drops are scheduled using a control law that sets the time until the next drop as the current time plus the interval divided by the square root of the drop count since entering the dropping state, formulated as t + \frac{\text{interval}}{\sqrt{\text{count}}}, where t is the current time, interval is 100 ms, and count begins at 1 after the initial drop. This square-root-based adjustment increases the drop rate gradually, reflecting the inverse-square-root relationship between TCP throughput and loss probability, and prevents over-dropping during transient overloads; drops cease when the minimum sojourn time falls below the target. Derivatives of CoDel extend its delay-control principles to address fairness and deployment challenges in diverse environments. 
FQ-CoDel (Flow Queue CoDel) integrates CoDel with fair flow queuing using deficit round-robin (DRR) scheduling, classifying packets into per-flow queues (default 1024) based on a hashed 5-tuple (source/destination IP addresses, ports, and protocol) to isolate traffic and prevent one flow from dominating others. This combination ensures fairness by applying CoDel independently to each flow queue, prioritizing low-rate "new" flows over established high-rate ones, and using byte-based DRR quanta (default 1514 bytes) to handle variable packet sizes equitably, thereby reducing latency variations and jitter. CAKE (Common Applications Kept Enhanced) further evolves this approach for home gateway scenarios by incorporating bandwidth shaping, per-host and per-flow fairness, and differentiated services (DiffServ) awareness. It extends CoDel's delay management with a rate-based shaper that compensates for link-layer overheads (e.g., Ethernet, PPPoE) to precisely limit output rates, while supporting DiffServ codepoints to deprioritize bulk traffic like downloads relative to interactive flows such as VoIP or gaming. CAKE also enhances flow isolation through improved hashing and includes TCP ACK filtering to optimize upstream performance, making it suitable for asymmetric last-mile connections without manual configuration. CoDel and its derivatives offer key advantages through their self-tuning nature, requiring no manual parameter adjustments to adapt to varying bandwidths or RTTs, which simplifies deployment across heterogeneous networks. They effectively accommodate bursty traffic by permitting short-term queues up to the interval duration without drops, maintaining high utilization (near 100%) while capping latency, as demonstrated in simulations where CoDel reduced queue delays to under 10 ms even under heavy load without significant throughput penalties.
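CoDel's control law for pacing drops while in the dropping state reduces to a one-line schedule (times in milliseconds; the function name is illustrative):

```python
from math import sqrt

def codel_next_drop(now_ms, count, interval_ms=100.0):
    """Schedule the next drop at now + interval/sqrt(count): as the drop
    count grows, drops arrive closer together, gently raising the loss
    rate seen by TCP, whose throughput scales as 1/sqrt(loss)."""
    return now_ms + interval_ms / sqrt(count)

gaps = [codel_next_drop(0.0, c) for c in (1, 4, 16)]
# successive gaps from time 0: 100 ms, 50 ms, 25 ms
```

The count resets when the minimum sojourn time falls back below the target, so a transient burst that clears on its own never escalates the drop rate.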

Other algorithms

BLUE is an active queue management algorithm that adjusts its packet drop (or marking) probability based on packet loss and link utilization events rather than queue length: it increases the probability upon buffer overflow or loss events and decreases it during link idle periods. Developed to achieve high link utilization with minimal queueing delay, BLUE thereby adapts to varying conditions without relying on average queue length estimates. Proportional Integral controller Enhanced (PIE) is an AQM scheme designed primarily for cable modem networks such as DOCSIS, where it estimates queueing delay from the queue length and measured departure rate and computes the drop probability through a proportional-integral controller. The drop probability is updated as \text{drop\_prob} \mathrel{+}= \alpha (\text{qdelay} - \text{target}) + \beta (\text{qdelay} - \text{qdelay\_old}), with defaults \alpha = 0.125, \beta = 1.25, and target = 15 ms. This approach enables lightweight control of average queueing latency while supporting Explicit Congestion Notification (ECN) for marking packets in congested scenarios. PIE's integration with ECN allows it to signal congestion without drops, aligning with modern transport protocols. Random Exponential Marking (REM) is an AQM algorithm that aims to maximize network utility by decoupling congestion measurement from queue length management, using a "price" signal based on aggregate link prices to compute exponential marking probabilities for packets. In REM, the marking probability is computed as p = 1 - \gamma^{\text{price}}, where \gamma < 1 is a parameter and the price aggregates rate and queue mismatch information to achieve proportional fairness in bandwidth allocation. This utility-maximizing framework makes REM suitable for environments with diverse traffic classes, ensuring low loss and delay under heavy loads. 
Stochastic Fair Blue (SFB) extends the BLUE algorithm to enforce fairness among flows by probabilistically isolating misbehaving or non-responsive flows through hashed bins that track flow statistics, allowing per-flow drop rates while maintaining scalability for large numbers of connections. SFB maps each flow into several virtual accounting bins using multiple hash functions, marking or dropping packets from flows whose bins all show high drop probabilities, thereby penalizing unresponsive flows without keeping per-flow state. This enables fair bandwidth sharing in the presence of aggressive traffic, such as unresponsive UDP streams, while preserving BLUE's simplicity.
Algorithm | Key Parameters                  | Primary Target                 | Deployment Focus
BLUE      | Drop probability, freeze timers | High utilization, low delay    | General IP networks (low parameter count)
PIE       | \alpha, \beta, target delay     | Queueing latency control       | DOCSIS cable modems (ECN support)
REM       | \gamma, price update rate       | Utility maximization, fairness | Diverse traffic environments
SFB       | Number of bins/hashes, bin size | Flow isolation and fairness    | High-speed routers with mixed flows
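PIE's proportional-integral update can be sketched as a single function (delays expressed in seconds; this is a simplified illustration of the update rule quoted above — the standardized algorithm also auto-scales \alpha and \beta with the current drop probability and handles a burst allowance):

```python
def pie_update(drop_prob, qdelay, qdelay_old, target=0.015,
               alpha=0.125, beta=1.25):
    """One PIE update interval: the proportional term tracks the gap to
    the target delay, the derivative-style term tracks the delay trend;
    the result is clamped to a valid probability in [0, 1]."""
    p = drop_prob + alpha * (qdelay - target) + beta * (qdelay - qdelay_old)
    return min(1.0, max(0.0, p))

# Delay of 30 ms, rising from 20 ms, against a 15 ms target:
p = pie_update(0.0, qdelay=0.030, qdelay_old=0.020)
```

Because the trend term reacts to whether the delay is growing or shrinking, PIE can begin backing off the drop probability before the delay has fully returned to the target, which helps damp oscillations.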

Benefits and challenges

Advantages in network performance

Active queue management (AQM) significantly reduces latency by maintaining short queues and combating bufferbloat, where excessive buffering in routers leads to high delays under load. Unlike passive drop-tail queuing, which allows queues to grow unchecked and can result in queuing delays of hundreds of milliseconds or more, AQM algorithms proactively signal congestion to keep average queue lengths low. For instance, the Controlled Delay (CoDel) algorithm targets a maximum sojourn time of 5 ms for packets, ensuring that latency-sensitive applications like VoIP and gaming experience minimal queuing even during bursts. Studies in access networks have shown AQM implementations achieving latencies of 15-30 ms under heavy load, compared to over 250 ms without AQM, representing an 8-16x improvement. In DOCSIS environments, PIE has reduced queueing latencies to approach a 5 ms median across bandwidths from 3 Mbps to 100 Mbps, while preserving near-100% link utilization. AQM enhances throughput stability by preventing global synchronization of flows and promoting fair sharing between short and long flows. Passive queues often lead to synchronized packet drops that cause oscillations in throughput, reducing overall network efficiency. AQM mitigates this through randomized early dropping or marking, maintaining higher and more consistent utilization; for example, algorithms like the Proportional Integral (PI) controller with Explicit Congestion Notification (ECN) achieve response times comparable to unloaded networks at 90% load, with over 90% of web responses under 500 ms. This stability extends to avoiding lock-out effects, where a single flow monopolizes buffers, ensuring all flows share capacity equitably without significant throughput penalties. AQM is particularly compatible with modern transport protocols, providing timely congestion signals that enhance their performance in diverse scenarios. For protocols like TCP BBR and QUIC, AQM integrates with Low Latency, Low Loss, Scalable Throughput (L4S) architectures to deliver sub-millisecond queuing delays—under 1 ms on average and 2 ms at the 99th percentile—compared to 5-20 ms with classic AQMs. 
This enables scalable throughput beyond 100 Gbps while minimizing loss through ECN marking, benefiting real-time applications like video streaming and web browsing by reducing initial buffering needs and improving quality metrics, such as a 2.7-point increase in VoIP quality scores. Overall, these gains underscore AQM's role in optimizing end-to-end performance without requiring endpoint modifications.

Limitations and deployment issues

Early active queue management (AQM) algorithms, such as Random Early Detection (RED), require careful tuning of parameters like the minimum and maximum queue thresholds, marking probability, and queue averaging weight to maintain stable performance. Improper settings can lead to under-dropping, resulting in excessive queue buildup and high latency, or over-dropping, which reduces throughput by unnecessarily discarding packets even under light load. This sensitivity to configuration makes deployment challenging in dynamic environments, as optimal parameters vary with traffic load and network conditions, often necessitating ongoing adjustments by operators. AQM introduces computational overhead through continuous queue monitoring, averaging calculations, and probabilistic dropping or marking decisions, which can strain router resources. In hardware-constrained environments like cable modems, algorithms requiring per-packet timestamping or complex scheduling—such as CoDel's head-of-queue processing—increase complexity and may exceed processing capabilities at high line rates. Legacy hardware often lacks support for these operations, limiting compatibility and requiring costly upgrades for full implementation. Deployment faces resistance from network engineers due to concerns over induced packet loss, as AQM proactively drops or marks packets to signal congestion, potentially degrading performance on non-AQM routers in mixed topologies. Additionally, incomplete support for Explicit Congestion Notification (ECN)—a key enabler for loss-free signaling—hampers effectiveness, with only around 2-3% of end clients attempting ECN negotiation across diverse internet paths as of mid-2025. Recent advancements, including machine learning-based AQMs and implementations in 5G networks as of 2025, aim to mitigate tuning and overhead issues, though widespread adoption remains ongoing. 
While modern self-tuning algorithms like CoDel mitigate some tuning issues, broader adoption remains limited by these interoperability barriers. AQM performance is also sensitive to varying traffic patterns, such as bursts or self-similar flows, where queue dynamics can lead to instability if parameters are not adapted to the degree of traffic predictability. In multi-bottleneck scenarios, AQM can exacerbate unfairness among flows with differing round-trip times, as shorter-RTT flows receive disproportionate bandwidth due to more frequent congestion signals, reducing overall equity compared to simple drop-tail queuing.

Evaluation and deployment

Simulation methods

Simulation methods for evaluating active queue management (AQM) rely on discrete-event simulators to replicate packet-level interactions in virtual networks, allowing researchers to test algorithms under controlled conditions without real hardware. These approaches enable the assessment of AQM behaviors in scenarios ranging from simple bottlenecks to complex topologies, focusing on metrics such as queue length and response to varying traffic loads. NS-2 and NS-3 are widely adopted platforms for simulation of AQM, offering extensible modules for implementing protocols like TCP and queue disciplines including RED and CoDel. NS-3, in particular, supports high-fidelity modeling of modern networks, with built-in support for AQM evaluation through its traffic control library. For example, studies using NS-3 have compared AQM schemes in large-scale settings, reporting improvements in average delay and packet loss rates under hybrid TCP/UDP traffic. NS-2 remains prevalent for legacy analyses, such as cable modem simulations incorporating AQM to mitigate bufferbloat. OMNeT++ serves as a modular framework for simulating large-scale topologies, leveraging its component-based architecture to model intricate AQM interactions across distributed systems. It has been employed to compare multiple AQM policies, demonstrating differences in throughput and fairness for competing flows in event-driven environments. Prominent studies use specialized NS-2-based simulation environments to evaluate RED and SFB under denial-of-service attacks, quantifying impacts on throughput, delay, and drop rates to highlight robustness. Such evaluations often reveal SFB's superior isolation of malicious flows compared to RED, with drop rates reduced by up to 50% in attack scenarios. Common methodologies incorporate traffic generators to emulate diverse flows, frequently using the dumbbell topology—a pair of edge routers connected by a bottleneck link—to isolate AQM effects at congestion points. 
Parameter sweeps vary AQM settings, such as drop probabilities or queue thresholds, to assess sensitivity and identify optimal configurations for metrics like queue oscillation. Simulation validation involves cross-referencing outputs with real network traces to verify realism, often through frameworks that automate reproducible experiments on AQM and congestion control. Fluid models complement discrete simulations by approximating aggregate queue dynamics via the differential equation \frac{dq(t)}{dt} = \lambda(t) - \mu(t) - d(t), where q(t) denotes queue length, \lambda(t) the arrival rate, \mu(t) the service rate, and d(t) the drop rate, providing insights into stability without packet-level detail.
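The fluid model above can be integrated numerically with a simple Euler step. A sketch, with illustrative constant rates (a 2 pkt/s overload of which the AQM drops 1 pkt/s, so the queue grows at 1 pkt/s):

```python
def fluid_queue(lam, mu, drop, q0=0.0, dt=0.01, t_end=10.0):
    """Euler-integrate dq/dt = lambda(t) - mu(t) - d(t), clamped at q >= 0,
    returning the (time, queue-length) trajectory."""
    q, t, trace = q0, 0.0, []
    while t < t_end:
        dq = lam(t) - mu(t) - drop(t)
        q = max(0.0, q + dq * dt)
        trace.append((t, q))
        t += dt
    return trace

# arrivals 12 pkt/s, service 10 pkt/s, AQM drops 1 pkt/s -> q grows at 1 pkt/s
trace = fluid_queue(lam=lambda t: 12.0, mu=lambda t: 10.0, drop=lambda t: 1.0)
```

After 10 simulated seconds the queue reaches roughly 10 packets, matching the net rate imbalance—exactly the kind of stability insight fluid models provide without packet-level detail.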

Real-world implementations and recent advances

Active queue management (AQM) techniques have seen widespread deployment across operating systems, network hardware, and consumer devices to mitigate bufferbloat and improve latency. In Linux, the traffic control (tc) subsystem has supported FQ-CoDel as a core queuing discipline since kernel version 3.11 in 2013, with fq_codel becoming the default queuing discipline in many distributions starting from kernel 4.12 in 2017, enabling easy configuration for servers and routers. Enterprise equipment from Cisco implements Proportional Integral controller Enhanced (PIE) AQM, particularly in cable modem termination systems (CMTS), where it has been standardized for low-latency operation since RFC 8034 in 2017. Juniper Networks routers support Random Early Detection (RED) as a foundational AQM mechanism through configurable drop profiles, allowing early packet drops to prevent congestion in high-throughput environments. For home networks, OpenWrt firmware integrates CAKE (Common Applications Kept Enhanced) AQM via its Smart Queue Management (SQM) system, available since version 18.06 in 2018, providing bandwidth shaping and latency reduction for consumer routers. Recent IETF efforts following RFC 7567 (2015) have advanced AQM integration with congestion control, notably through Low Latency, Low Loss, Scalable throughput (L4S) in RFC 9332 (2022), which employs a dual-queue coupled AQM to separate classic and scalable flows, reducing queue delays to sub-millisecond levels while preserving high throughput. Google's BBRv3 congestion control algorithm, released in 2023, demonstrates enhanced synergy with AQMs like FQ-CoDel, achieving better fairness, faster convergence, and lower flow completion times in wireless and wired networks by pacing packets to avoid buffer-induced losses. Advances in self-tuning AQMs leverage machine learning for adaptive parameter adjustment, as explored in 2024 surveys of ML-based algorithms that dynamically respond to traffic patterns without manual configuration, improving robustness in variable environments. Case studies highlight AQM's impact in real networks.
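PIE's core idea, mentioned above, is a proportional-integral controller on queuing delay rather than queue length. A simplified sketch of one update step (constants loosely follow the RFC 8033 defaults; the RFC's auto-tuning of alpha/beta by probability range and its burst allowance are omitted):

```python
def pie_update(p, delay, delay_old, target=0.015, alpha=0.125, beta=1.25):
    """One PIE drop-probability update: the alpha term reacts to the
    error from the target delay (here 15 ms), the beta term to the
    delay trend; the result is clamped to a valid probability."""
    p += alpha * (delay - target) + beta * (delay - delay_old)
    return min(1.0, max(0.0, p))
```

A delay above target and rising pushes the drop probability up; a delay below target and falling pulls it back down, giving the lightweight feedback loop that makes PIE suitable for high-speed links.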
The Bufferbloat project has driven deployments of FQ-CoDel and CAKE in home gateways, reducing latency under load from hundreds of milliseconds to below 25 ms, as evidenced by widespread adoption in OpenWrt and community testing tools that measure bufferbloat severity. Comcast's nationwide rollout of DOCSIS-PIE AQM in 2021 across its cable infrastructure achieved a 90% reduction in working latency for millions of users, validating its effectiveness in ISP-scale environments through preemptive queue management. In 5G networks, support for Ultra-Reliable Low-Latency Communication (URLLC) incorporates ML-driven AQM in disaggregated radio access network architectures, minimizing delays for industrial applications by adapting to varying fronthaul link conditions, as demonstrated in 2025 frameworks targeting sub-1 ms end-to-end latency. Trends indicate a shift toward self-tuning AQMs in diverse deployments, with increasing integration in ISPs and edge networks to handle heterogeneous traffic, though adoption in cloud providers remains driven by specific low-latency services rather than universal metrics.
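The FQ-CoDel gateways described above all build on CoDel's drop scheduling: once the sojourn time has exceeded the target for a full interval, drops are spaced at interval divided by the square root of the drop count, so the drop rate ramps up the longer congestion persists. A minimal sketch of that control law (the 100 ms interval is CoDel's well-known default; state management around entering and leaving drop mode is omitted):

```python
from math import sqrt

def codel_next_drop(now, count, interval=0.1):
    """Schedule the next drop in CoDel's dropping state: the gap between
    drops shrinks as interval / sqrt(count), where count is the number
    of drops since entering the dropping state."""
    return now + interval / sqrt(count)

# successive drop gaps while congestion persists: 100 ms, ~70.7 ms, ~57.7 ms, 50 ms
gaps = [codel_next_drop(0.0, n) for n in (1, 2, 3, 4)]
```

This square-root schedule is what lets CoDel control delay without manual tuning: light congestion sees rare drops, sustained congestion sees steadily more frequent ones.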
