
Network scheduler

A network scheduler, also known as a packet scheduler, is a critical component in computer networking devices such as routers and switches that determines the order and timing for transmitting packets from output queues to manage bandwidth allocation, ensure fairness among flows, and deliver quality-of-service (QoS) guarantees. By reordering packets based on predefined policies, it addresses congestion, prioritizes traffic types (e.g., voice over data), and prevents issues like starvation, where certain flows are indefinitely delayed. This mechanism is essential in shared networks where multiple data streams compete for limited resources, enabling efficient resource utilization and performance optimization. Network schedulers operate within the broader framework of QoS architectures, integrating with traffic classification, marking, and queuing disciplines to enforce service-level agreements.

Common algorithms include first-in-first-out (FIFO), which processes packets in arrival order but can lead to unfairness during bursts; weighted fair queuing (WFQ), which approximates fluid fair sharing by assigning weights to flows for proportional bandwidth distribution; and Class-Based Queuing (CBQ), which groups traffic into classes for hierarchical scheduling. More advanced variants, such as Deficit Weighted Round Robin (DWRR), enhance fairness by accounting for packet size variations, while recent programmable approaches allow dynamic algorithm deployment in software-defined networks.

The evolution of network schedulers traces back to early integrated services proposals in the 1990s, aiming to support diverse traffic in networks beyond best-effort delivery. Today, they play a pivotal role in modern environments like data centers and 5G networks, where low-latency and high-throughput demands drive innovations in scalable, hardware-efficient designs. Challenges include balancing complexity with speed on high-speed links (e.g., 100 Gbps+), and ongoing research focuses on universal schedulers that approximate optimal performance across scenarios without prior knowledge of traffic patterns.

Fundamentals

Definition and Purpose

A network scheduler, often referred to as a packet scheduler, serves as an arbiter in packet-switched networks, determining the order and timing of packet transmission from queues to output interfaces in order to manage bandwidth efficiently. It operates primarily at network nodes such as routers and switches, where it handles outgoing packets aggregated from multiple input queues, selecting which packets to forward next based on configured policies. This process ensures that network resources like link bandwidth are allocated effectively amid varying demands. The core purpose of a scheduler is to deliver quality of service (QoS) by enabling the prioritization of traffic classes, such as assigning higher precedence to real-time applications like voice over less urgent data transfers, thereby minimizing latency and jitter for critical flows. By regulating packet dispatch, it prevents network overload and congestion, which could otherwise lead to packet loss or degraded performance during peak loads. Additionally, it promotes fair bandwidth sharing among competing users or flows, ensuring equitable access to limited link capacity. To illustrate, a basic First-In-First-Out (FIFO) queuing approach transmits packets strictly in arrival order without differentiation, which can result in unfair treatment where bursty traffic dominates and starves delay-sensitive streams. In contrast, a network scheduler introduces advanced mechanisms to enforce QoS policies, dynamically adjusting transmission to balance competing needs and maintain overall network stability.
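The contrast between undifferentiated FIFO service and priority-aware scheduling can be sketched in a few lines of Python (an illustrative toy model; the class names and packet labels are invented for this example):

```python
from collections import deque
import heapq

class FifoScheduler:
    """Transmits packets strictly in arrival order, with no differentiation."""
    def __init__(self):
        self.queue = deque()
    def enqueue(self, packet):
        self.queue.append(packet)
    def dequeue(self):
        return self.queue.popleft() if self.queue else None

class PriorityScheduler:
    """Always dequeues the highest-priority packet (lower number = higher priority)."""
    def __init__(self):
        self.heap = []
        self.seq = 0  # tie-breaker preserves arrival order within a priority class
    def enqueue(self, packet, priority):
        heapq.heappush(self.heap, (priority, self.seq, packet))
        self.seq += 1
    def dequeue(self):
        return heapq.heappop(self.heap)[2] if self.heap else None

fifo, prio = FifoScheduler(), PriorityScheduler()
for name, pri in [("bulk-1", 2), ("voip-1", 0), ("bulk-2", 2), ("voip-2", 0)]:
    fifo.enqueue(name)
    prio.enqueue(name, pri)

print([fifo.dequeue() for _ in range(4)])  # → ['bulk-1', 'voip-1', 'bulk-2', 'voip-2']
print([prio.dequeue() for _ in range(4)])  # → ['voip-1', 'voip-2', 'bulk-1', 'bulk-2']
```

Under contention the priority scheduler dispatches both voice packets before any bulk traffic, while FIFO interleaves them in arrival order, which is exactly the unfairness toward delay-sensitive streams described above.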

Historical Development

The origins of network schedulers trace back to the 1970s with the development of the ARPANET, the precursor to the modern Internet, where early packet-switched networks relied on simple queuing mechanisms to manage data transmission across Interface Message Processors (IMPs). These basic schedulers were designed to handle initial congestion issues through rudimentary flow control and backpressure techniques, as network traffic grew from the first connections in 1969 to broader adoption by the mid-1970s. By the late 1980s, the early Internet's expansion highlighted limitations of FIFO queuing, including unfair bandwidth allocation and vulnerability to congestion collapse, prompting foundational research into more equitable queuing strategies. The 1990s marked a pivotal shift toward quality-of-service (QoS)-aware scheduling, driven by Internet Engineering Task Force (IETF) efforts to support diverse traffic types beyond best-effort delivery. The Integrated Services (IntServ) model, outlined in RFC 1633 (1994), introduced per-flow resource reservations using protocols like RSVP to enable guaranteed bandwidth and delay bounds through advanced schedulers. Complementing this, the Differentiated Services (DiffServ) architecture, formalized in RFC 2475 (1998), emphasized scalable aggregation of traffic classes with edge-based marking and core-based scheduling to prioritize flows without per-flow state. A key milestone was the proposal of Weighted Fair Queuing (WFQ) in 1989 by Demers, Keshav, and Shenker, which approximated generalized processor sharing to provide weighted bandwidth allocation and isolation among flows, influencing subsequent QoS implementations. In the 2000s, the widespread deployment of broadband technologies like DSL and cable modems exacerbated issues with oversized buffers in routers, leading to the recognition of bufferbloat—a phenomenon where excessive queuing delays degraded interactive applications such as VoIP and gaming.
This awareness, gaining traction around 2010 through analyses by researchers like Jim Gettys, underscored the need for smarter scheduling to mitigate latency spikes under load, though solutions like active queue management (AQM) emerged gradually. The 2010s ushered in software-defined networking (SDN), which decoupled control and data planes to enable programmable schedulers via centralized controllers and protocols like OpenFlow, allowing dynamic reconfiguration for traffic engineering. This paradigm, gaining momentum post-2011 with NSF-funded projects like GENI, facilitated fine-grained scheduling in data centers and wide-area networks, addressing scalability limitations of traditional hardware-bound approaches. By the 2020s, integration of artificial intelligence (AI) and machine learning (ML) into network scheduling has become prominent, particularly for 5G and emerging 6G networks, where dynamic algorithms optimize resource allocation in real time. Seminal works, such as reinforcement learning-based schedulers in 5G-LENA simulations (2024), demonstrate AI's role in predicting traffic patterns and adapting to heterogeneous demands, enhancing efficiency in ultra-reliable low-latency scenarios. As of 2025, advancements emphasize AI-native architectures for 6G, incorporating continual learning to handle non-stationary environments and support AI-driven traffic slicing.

Terminology and Responsibilities

Key Terminology

In network scheduling, a queueing discipline (qdisc) refers to the algorithm or set of rules that governs how packets are managed in a queue, including mechanisms for enqueueing arriving packets, dequeuing packets for transmission, and handling congestion through dropping or marking. This discipline determines the order in which packets are served from the queue to the outgoing link, ensuring efficient link utilization while mitigating congestion. A key distinction exists between a packet scheduler and a traffic shaper. The packet scheduler selects and orders packets from one or more queues for transmission based on a service discipline, such as round-robin or weighted fair sharing, to allocate bandwidth among competing flows. In contrast, a traffic shaper enforces rate limits by delaying packets to conform to a specified profile, often smoothing bursty traffic using a non-work-conserving mechanism, whereas the scheduler focuses primarily on service ordering without inherent rate enforcement. Common metrics and policies in queue management include the queue occupancy, which measures the current length of a queue in terms of packets or bytes waiting to be transmitted, providing an indicator of congestion levels. The drop tail policy is a passive drop scheme where, upon queue overflow, the arriving packet is discarded at the end (tail) of the queue, potentially leading to synchronized rate reductions among TCP flows. For fair resource sharing, a virtual queue conceptualizes the hypothetical queue length for a flow if it were served continuously at its allocated fair share rate, enabling schedulers like weighted fair queuing to approximate ideal bit-by-bit service without maintaining separate physical queues for each flow. Schedulers are further classified as work-conserving or non-work-conserving. A work-conserving scheduler transmits packets whenever the outgoing link is idle and packets are available in any queue, maximizing link utilization without introducing artificial delays.
Non-work-conserving schedulers, by contrast, may idle the link even with pending packets to enforce timing constraints, such as in shaping or policing scenarios. Additional foundational terms include class-based queuing, which partitions traffic into multiple classes, each associated with a dedicated queue, allowing a scheduler to allocate bandwidth predictably among classes based on administrative policies for quality of service. The token bucket is a metering mechanism used in shaping and policing, where tokens are added to a bucket at a constant rate; packets can only be transmitted if sufficient tokens are available, permitting bursts up to the bucket depth while enforcing a long-term average rate. These terms collectively underpin the mechanisms for achieving quality of service and congestion control in packet-switched networks.
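The token bucket mechanism described above can be sketched as follows (a minimal model, assuming byte-denominated tokens and caller-supplied timestamps; real implementations track time internally and may queue non-conforming packets rather than reject them):

```python
class TokenBucket:
    """Token bucket meter: tokens accrue at `rate` per second up to `depth`;
    a packet of `size` bytes may be sent only if that many tokens are available."""
    def __init__(self, rate, depth):
        self.rate = rate          # token fill rate (bytes/second), the average rate
        self.depth = depth        # bucket capacity (maximum burst size in bytes)
        self.tokens = depth       # bucket starts full
        self.last = 0.0           # timestamp of the last update

    def allow(self, size, now):
        # Replenish tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True           # conforming: transmit
        return False              # non-conforming: delay (shaping) or drop (policing)

bucket = TokenBucket(rate=1000, depth=1500)   # 1000 B/s average, 1500 B burst
print(bucket.allow(1500, now=0.0))   # → True  (burst up to the bucket depth)
print(bucket.allow(500, now=0.1))    # → False (only 100 tokens accrued so far)
print(bucket.allow(500, now=0.5))    # → True  (500 tokens accrued by t=0.5)
```

The cap at `depth` is what bounds bursts, while the fill rate enforces the long-term average: a flow can momentarily exceed `rate`, but only by spending tokens saved during quieter periods.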

Core Responsibilities

A network scheduler plays a pivotal role in managing multiple queues to accommodate different traffic classes and enforce priorities among them. Packets are classified into distinct queues based on attributes such as priority levels, service types (e.g., voice versus bulk data), or user requirements, allowing the scheduler to dequeue and transmit higher-priority packets ahead of others during periods of contention. This mechanism supports differentiated service by allocating transmission opportunities proportionally to queue priorities, thereby ensuring that critical traffic meets its performance objectives without undue interference from lower-priority flows. Ensuring fairness in bandwidth allocation among competing flows is another core responsibility, designed to prevent any individual flow from monopolizing available resources and starving others. The scheduler monitors flow rates and adjusts transmission schedules to distribute bandwidth equitably, often using weighted or proportional sharing to maintain balance across diverse traffic sources. This fair allocation promotes efficient resource utilization and sustains consistent performance for all active sessions, mitigating issues like flow isolation failures in shared links. Network schedulers also handle congestion signaling and selective packet dropping to preserve system stability under load. When incoming traffic exceeds processing capacity, the scheduler detects queue buildup and either drops excess packets or marks them with congestion notifications, prompting senders to reduce their rates via mechanisms like Explicit Congestion Notification (ECN). These actions prevent buffer overflows and cascading delays, enabling the network to recover quickly and avoid widespread degradation. Integration with admission control represents a further essential duty, where the scheduler collaborates to reserve resources for approved flows and enforce their service bounds.
Prior to accepting new connections, admission control assesses available capacity against requested guarantees (e.g., bandwidth or delay limits), and the scheduler then implements these by shaping or policing the traffic of admitted flows to prevent over-subscription. This ensures predictable resource availability and upholds service-level agreements for real-time or mission-critical applications. Ultimately, network schedulers optimize key metrics including delay, jitter, and throughput to deliver reliable end-to-end performance. By sequencing packets to minimize queuing delays and variations in inter-arrival times (jitter), while maximizing sustained data rates (throughput), schedulers align transmission behavior with application needs, such as low-latency requirements for interactive sessions or high-volume transfers for bulk data. These optimizations directly enhance user experience and network efficiency across varied workloads.
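The mark-versus-drop decision on queue buildup can be illustrated with a toy model (purely illustrative; the fixed thresholds, dictionary-based packets, and the `handle_arrival` helper are invented for this sketch, and real AQMs use probabilistic or delay-based triggers rather than a hard threshold):

```python
def handle_arrival(queue, packet, capacity, ecn_threshold):
    """Toy congestion handling: enqueue under light load; past a threshold,
    mark ECN-capable packets (CE) instead of dropping; drop only on overflow."""
    if len(queue) >= capacity:
        return "dropped"                  # buffer full: tail drop
    if len(queue) >= ecn_threshold and packet.get("ect"):
        packet["ce"] = True               # signal congestion without packet loss
        queue.append(packet)
        return "marked"
    queue.append(packet)
    return "enqueued"

q = []
print(handle_arrival(q, {"ect": True}, capacity=4, ecn_threshold=0))   # → marked
print(handle_arrival(q, {"ect": False}, capacity=4, ecn_threshold=0))  # → enqueued
```

Marking lets ECN-capable senders reduce their rates just as a drop would, but without the retransmission that a loss forces, which is why schedulers prefer it when the endpoints support it.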

Scheduling Algorithms

Classification

Network scheduling algorithms can be classified along several key dimensions based on their design principles and objectives, such as resource utilization, prioritization strategies, grouping mechanisms, and scheduling granularity. These classifications help in selecting appropriate methods to meet goals like fairness in bandwidth allocation.

Work-Conserving vs. Non-Work-Conserving

Work-conserving schedulers are designed to never idle the output link as long as there are packets waiting to be served, thereby maximizing throughput by continuously processing available traffic. In contrast, non-work-conserving schedulers intentionally delay eligible packets even when the link is free, often to enforce timing constraints or reduce jitter. The primary advantage of work-conserving approaches is efficient link utilization, allowing best-effort traffic to fill idle periods, though they may lead to higher mean delays and lack jitter control. Non-work-conserving schedulers, however, offer benefits like reduced delay jitter, fewer required buffers, and penalties for misbehaving sources, at the cost of wasted bandwidth and increased complexity.

Strict Priority vs. Weighted Scheduling

Strict priority scheduling assigns absolute precedence to higher-priority queues, serving them exhaustively before lower ones, which is ideal for traffic requiring low latency, such as voice or video. However, it risks starving lower-priority traffic if high-priority queues remain non-empty. Weighted scheduling, on the other hand, allocates bandwidth proportionally to assigned weights across queues, enabling fair sharing while supporting varying service levels. This approach avoids starvation but requires knowledge of packet sizes for accuracy; variants such as deficit round robin achieve constant-time complexity suitable for high-speed networks.
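A packet-count weighted round robin, the simplest form of weighted scheduling, can be sketched as follows (illustrative only; counting packets rather than bytes is precisely the inaccuracy with variable packet sizes noted above, which deficit-based variants correct):

```python
from collections import deque

def weighted_round_robin(queues, weights, n_packets):
    """Serve queues in proportion to their weights (packet-based WRR).
    In each round, queue i may send up to weights[i] packets."""
    out = []
    while len(out) < n_packets and any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    out.append(q.popleft())
    return out

# Two hypothetical flows: A weighted 3, B weighted 1.
a = deque(f"A{i}" for i in range(6))
b = deque(f"B{i}" for i in range(6))
print(weighted_round_robin([a, b], [3, 1], 8))
# → ['A0', 'A1', 'A2', 'B0', 'A3', 'A4', 'A5', 'B1']
```

Flow A receives three transmission opportunities for every one given to B, so neither starves, but A's share of the link is three times larger, matching its weight.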

Class-Based vs. Flow-Based

Class-based scheduling groups packets into aggregate classes based on attributes like Differentiated Services Code Point (DSCP) values, treating them collectively for scalable, static QoS management in large networks. Flow-based scheduling, conversely, identifies and treats individual flows using criteria such as the 5-tuple (source/destination IP addresses, ports, and protocol), enabling fine-grained per-flow fairness but demanding dynamic state maintenance. Class-based methods, as in DiffServ, reduce overhead through aggregate handling, while flow-based approaches, like Fair Queueing (FQ), provide precise isolation at the expense of scalability.

Per-Packet vs. Per-Flow Scheduling

Per-packet scheduling decides the order of individual packets independently, often using simple rules like FIFO, which offers fine granularity but can lead to unfairness and reordering overhead. Per-flow scheduling, by contrast, applies policies to entire flows by maintaining state for each, ensuring in-order delivery and better fairness but increasing computational demands due to flow tracking. The trade-off involves per-packet's lower state requirements versus per-flow's higher memory and processing needs, especially in routers handling thousands of flows, where per-flow can require O(log n) operations without optimizations.
| Classification Dimension | Description | Examples |
|---|---|---|
| Work-Conserving | Maximizes throughput by avoiding link idleness | FIFO, WFQ |
| Non-Work-Conserving | Enforces delays for jitter control | Stop-and-wait variants |
| Strict Priority | Absolute precedence to high-priority queues | PQ |
| Weighted | Proportional bandwidth allocation | WRR, DRR |
| Class-Based | Aggregates packets by class | DiffServ PHBs |
| Flow-Based | Treats individual flows separately | FQ |
| Per-Packet | Schedules each packet independently | FIFO |
| Per-Flow | Schedules based on flow state | pFabric |

Specific Algorithms

Weighted Fair Queuing (WFQ) is a packet scheduling algorithm that approximates the behavior of a fluid flow system, allocating bandwidth to flows proportionally to their assigned weights while ensuring worst-case fairness guarantees. In WFQ, each packet is assigned a virtual finish time based on when it would complete service in a hypothetical Generalized Processor Sharing (GPS) system, enabling the scheduler to emulate bit-by-bit round-robin service among flows. The finish time F for a packet is calculated as F = S + \frac{L}{w \cdot R}, where S is the start time, L is the packet length, w is the flow's weight, and R is the link rate; packets are then dequeued in increasing order of these finish times. This mechanism provides isolation between flows, preventing any single flow from monopolizing bandwidth, and bounds the delay for high-priority flows even under adversarial conditions. WFQ was introduced as an improvement over earlier schemes to handle variable packet lengths efficiently. Deficit Round Robin (DRR) is a credit-based variant of round-robin scheduling designed to handle variable-length packets in a simple, low-complexity manner, making it suitable for high-speed routers. In DRR, flows are organized in a round-robin order, and each flow receives a quantum of service credits per round; a deficit counter tracks unused credits, allowing carry-over to subsequent rounds to compensate for short packets. The update rule for the deficit counter is \text{deficit} = \text{deficit} + \text{quantum} - L, where L is the size of the served packet; if the deficit is insufficient, the packet is postponed to the next round. This approach achieves near-ideal fairness with O(1) complexity per packet, outperforming earlier round-robin variants by reducing unfairness toward flows with small packets, without requiring per-flow virtual time calculations. DRR is particularly effective in scenarios with heterogeneous flow sizes, such as access networks.
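The DRR service loop described above can be sketched as follows (a simplified model with static flow lists and invented packet names; a real implementation maintains an active list and serves continuously rather than for a fixed number of rounds):

```python
from collections import deque

def deficit_round_robin(flows, quantum, rounds):
    """DRR: each backlogged flow earns `quantum` bytes of credit per round and
    may send head-of-line packets while its deficit counter covers their size."""
    deficits = [0] * len(flows)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(flows):
            if not q:
                deficits[i] = 0      # idle flows accumulate no credit
                continue
            deficits[i] += quantum
            while q and q[0][1] <= deficits[i]:
                name, size = q.popleft()
                deficits[i] -= size  # deficit = deficit + quantum - L, applied per packet
                sent.append(name)
    return sent

# Two hypothetical flows with variable-length packets as (name, bytes).
f1 = deque([("f1-a", 900), ("f1-b", 900)])
f2 = deque([("f2-a", 300), ("f2-b", 300), ("f2-c", 300)])
print(deficit_round_robin([f1, f2], quantum=500, rounds=2))
# → ['f2-a', 'f1-a', 'f2-b', 'f2-c']
```

Note how flow 1's large packet must wait a round until its carried-over deficit covers 900 bytes, while flow 2's small packets proceed; over time each flow receives roughly `quantum` bytes of service per round regardless of its packet sizes.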
Stochastic Fair Queuing (SFQ) provides an efficient approximation of fair queuing by using hashing to map flows to a fixed number of queues, thereby isolating traffic without maintaining state for every individual flow. Packets from the same flow are directed to the same queue via a hash function on flow identifiers (e.g., source/destination IP addresses and ports), and queues are served in round-robin fashion with equal quanta, approximating fair sharing through probabilistic distribution. SFQ ensures that no flow receives more than its fair share on average, with worst-case deviations bounded by the number of queues, typically achieving fairness within a factor of 2-3 compared to ideal fair queuing. This algorithm balances simplicity and performance, making it ideal for core routers handling thousands of short-lived flows. Hierarchical Token Bucket (HTB) combines shaping and scheduling through a class-based structure, where bandwidth is allocated using token buckets organized in a parent-child hierarchy to support complex policy enforcement. Each class has a committed information rate (CIR) for guaranteed bandwidth and a peak rate for borrowing excess capacity from siblings or parents, with tokens added at configurable rates to buckets that regulate packet transmission. The hierarchy allows child classes to share or borrow unused bandwidth from parent classes, enabling fine-grained control such as prioritizing VoIP over bulk data while shaping overall link utilization. HTB extends the basic token bucket by resolving borrowing ambiguities through its class hierarchy, providing both rate guarantees and flexibility in bandwidth provisioning. It is widely used in classful queuing systems for home and ISP networks.
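SFQ's hash-based flow-to-queue mapping can be illustrated as follows (a sketch; the CRC32 hash and the `perturbation` parameter stand in for the kernel's internal hash and its periodic re-seeding, which limits how long colliding flows stay stuck together):

```python
import zlib
from collections import deque

def sfq_enqueue(queues, packet, five_tuple, perturbation=0):
    """Map a flow's 5-tuple to one of a small fixed set of queues (SFQ-style).
    Packets of the same flow always land in the same queue; distinct flows
    are spread probabilistically, approximating per-flow isolation."""
    key = repr((five_tuple, perturbation)).encode()
    idx = zlib.crc32(key) % len(queues)
    queues[idx].append(packet)
    return idx

queues = [deque() for _ in range(8)]
flow = ("10.0.0.1", "10.0.0.2", 5000, 80, "tcp")   # hypothetical 5-tuple
a = sfq_enqueue(queues, "pkt-1", flow)
b = sfq_enqueue(queues, "pkt-2", flow)
print(a == b)   # → True: the same flow always maps to the same queue
```

The queues themselves are then served round-robin with equal quanta; fairness is only statistical because two distinct flows can hash to the same queue, which is why changing the perturbation periodically matters.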
| Algorithm | Complexity (per packet) | Fairness Guarantee | Latency Characteristics |
|---|---|---|---|
| WFQ | O(log N) for N flows | Worst-case bounded delay, proportional to weight | Low for weighted flows; emulates GPS closely, but higher for small packets due to timestamp overhead |
| DRR | O(1) | Near-ideal, with unfairness bounded by O(max L / min quantum), where L is the maximum packet size | Low and predictable for variable-length packets; deficit carry-over reduces the short-packet penalty |
| SFQ | O(1) | Statistical; average fairness within a factor of 3, with minor deviations from hash collisions | Moderate; equalizes latency across flows but offers no strict bounds for individual flows |
| HTB | O(log C) for C classes | Hierarchical proportional sharing; borrowing ensures no underutilization | Variable based on class hierarchy; low for high-priority classes, higher for borrowers under contention |

Challenges

Bufferbloat

Bufferbloat refers to the excessive buffering of packets within network devices, which leads to high latency and jitter, as well as reduced overall throughput, even when the network is not heavily loaded. This phenomenon occurs because oversized buffers allow queues to grow persistently without timely feedback to senders, creating "standing queues" where the queue fills and drains at the same rate, maintaining a constant high delay. The primary causes of bufferbloat include overprovisioned buffers in consumer-grade equipment such as home routers and cable/DSL modems, where manufacturers add large memory allocations—often in the megabyte range—to prevent packet drops and maximize throughput under bursty traffic. Additionally, many network operators and device configurations ignore congestion signals from transport protocols like TCP, allowing buffers to fill unchecked and delaying the detection of congestion. Poor scheduling algorithms can exacerbate this by failing to prioritize low-latency traffic, further prolonging queue buildup. The effects of bufferbloat are particularly detrimental to latency-sensitive applications, such as online gaming and voice over IP (VoIP), where increased round-trip time (RTT)—for example, latencies exceeding 1 second compared to typical sub-150 ms thresholds—results in poor responsiveness and jitter that disrupts interactions. Even under low load, these standing queues inflate end-to-end delays, making networks feel sluggish and impairing overall performance across connections. Bufferbloat was identified in the late 2000s by Jim Gettys during troubleshooting of his home network, where he observed extreme latency spikes attributable to excessive buffering in DSL equipment, a problem also noted earlier in cellular networks. It became widespread in consumer networks during the 2010s as high-speed internet proliferated and devices incorporated larger, cheaper memory without corresponding queue management controls. This issue gained formal recognition in standards like RFC 7567, which highlights excessive buffering as a key driver of high latency in modern networks.
One common symptom of bufferbloat is the presence of standing queues, detectable through network diagnostics showing persistent high RTT without corresponding packet loss. Tools like Flent, a flexible network tester, are used to measure bufferbloat by simulating loaded conditions and graphing latency variations, such as during TCP stream tests combined with ping measurements.

Active Queue Management

Active queue management (AQM) encompasses mechanisms in network routers that proactively drop or mark packets before queues become full, thereby signaling endpoint senders to reduce their transmission rates and preventing excessive congestion. These techniques address bufferbloat, where large unmanaged buffers lead to high latency and poor performance for delay-sensitive applications. One seminal AQM algorithm is Random Early Detection (RED), introduced in 1993, which uses a probability-based approach to drop packets based on the average queue size to avoid global synchronization among flows. In RED, the gateway maintains an exponentially weighted moving average of the instantaneous queue size, denoted avg, and compares it against minimum (min_{th}) and maximum (max_{th}) thresholds. No drops occur if avg < min_{th}; if avg > max_{th}, packets are dropped with probability 1; otherwise, the base drop probability p_b is calculated linearly as p_b = max_p \cdot \frac{avg - min_{th}}{max_{th} - min_{th}}, where max_p is the maximum drop probability (typically 0.02). To account for bursty traffic, the actual drop probability p_a is adjusted as p_a = p_b / (1 - count \cdot p_b), with count tracking consecutive undropped packets since the last drop. Controlled Delay (CoDel), proposed in 2012, shifts focus to controlling sojourn time—the delay packets experience in the queue—rather than queue length, aiming for low latency without manual tuning. CoDel monitors the minimum sojourn time over a recent interval and drops the packet at the head of the queue if this minimum exceeds a target delay (default 5 ms) for longer than an interval (default 100 ms), providing endpoints sufficient time to react. It handles bursts effectively by ignoring transient high delays (e.g., when the queue has fewer than one MTU of data) and resetting the interval dynamically during idle periods or after drops, ensuring high utilization during short overloads while penalizing persistent queues.
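RED's drop-probability computation can be sketched directly from the formulas above (illustrative threshold values; in a full implementation `count` resets to zero after each drop and avg is updated per arrival):

```python
def red_drop_probability(avg, min_th, max_th, max_p, count):
    """RED drop probability for the current average queue size `avg`.
    `count` is the number of packets enqueued since the last drop."""
    if avg < min_th:
        return 0.0                      # below the minimum threshold: never drop
    if avg >= max_th:
        return 1.0                      # above the maximum threshold: always drop
    # Base probability rises linearly between the thresholds.
    p_b = max_p * (avg - min_th) / (max_th - min_th)
    # Scaling by count spreads drops evenly, avoiding bias against bursts.
    return p_b / (1 - count * p_b)

print(red_drop_probability(avg=4, min_th=5, max_th=15, max_p=0.02, count=0))   # → 0.0
print(red_drop_probability(avg=10, min_th=5, max_th=15, max_p=0.02, count=0))  # → 0.01
print(red_drop_probability(avg=20, min_th=5, max_th=15, max_p=0.02, count=0))  # → 1.0
```

Because the probability depends on the averaged rather than instantaneous queue size, short bursts pass through unharmed while sustained buildup is signaled early, before the buffer actually overflows.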
Proportional Integral controller Enhanced (PIE), standardized in 2017, employs a control-theoretic approach to mark or drop packets based on queueing delay and link utilization, targeting an average latency of 15 ms with minimal configuration. Every 15 ms, PIE updates the drop probability using a proportional-integral controller: the change incorporates the deviation of current queue delay from the target (proportional term, scaled by \alpha = 0.125) plus the accumulated error (integral term, scaled by \beta = 1.25), further adjusted by heuristics (e.g., reduced aggressiveness when utilization is below 0.8 to maintain throughput). This "no-knobs" design autotunes parameters, making it suitable for diverse link speeds. Flow Queue CoDel (FQ-CoDel), specified in RFC 8290 in 2017, combines CoDel's AQM with per-flow queuing to combat bufferbloat more effectively in scenarios with multiple concurrent flows. By isolating traffic into flow-specific queues and applying CoDel independently, FQ-CoDel prevents any single flow from dominating the link, ensures fairness, and maintains low latency even under bursty or unfair conditions. It is widely implemented in systems like the Linux traffic control subsystem and recommended for home and enterprise routers to mitigate bufferbloat. AQM algorithms often integrate with Explicit Congestion Notification (ECN), standardized in RFC 3168, which enables routers to mark packets in the IP header (using the CE codepoint) instead of dropping them during early congestion detection, allowing endpoints to reduce rates without retransmission overhead. For ECN-capable flows (indicated by ECT bits), RED, CoDel, and PIE preferentially mark rather than drop when thresholds are met, preserving packet delivery while signaling congestion equivalently to a drop. Comparisons of these AQMs highlight trade-offs in effectiveness and deployment: RED, while foundational, requires careful parameter tuning (e.g., thresholds and max_p) for stability across traffic mixes, limiting its widespread adoption due to sensitivity to configuration.
CoDel excels in low-latency scenarios with its parameter-free operation and superior performance over RED in reducing delay for TCP variants under varying loads, though it may underperform during sustained high utilization. PIE offers balanced effectiveness with easier deployment than RED—via its lightweight PI control and ECN support—maintaining low delays and high throughput better than CoDel in overload conditions, as evidenced by simulations showing retained performance under increasing loads. FQ-CoDel addresses limitations of standalone CoDel by adding flow isolation, providing better fairness and latency control in multi-flow environments compared to RED or PIE alone, and is favored in practical deployments for its robustness against bufferbloat. Overall, CoDel and PIE are more deployable in modern networks due to autotuning, with PIE's utilization awareness providing robustness for cable network edges.
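PIE's periodic probability update can be sketched from the control law described above (a simplification: the standardized algorithm additionally auto-scales the adjustment based on the current probability level, which is omitted here):

```python
def pie_update(p, qdelay, qdelay_old, target=0.015, alpha=0.125, beta=1.25):
    """One PIE update interval: adjust the drop probability by the deviation of
    queueing delay from the target (proportional term) plus the delay trend
    since the last interval (integral-like term). Delays are in seconds."""
    p += alpha * (qdelay - target) + beta * (qdelay - qdelay_old)
    return min(max(p, 0.0), 1.0)       # clamp to a valid probability

p = 0.0
# Delay is 30 ms: above the 15 ms target and rising since the last interval.
p = pie_update(p, qdelay=0.030, qdelay_old=0.015)
print(round(p, 6))   # → 0.020625
```

The proportional term reacts to how far the delay is from the target, while the trend term reacts to whether the queue is growing or draining, letting PIE damp oscillations that a purely threshold-based scheme would exhibit.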

Implementations

Linux Kernel

The Linux kernel's network scheduling is primarily managed through the Traffic Control (TC) subsystem, which provides a flexible, hierarchical framework for queuing disciplines (qdiscs) to shape, schedule, and police network traffic. Introduced in kernel version 2.1 in 1996 by Alexey Kuznetsov, this subsystem allows for the attachment of qdiscs to network interfaces, enabling fine-grained control over packet transmission rates and priorities. The hierarchical structure supports classful qdiscs, where traffic can be classified into multiple classes, each with its own child qdisc, facilitating complex topologies for bandwidth allocation and delay management. By default, the Linux kernel assigns the pfifo_fast qdisc to each network interface upon creation, which implements a simple three-band priority FIFO queue based on the Type of Service (ToS) bits in packet headers, prioritizing interactive traffic like SSH over bulk transfers. Since kernel version 4.12 in 2017, fq_codel has become the default qdisc for new interfaces, combining fair queuing with the Controlled Delay (CoDel) active queue management algorithm to mitigate bufferbloat by dropping packets from flows exceeding a target delay threshold, typically 5 ms. FQ-CoDel was first integrated into the kernel in version 3.5 in 2012, marking a significant advancement in default buffer management. Configuration of the TC subsystem is performed using the tc command-line utility from the iproute2 package, which allows users to attach qdiscs to interfaces via commands like tc qdisc add dev eth0 root handle 1: htb. For classful qdiscs, classes can be created to subdivide bandwidth—e.g., tc class add dev eth0 parent 1: classid 1:1 htb rate 1mbit—and filters direct packets to specific classes based on criteria such as IP addresses, ports, or protocols, using classifiers like u32 or flow.
This setup supports hierarchical token bucket (HTB) for bandwidth sharing among classes with borrow and overlimit behaviors, stochastic fairness queuing (SFQ) for approximating fairness across flows with low overhead, and token bucket filter (TBF) for rate limiting by accumulating tokens at a specified rate to enforce peak and committed information rates. Over time, the TC subsystem has evolved to incorporate modern congestion control mechanisms, with FQ-CoDel's adoption as the default in 2017 enhancing latency-sensitive applications out of the box. Additionally, support for the Bottleneck Bandwidth and Round-trip propagation time (BBR) congestion control algorithm was added in kernel version 4.9 in late 2016, allowing senders to estimate available bandwidth and RTT for proactive rate adjustment without relying solely on loss signals. This integration complements qdiscs like fq_codel, improving throughput in lossy or variable networks when enabled via settings such as net.ipv4.tcp_congestion_control = bbr.

BSD Derivatives

The ALTQ (Alternate Queuing) framework serves as the primary network scheduling system in several BSD derivatives, enabling advanced packet queuing and traffic shaping on network interfaces. Developed by Kenjiro Cho at Sony Computer Science Laboratories, ALTQ originated as an extension to the BSD kernel to support diverse queuing disciplines for managing outgoing traffic. It was first demonstrated in a 1998 USENIX paper and subsequently integrated into FreeBSD starting with version 5.3-RELEASE in 2004, imported from the KAME project snapshot. ALTQ is also available in NetBSD and OpenBSD, providing a unified approach to quality-of-service mechanisms across these systems. Key queue disciplines in ALTQ include Class-Based Queuing (CBQ), which allocates bandwidth to classes of traffic based on administrative policies, allowing flexible division of link capacity among competing flows. Hierarchical Fair Service Curve (HFSC) offers link-sharing and delay equalization through service curves that guarantee bandwidth and delay bounds for hierarchical classes, making it suitable for real-time applications. Priority Queueing (PRIQ) implements strict priority scheduling across multiple queues, serving higher-priority traffic first to minimize latency for critical packets. These disciplines are configured via kernel options and interface attachments, with filters directing packets to appropriate queues. In FreeBSD, ALTQ integrates closely with the PF (Packet Filter) firewall, imported alongside ALTQ support, enabling rule-based packet classification and shaping directly within firewall configurations. This combination allows administrators to define queues in pf.conf and assign traffic via match rules, supporting bandwidth limiting and prioritization without external tools. OpenBSD initially adopted ALTQ with PF for similar firewall-integrated shaping but enhanced it over time; by 2018, following ALTQ's removal in OpenBSD 5.6 (2014), the PF subsystem incorporated built-in queuing with CoDel (Controlled Delay) as an active queue management variant, including elements to combat bufferbloat while maintaining compatibility with PRIQ and HFSC semantics.
These features differ in syntax from Linux's tc but achieve comparable policy-based scheduling. ALTQ and its integrations find use in high-performance routing scenarios, such as firewalls in enterprise networks and embedded devices like home routers, where precise traffic control ensures reliable service for VoIP, streaming, and bulk transfers under varying loads. For instance, in pfSense (a FreeBSD derivative), ALTQ-based shaping handles multi-gigabit links with low overhead, prioritizing latency-sensitive flows in constrained environments.

Advanced Systems

Software-defined networking (SDN) controllers like ONOS and OpenDaylight facilitate advanced network scheduling through the OpenFlow protocol, enabling programmable traffic management in distributed environments. Emerging around 2011 with the maturation of the OpenFlow specifications, these controllers allow centralized control over switch-level scheduling, supporting dynamic rules for bandwidth allocation and priority queuing across multi-vendor networks. For instance, OpenDaylight's modular architecture abstracts southbound APIs to implement custom scheduling policies, such as traffic engineering and load balancing, in data center fabrics. ONOS, developed by the Open Networking Lab, extends this to carrier-grade networks with intent-based scheduling, where high-level policies are translated into low-level instructions for scalable orchestration.

Hardware implementations of network schedulers in ASICs and FPGAs provide high-speed, low-latency processing in enterprise switches, often integrating weighted fair queuing (WFQ) for QoS enforcement. Cisco's Silicon One ASICs, deployed in Catalyst 9000 series switches, embed WFQ schedulers to allocate bandwidth proportionally across traffic classes, supporting terabit-scale forwarding with microsecond precision. These hardware-accelerated designs handle complex queuing hierarchies, such as strict priority over WFQ, in DOCSIS cable modem termination systems (CMTS), ensuring service level agreements for upstream/downstream flows. FPGA-based prototypes further enable reconfigurable scheduling, approximating WFQ with O(1) complexity for programmable data planes in next-generation switches.

Scalability challenges in network scheduling for massive IoT deployments arise from the rapid growth in device density projected for 5G/6G, demanding efficient handling of millions of simultaneous connections. In these paradigms, schedulers must contend with ultra-reliable low-latency communications (URLLC) alongside massive machine-type communications (mMTC), where traditional centralized algorithms suffer from signaling overhead and computational bottlenecks.
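The WFQ behavior that such hardware schedulers implement can be sketched in software. In this minimal Python sketch, each packet is stamped with a virtual finish time and packets are transmitted in finish-time order; the flow names, packet lengths, and the simplified virtual-time update are illustrative assumptions, not any vendor's implementation:

```python
import heapq

class WFQScheduler:
    """Sketch of packet-by-packet weighted fair queuing (WFQ).

    A packet of length L arriving on a flow with weight w is stamped
    with a virtual finish time F = max(V, F_prev) + L / w, where
    F_prev is the flow's previous finish time, and packets are sent
    in increasing F order.  The virtual time V is approximated here
    by the finish time of the last packet transmitted, a common
    simplification of the fluid GPS reference model.
    """

    def __init__(self):
        self.heap = []          # (finish_time, seq, flow, length)
        self.last_finish = {}   # per-flow F_prev
        self.vtime = 0.0        # approximate virtual time V
        self.seq = 0            # tie-breaker for equal finish times

    def enqueue(self, flow, length, weight):
        start = max(self.vtime, self.last_finish.get(flow, 0.0))
        finish = start + length / weight
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow, length))
        self.seq += 1

    def dequeue(self):
        finish, _, flow, length = heapq.heappop(self.heap)
        self.vtime = finish     # advance the approximate virtual time
        return flow, length
```

With equal packet sizes, a flow of weight 2 receives roughly twice the transmission slots of a flow of weight 1, which is the proportional bandwidth sharing that hardware WFQ enforces per class.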
Energy-efficient frameworks, such as sustainable throughput-maximization schemes for massive access, employ greedy approximations to prioritize grant-free transmissions, achieving near-optimal throughput while limiting power consumption in dense scenarios. These challenges underscore the need for distributed, AI-driven schedulers that maintain fairness and reliability across space-air-ground integrated networks.
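A greedy approximation of the kind described above can be sketched as a toy selection loop that admits users by rate-per-watt until a power budget is exhausted. The per-user rates, power costs, and budget here are hypothetical; real 5G/6G schedulers operate on live channel-state information rather than fixed numbers:

```python
def greedy_schedule(rates, costs, power_budget):
    """Toy greedy approximation of energy-efficient user scheduling.

    rates[i]  -- hypothetical achievable rate for user i
    costs[i]  -- hypothetical power cost of admitting user i
    Admits users in decreasing rate-per-watt order while the total
    power stays within power_budget; returns the chosen user indices
    and the power actually consumed.
    """
    order = sorted(range(len(rates)),
                   key=lambda i: rates[i] / costs[i], reverse=True)
    chosen, used = [], 0.0
    for i in order:
        if used + costs[i] <= power_budget:
            chosen.append(i)
            used += costs[i]
    return chosen, used
```

Like other greedy knapsack-style heuristics, this runs in O(n log n) time and is near-optimal in dense scenarios where many users have similar efficiency, which is why such approximations appear in massive-access scheduling proposals.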
