
Out-of-order delivery

Out-of-order delivery, in the context of computer networking, occurs when data packets arrive at their destination in a different sequence than the order in which they were originally sent, violating the expected monotonic increase in sequence numbers. This phenomenon arises in IP-based networks, which provide best-effort delivery without guarantees of packet ordering, as packets may take multiple paths with varying delays or be processed in parallel. Common causes include route changes leading to differing path lengths, load balancing across links, layer-2 retransmissions, or buffer management issues in routers. The effects of out-of-order delivery can degrade performance, particularly for transport protocols like TCP, which interpret significant reordering as potential packet loss, triggering unnecessary retransmissions and reducing throughput. For instance, TCP's default duplicate acknowledgment threshold of three can cause premature fast retransmit if packets are reordered beyond this limit, leading to congestion window reductions. In contrast, UDP does not inherently reorder packets but delivers them as received, leaving reassembly to the application, which may result in errors for time-sensitive applications like voice or video streaming if buffering is insufficient. Metrics such as the reordered packet ratio (total reordered packets divided by total received) and reordering extent (the maximum gap in sequence numbers) are used to quantify and evaluate the severity of reordering in network paths. To mitigate out-of-order delivery, receivers employ reordering buffers to hold early-arriving packets until their predecessors arrive, though buffer size and delay limits constrain effectiveness. Extensions like TCP SACK (Selective Acknowledgment) and D-SACK help distinguish reordering from loss, improving robustness, while protocols such as Deterministic Networking (DetNet) incorporate packet ordering functions for applications requiring strict sequencing. Overall, while minor reordering is common and often tolerable, excessive instances highlight underlying network inefficiencies that can impact reliability across diverse applications.

Overview

Definition

Out-of-order delivery in computer networking refers to the phenomenon where data packets arrive at the destination in a sequence different from the order in which they were transmitted by the source, particularly within IP-based networks that operate on a best-effort model. The Internet Protocol (IP) does not provide any guarantees regarding packet ordering, as it treats packets independently and routes them through potentially diverse paths, leading to possible reordering without inherent mechanisms to enforce sequential arrival. This contrasts with packet loss, where packets fail to arrive entirely, or duplication, where identical packets are received multiple times; out-of-order delivery involves all packets reaching the destination but in an incorrect sequence. A simple illustration of out-of-order delivery occurs when three consecutively numbered packets—labeled 1, 2, and 3—are sent in that order from the source but arrive at the destination as 1, 3, then 2, due to varying delays or paths taken by each packet. To detect and manage such disorder, transport-layer protocols commonly employ sequence numbers assigned to packets or their constituent data units, allowing the receiver to identify deviations from the expected order and reassemble the original sequence. In the TCP/IP protocol stack, out-of-order delivery is a challenge addressed primarily at the transport layer, where protocols like TCP use sequence numbers to ensure reliable, ordered data delivery to applications despite the underlying IP layer's lack of ordering guarantees.
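
The detection logic follows directly from the 1, 3, 2 example above: a packet counts as reordered when it arrives after a packet carrying a higher sequence number. The following minimal Python sketch illustrates that check; the function name and structure are illustrative, not taken from any protocol specification.

```python
# Minimal sketch: a packet is "reordered" if it arrives after a packet
# carrying a higher sequence number than its own.
def find_reordered(arrival_order):
    highest_seen = None
    reordered = []
    for seq in arrival_order:
        if highest_seen is not None and seq < highest_seen:
            reordered.append(seq)   # arrived later than a packet sent after it
        else:
            highest_seen = seq
    return reordered

# Packets 1, 2, 3 sent in order but arriving as 1, 3, 2: packet 2 is reordered.
print(find_reordered([1, 3, 2]))    # -> [2]
```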

Historical context

The concept of out-of-order packet delivery became prominent with the development of internetworking protocols in the late 1970s, building on the foundations of early packet-switched networks like the ARPANET (1969–1990). While ARPANET's initial Network Control Protocol (NCP), deployed from 1970 to 1983, supported host-to-host communication and relied on the network's Interface Message Processors for reliable, in-order delivery, the introduction of the Internet Protocol marked a shift to connectionless, best-effort delivery across heterogeneous systems. RFC 791, published in 1981, explicitly defined IP as a connectionless protocol that provides no guarantees for packet ordering, allowing datagrams to arrive in any sequence or not at all due to the decentralized nature of the internetwork. To mitigate this, the companion Transmission Control Protocol (TCP) specification in RFC 793, also from 1981, introduced sequence numbers and reassembly mechanisms, enabling receivers to reorder packets and reconstruct the original stream despite potential disorder introduced by the underlying network layer. Subsequent IETF discussions in the 2000s further highlighted out-of-order delivery in congested environments, as noted in RFC 3366 (2002), which advised link designers on minimizing reordering in automatic repeat request (ARQ) protocols to avoid exacerbating congestion and throughput issues in TCP flows. The issue gained increased visibility as a practical challenge with the widespread adoption of Equal-Cost Multi-Path (ECMP) routing in the 1990s, where traffic load balancing across equal-cost paths inherently risked reordering due to differential latencies. In more recent milestones, multipath capabilities in 5G networks, as specified in 3GPP TS 38.300 (2023), and the QUIC protocol's multipath extensions (draft-ietf-quic-multipath, ongoing), have amplified the issue by leveraging multiple simultaneous paths for resilience and throughput, necessitating advanced handling at higher layers.

Causes

Network routing factors

Network routing factors play a significant role in out-of-order delivery, primarily through mechanisms that distribute traffic across multiple paths with varying latencies. Equal-Cost Multi-Path (ECMP) routing, commonly used in modern networks to balance load, selects paths of equal metric cost by hashing packet headers, such as the 5-tuple identifying individual flows. While standard ECMP implementations keep packets from the same flow on a single path to avoid reordering, variations or misconfigurations that split packets across paths can lead to different transit times due to queueing differences or link speeds, resulting in reordering. Load balancing in routers and switches often employs hash-based distribution to spread traffic, but when configured for per-packet rather than per-flow balancing, sequential packets may follow uneven paths with disparate delays. This distribution, typically based on source/destination addresses or ports, can direct consecutive packets to links with varying congestion levels, exacerbating reordering especially in high-throughput environments. Such practices, though less common due to their impact on transport protocols, are noted in enterprise and data center setups for maximizing utilization. Asymmetric routing, where forward and return paths differ, further contributes to reordering in bidirectional flows, as packets in one direction may traverse faster routes while the reverse takes longer paths influenced by peering arrangements. This is prevalent in the Internet due to policy-based routing decisions by autonomous systems. In backbone networks, interactions between peering points and transit providers amplify this; for instance, at major exchange points, parallel components such as hunt groups and high traffic volumes led to frequent reordering observed in end-to-end measurements. In modern wireless networks, such as 5G and emerging 6G systems, frequent handovers due to high-speed mobility (e.g., in vehicular networks) can cause reordering as packets switch between base stations or paths mid-flow. Quantitatively, reordering probability escalates with path diversity; in Internet-scale tests near major backbones, over 90% of paths exhibited reordering for probe packets, with only a small fraction arriving in strict order. In data center fabric topologies employing ECMP, path multiplicity (e.g., multiple equal-cost routes in Clos networks) can increase reordering rates significantly under packet spraying variants, though standard per-flow hashing mitigates this within individual flows. These factors highlight how routing infrastructure inherently introduces variability in packet arrival order.
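
A simplified sketch of the per-flow hashing described above appears below; real routers compute the hash in hardware over configurable key fields, so the zlib.crc32 call and dictionary-based packet here are stand-ins for illustration only.

```python
# Illustrative sketch of per-flow ECMP path selection: hashing the 5-tuple
# keeps every packet of a flow on one path, whereas per-packet selection
# (e.g., round-robin) can split a flow across paths with different delays.
import zlib

def flow_hash(src_ip, dst_ip, src_port, dst_port, proto):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return zlib.crc32(key)          # stand-in for a router's hardware hash

def pick_path(packet, num_paths):
    h = flow_hash(packet["src_ip"], packet["dst_ip"],
                  packet["src_port"], packet["dst_port"], packet["proto"])
    return h % num_paths            # same flow -> same path -> order preserved

pkt = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
       "src_port": 40000, "dst_port": 443, "proto": 6}
print(pick_path(pkt, num_paths=4))  # every packet of this flow gets one index
```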

Device processing factors

In multi-core routers, parallel processing of packets across multiple cores can lead to out-of-order delivery due to variations in processing times and queue depths among cores. When packets from the same flow are assigned to different cores for handling, disparities in workload, cache efficiency, or queue management can cause later packets to complete processing and depart before earlier ones, disrupting sequence. For instance, dynamic core allocation schemes in network processors aim to mitigate this by balancing loads, but inherent variations in queue depths—such as deeper queues on busier cores delaying dequeued packets—still contribute to reordering in high-throughput environments. Interrupt coalescing in network interface cards (NICs) and switches introduces delays that exacerbate out-of-order arrivals by batching multiple packets before generating a single interrupt to the host CPU. This mechanism reduces CPU overhead in high-bandwidth scenarios by waiting for a timeout or packet count threshold, but it can allow subsequent packets to arrive and be processed faster if they bypass the coalescing delay, overtaking earlier batched ones. Studies show that such coalescing alters packet inter-arrival times, leading to reordering metrics like reorder density increasing under bursty traffic, particularly when combined with variable buffering in device pipelines. Quality of Service (QoS) scheduling mechanisms in devices prioritize packets based on traffic classes, intentionally reordering them to favor latency-sensitive traffic such as voice data, which can result in out-of-order delivery for non-prioritized streams. Devices employ multiple queues with strict priority or weighted fair scheduling, where high-priority packets (e.g., VoIP) are dequeued and forwarded ahead of lower-priority ones from the same link, causing sequence disruptions downstream. This reordering is a deliberate trade-off for QoS guarantees, but it increases the reordering extent in mixed-traffic networks, as measured by buffer-occupancy density in affected flows. Hardware offloading features, such as TCP Segmentation Offload (TSO) and Receive Side Scaling (RSS), contribute to out-of-order delivery by generating bursty transmissions or uneven flow distribution across receive queues. TSO allows the NIC to segment large TCP payloads into multiple packets, but variations in segmentation timing or host buffering can lead to bursts where later segments arrive out of sequence relative to prior flows. Similarly, RSS hashes packet flows to distribute them across multiple CPU cores and queues for parallel processing, but hash collisions or uneven load balancing can cause packets from the same flow to be processed at different speeds, resulting in reordering upon reassembly. Proper tuning of RSS indirection tables is essential to minimize this, as out-of-order arrivals can degrade throughput by triggering unnecessary retransmissions. In firewalls and intrusion prevention devices, deep packet inspection (DPI) processes packets unevenly by holding them in reorder buffers during stateful analysis, allowing faster non-inspected or lightly inspected packets to overtake those undergoing thorough scrutiny. DPI requires reassembly of TCP streams to inspect content, and if out-of-order packets arrive, the device buffers them until the full sequence is complete, but incomplete reassembly or buffer overflows can release packets in altered order. Cisco IOS implementations, for example, support configurable out-of-order packet caching in zone-based firewalls to handle this, preventing drops but still permitting reordering in high-volume inspections.
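
As a toy illustration of the parallel-processing effect (not a model of any particular NIC or router), the sketch below sprays packets of one flow across two simulated cores with different service times; the slower core releases its packets late, so the departure order no longer matches the arrival order.

```python
# Toy simulation: per-packet spraying across two "cores" with different
# per-packet processing times reorders a flow's packets on departure.
def simulate(packets, core_delay):
    finish = []
    for i, seq in enumerate(packets):
        core = i % len(core_delay)                  # naive per-packet spraying
        finish.append((i + core_delay[core], seq))  # arrival slot + service time
    finish.sort()                                   # packets depart by finish time
    return [seq for _, seq in finish]

# Core 0 is fast (1 time unit), core 1 is slow (3 time units).
print(simulate([1, 2, 3, 4, 5, 6], core_delay=[1, 3]))
# -> [1, 3, 2, 5, 4, 6]: packets handled by the slow core depart out of order
```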

Protocol handling

TCP mechanisms

TCP employs 32-bit sequence numbers in its header to uniquely identify and order segments, enabling the receiver to detect and reassemble out-of-order packets. These sequence numbers, ranging from 0 to 2^{32}-1 with wraparound, are assigned to each byte of data transmitted, allowing the sender to track progress with variables like SND.NXT (next sequence number to send) and the receiver to track expected order via RCV.NXT (next expected sequence number). A segment is considered acceptable if its sequence number falls within the receive window, defined as RCV.NXT to RCV.NXT + RCV.WND - 1, where RCV.WND is the advertised receive window size. On the receiver side, out-of-order packets are buffered in a reassembly queue until the missing segments arrive, ensuring data is delivered to the application in the correct sequence. The receiver queues segments that are within the window but not contiguous with RCV.NXT, holding them for later processing once gaps are filled. This buffering prevents premature delivery of disordered data, maintaining TCP's reliability guarantee, though it requires sufficient memory allocation for the reassembly queue. To detect losses causing out-of-order arrivals, TCP uses duplicate acknowledgments (DUPACKs), where the receiver sends an ACK for the last in-order segment upon receiving an out-of-order one, signaling a "hole" in the sequence. Upon receiving three such duplicate ACKs—indicating a missing segment without intervening packets—the sender triggers the fast retransmit algorithm, retransmitting the presumed lost segment immediately without waiting for the retransmission timer. This selective retransmission targets only the gap, allowing continued transmission of new data and improving efficiency over timeout-based recovery. The Selective Acknowledgment (SACK) extension, defined in RFC 2018 (1996), enhances this by permitting the receiver to report multiple non-contiguous blocks of successfully received data beyond the cumulative acknowledgment. SACK options include up to four blocks, each defined by left and right edge sequence numbers, enabling the sender to retransmit only truly missing segments rather than assuming all data after the gap is lost. This optimizes recovery from multiple losses or significant reordering, reducing unnecessary retransmissions and improving throughput in reordered environments. The Duplicate Selective Acknowledgment (D-SACK) extension, defined in RFC 2883 (2000), builds on SACK by using the first SACK block to report receipt of duplicate segments. This allows the sender to detect cases where fast retransmit was triggered by reordering rather than loss—for example, when a delayed original packet arrives after its retransmission—avoiding erroneous congestion control responses like unnecessary congestion window reductions. Window scaling, introduced in RFC 1323, addresses limitations in handling large reordering by expanding the 16-bit window field via a scale factor of up to 14 bits (yielding windows up to 1 GB), which is negotiated during connection setup. This larger receive window allows buffering more out-of-order packets—up to the scaled RCV.WND size—before discarding them for lack of window space, though standard SACK limits reporting to four blocks, constraining recovery for extreme reordering. The 32-bit sequence space fundamentally limits the effective window to half the space (2^{31} bytes) to distinguish new data from wrapped-around duplicates, providing a basic bound on tolerable reordering without advanced extensions.
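
The receiver-side behavior described above—buffering out-of-order segments and re-acknowledging the last in-order byte—can be sketched as follows. This is a simplified illustration that borrows the RCV.NXT name from the specification; the class and its methods are hypothetical, not code from any real TCP stack.

```python
# Simplified receiver sketch: segments beyond RCV.NXT are buffered, each
# out-of-order arrival triggers a duplicate ACK, and buffered data is
# released in order once the gap fills.
class TcpReceiverSketch:
    def __init__(self, rcv_nxt=0):
        self.rcv_nxt = rcv_nxt        # next expected sequence number
        self.ooo = {}                 # out-of-order buffer: seq -> payload
        self.delivered = []           # bytes handed to the application, in order

    def on_segment(self, seq, payload):
        if seq == self.rcv_nxt:                     # in-order: deliver directly
            self.deliver(payload)
            while self.rcv_nxt in self.ooo:         # drain any buffered successors
                self.deliver(self.ooo.pop(self.rcv_nxt))
            return ("ACK", self.rcv_nxt)
        elif seq > self.rcv_nxt:                    # out-of-order: buffer it
            self.ooo[seq] = payload
            return ("DUPACK", self.rcv_nxt)         # re-ACK last in-order byte
        return ("ACK", self.rcv_nxt)                # old or duplicate segment

    def deliver(self, payload):
        self.delivered.append(payload)
        self.rcv_nxt += len(payload)

rx = TcpReceiverSketch()
print(rx.on_segment(0, b"aaaa"))    # ('ACK', 4)
print(rx.on_segment(8, b"cccc"))    # ('DUPACK', 4)  -- gap at seq 4
print(rx.on_segment(4, b"bbbb"))    # ('ACK', 12)    -- gap filled, buffer drained
```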

UDP and other protocols

Unlike TCP, which enforces in-order delivery through sequence numbers and retransmissions, UDP operates as a connectionless transport protocol without any built-in sequencing or reordering mechanisms. UDP simply delivers datagrams to the application in the order they are received by the receiving host, which can result in out-of-order arrival if packets take different network paths or experience varying delays. This design choice prioritizes low latency and minimal overhead, making UDP suitable for applications where occasional out-of-order packets are tolerable or can be handled at the application level. The responsibility for managing out-of-order delivery in UDP-based systems falls to the overlying application protocols. For instance, the Real-time Transport Protocol (RTP), commonly used over UDP for real-time media, incorporates a 16-bit sequence number in each packet header to allow receivers to detect and reorder or discard out-of-order packets, ensuring synchronized playback despite network jitter. Similarly, applications like DNS or VoIP may implement custom buffering or ignore ordering for non-critical data, but for order-sensitive scenarios, developers must add explicit sequencing logic to reassemble payloads correctly. The QUIC protocol, defined in RFC 9000, addresses out-of-order delivery more robustly while building on UDP's foundation to support reliable, multiplexed connections. QUIC uses monotonically increasing packet numbers for ordering, combined with acknowledgment (ACK) frames that include ranges of received packet numbers, enabling explicit detection and handling of reorders without head-of-line blocking across streams. This mechanism, particularly useful in multipath environments like mobile networks, allows QUIC to tolerate packet reordering by delaying ACKs until gaps are filled or using provisional ACKs for reordered packets. Other protocols layered over UDP or similar transports exhibit varied approaches to out-of-order delivery. The Stream Control Transmission Protocol (SCTP) supports multi-streaming, where each stream maintains independent partial ordering—delivering data within a stream in sequence but allowing inter-stream reordering to avoid blocking—via stream identifiers and per-stream sequence numbers. In contrast, encapsulation in tunnels can exacerbate reordering by introducing additional processing delays or path variations, potentially fragmenting or delaying packets without native correction, requiring upper-layer protocols to compensate. These protocols highlight a key trade-off: UDP's simplicity and lower overhead facilitate faster initial delivery and reduced CPU usage compared to ordered protocols, but it shifts the burden of reordering to applications, which must implement corrections for order-sensitive scenarios like real-time communication or file transfers. This app-level flexibility enables tailored handling but increases development complexity for reliability.
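
The kind of application-level sequencing that RTP-style sequence numbers enable can be sketched as below. The window size, the decision to discard late packets, and the class structure are illustrative assumptions, not values mandated by RTP.

```python
# Hedged sketch of application-layer reordering over UDP: packets within a
# small window are put back in order before playout; packets that arrive
# after their slot has already been played are discarded.
class PlayoutBuffer:
    def __init__(self, window=4):
        self.window = window
        self.next_seq = 0
        self.pending = {}            # seq -> payload waiting for earlier packets

    def receive(self, seq, payload):
        if seq < self.next_seq:
            return []                # too late: discard instead of replaying
        self.pending[seq] = payload
        out = []
        # Release contiguous packets, or skip ahead if the buffer grows too large.
        while self.next_seq in self.pending or len(self.pending) > self.window:
            if self.next_seq in self.pending:
                out.append(self.pending.pop(self.next_seq))
            else:
                self.next_seq += 1   # give up on the missing packet
                continue
            self.next_seq += 1
        return out

buf = PlayoutBuffer()
for seq, data in [(0, "f0"), (2, "f2"), (1, "f1"), (3, "f3")]:
    print(seq, "->", buf.receive(seq, data))
# 0 -> ['f0'] / 2 -> [] / 1 -> ['f1', 'f2'] / 3 -> ['f3']
```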

Impacts

Performance degradation

Out-of-order packet delivery in TCP leads to head-of-line (HOL) blocking, where the receiver must wait for a missing packet before processing subsequent out-of-order packets, thereby stalling data delivery and reducing effective throughput. This mechanism ensures in-order delivery but introduces inefficiencies, as buffered packets remain idle until the gap is filled via retransmission or late arrival. The problem is exacerbated at the transport layer, with buffering and subsequent retransmissions adding delays that can reach hundreds of milliseconds in scenarios involving persistent reordering or timeouts. For instance, in high-speed networks, reordering can extend the effective round-trip time by forcing TCP into loss-recovery modes, where the time until a missing packet is detected and retransmitted—often governed by duplicate acknowledgments or timers—compounds the initial disorder. Empirical simulations indicate average reordering delay times equivalent to 1-2 packet intervals at rates above 100 Mbps, scaling with network load. Bandwidth inefficiency arises from the overhead of handling reordering, including the generation of duplicate acknowledgments (DUPACKs) to signal gaps and selective retransmissions of only missing segments, which consume additional network resources without advancing useful data transfer. This overhead can trigger spurious congestion control invocations, as TCP misinterprets reordering as loss, leading to unnecessary window reductions. Key metrics for quantifying out-of-order delivery include the reorder ratio, defined as the percentage of packets arriving out of sequence relative to the total packets in a flow, and the reordering extent, which measures the maximum displacement or gap size (e.g., number of positions a packet is reordered). These metrics help assess severity; even low reorder ratios can lead to noticeable disruptions. Empirical studies of IP backbones reveal reorder ratios typically ranging from 0.3% to 2%, with occasional peaks up to 1.65% in high-load flows, leading to significant throughput reductions in affected sessions due to repeated fast retransmits and window halving. In simulated high-speed environments mimicking backbone conditions, even low reordering (e.g., a 0.04% rate of reordering events) can reduce throughput from hundreds of Mbps to below 10 Mbps for standard TCP variants.
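
The two metrics named above can be computed from the arrival order of sequence numbers as in the sketch below, which follows the informal definitions given in the text (positions displaced for the extent) rather than the exact formulas of any RFC; the function name is illustrative.

```python
# Sketch of the reorder ratio (fraction of packets arriving after a
# higher-numbered packet) and the reordering extent (largest displacement
# between a packet's send position and its arrival position).
def reorder_metrics(arrival_order):
    total = len(arrival_order)
    highest = None
    reordered = 0
    max_extent = 0
    for arrival_pos, seq in enumerate(arrival_order):
        if highest is not None and seq < highest:
            reordered += 1
            # Assumes sequence numbers 0, 1, 2, ... equal to send positions.
            max_extent = max(max_extent, arrival_pos - seq)
        else:
            highest = seq
    return reordered / total, max_extent

ratio, extent = reorder_metrics([0, 1, 4, 2, 3, 5])   # packets 2 and 3 are late
print(f"reorder ratio = {ratio:.2f}, max extent = {extent}")
# reorder ratio = 0.33, max extent = 1
```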

Application-level effects

Out-of-order delivery of packets can significantly disrupt real-time applications that rely on timely and sequential data arrival, such as Voice over IP (VoIP) systems. In VoIP, reordered audio packets lead to playback errors, causing garbled sound, clipping, and dropped audio bits, which degrade call quality and intelligibility. For instance, when packets containing sequential audio samples arrive out of sequence, the receiver may play incomplete or incorrect segments, resulting in audible distortions that become perceptible even at low reordering rates. Video streaming protocols like HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH) are similarly affected, where out-of-order packets contribute to frame drops and playback interruptions. Reordered video packets can cause visual artifacts, such as frozen frames or audio-video desynchronization, reducing the overall quality of experience (QoE) as the decoder struggles to reconstruct the stream correctly. Studies have shown that packet reordering due to network traffic directly lowers perceived video quality, with users reporting noticeable degradation in smoothness and clarity. In UDP-based multiplayer games, out-of-order delivery exacerbates position glitches and synchronization issues among players. Game state updates, such as player movements or actions, arriving out of sequence can lead to inconsistent world views, causing erratic behavior like teleporting characters or mismatched collisions, which frustrate users and disrupt gameplay. Since UDP provides no inherent reordering, applications must implement custom sequencing to mitigate these effects, but persistent reordering still introduces latency in reconciling the game state. Bulk transfer applications, such as FTP or HTTP downloads, experience less noticeable impacts from out-of-order delivery due to TCP's built-in reordering at the transport layer. While reordering may trigger temporary buffering and retransmissions, slowing overall throughput, the effects are typically imperceptible to users as the protocol reassembles data before application delivery, prioritizing reliability over immediacy. Modern protocols like QUIC can also suffer from reordering, interpreting it as loss and reducing performance compared to TCP in some cases. To counteract these issues, applications often employ buffering strategies to reorder packets, trading increased latency for improved smoothness. Many media players use jitter buffers ranging from tens of milliseconds to several seconds, allowing time for delayed or reordered packets to arrive before playback. In WebRTC-based video calls, the NetEQ jitter buffer handles reordering by temporarily storing packets and reassembling them, but excessive reordering increases delay and can lead to perceptible quality loss, with user satisfaction declining as reorder rates rise.
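
A jitter buffer of the kind mentioned above essentially trades added playout delay for tolerance to reordering; the time-based sketch below illustrates the idea with assumed values (a 20 ms frame interval and a 60 ms buffer delay), which are illustrative rather than taken from any particular player.

```python
# Illustrative time-based jitter buffer: each packet is held until its target
# playout time (sequence * frame interval + buffer delay), giving late or
# reordered packets a grace period to arrive before their slot is played.
import heapq

def playout_schedule(arrivals, frame_ms=20, buffer_ms=60):
    """arrivals: list of (arrival_time_ms, seq). Returns (playout_ms, seq) in playout order."""
    heap = []
    for t_arrive, seq in arrivals:
        ideal = seq * frame_ms + buffer_ms          # target playout time
        playout = max(ideal, t_arrive)              # cannot play before it arrives
        heapq.heappush(heap, (playout, seq))
    return [heapq.heappop(heap) for _ in range(len(heap))]

# Packet 1 arrives late and out of order, but still inside the 60 ms budget.
print(playout_schedule([(0, 0), (25, 2), (55, 1), (61, 3)]))
# -> [(60, 0), (80, 1), (100, 2), (120, 3)]: played back in order
```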

Detection and mitigation

Measurement techniques

Passive monitoring techniques involve capturing network traffic and analyzing packet sequence numbers to identify reordering events without injecting additional traffic. Tools such as Wireshark and tcpdump are commonly used for this purpose. Wireshark's TCP analysis feature tracks session states and flags packets as "TCP Out-Of-Order" when a packet arrives with a sequence number that does not follow the expected order, allowing users to quantify reordering by filtering and counting such events in capture files. Similarly, tcpdump captures raw packets, after which scripts or post-processing tools like tshark (Wireshark's command-line variant) can parse sequence numbers to detect and measure reordering ratios in TCP flows. These methods are effective for real-world traffic but may be influenced by protocol-specific behaviors, such as TCP retransmissions, requiring careful interpretation to distinguish true reordering from other anomalies. Active probing methods send controlled probe packets to measure reordering directly, providing quantifiable metrics under specific conditions. ICMP echo requests and replies, as well as UDP probes, are standard for this; for instance, sending bursts of ICMP pings or UDP packets with embedded sequence numbers allows calculation of the reorder percentage by comparing send and receive orders at the endpoint. These probes can also capture associated delays, revealing the extent of reordering, such as the number of positions a packet is displaced. Bennett et al. demonstrated that over 90% of probe bursts exhibited reordering using ICMP, though results vary by path and must account for potential ICMP filtering in networks. Key metrics for quantifying out-of-order delivery include the Reorder Free Ratio, which measures the proportion of packets arriving in sequence without reordering, and the Duplicate Tolerance parameter from the IP Performance Metrics (IPPM) framework, which accounts for allowable duplicates in reordering assessments to avoid false positives from retransmissions. Inter-packet arrival time variance serves as an indirect indicator, where increased variability in arrival times between consecutive packets signals potential reordering events disrupting expected timing. The required reorder buffer size, often derived from the Reorder Buffer-Occupancy Density (RBD), estimates the maximum number of packets a receiver must buffer to restore order, helping evaluate tolerance thresholds. For controlled environments, iperf in UDP mode sends sequenced probes and reports out-of-order packet counts directly in its output, enabling precise measurement of reordering percentages during bandwidth tests. These tools align with IPPM standards, such as RFC 4737 from the IETF's IPPM working group (2006), which formalizes reordering metrics including tolerance for duplicates to ensure robust evaluations.
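
For offline analysis of a capture, Wireshark's out-of-order findings can be counted from the command line with tshark; the sketch below assumes tshark is installed and uses capture.pcap as a placeholder file name.

```python
# Count packets that Wireshark's TCP dissector flags as out-of-order in a
# saved capture: tshark -r reads the file and -Y applies a display filter.
import subprocess

def count_out_of_order(pcap_path):
    result = subprocess.run(
        ["tshark", "-r", pcap_path, "-Y", "tcp.analysis.out_of_order"],
        capture_output=True, text=True, check=True,
    )
    # One summary line is printed per packet matching the filter.
    lines = [line for line in result.stdout.splitlines() if line.strip()]
    return len(lines)

if __name__ == "__main__":
    print(count_out_of_order("capture.pcap"))
```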

Strategies to reduce occurrence

Several strategies exist to minimize out-of-order packet delivery in networks, focusing on routing configurations, device settings, overall network architecture, protocol optimizations, and hardware capabilities. These approaches aim to ensure that packets within the same flow follow consistent paths or are processed in a manner that preserves sequence, thereby reducing reordering incidents without relying on extensive application-level corrections. Flow-based hashing in Equal-Cost Multi-Path (ECMP) routing is a key method to direct all packets of the same flow along the identical network path. By computing a hash over the 5-tuple—source and destination IP addresses, source and destination ports, and protocol type—routers assign consistent forwarding decisions, preventing the path splitting that leads to reordering in per-packet load balancing. This technique is widely implemented in modern routers to maintain order while achieving load distribution across multiple paths. In environments sensitive to reordering, such as real-time applications, disabling features like Receive Side Scaling (RSS) and TCP Segmentation Offload (TSO) on network interfaces can help. RSS distributes incoming packets across multiple CPU cores using flow hashing, but misconfigurations or hardware limitations may occasionally disrupt order; similarly, TSO segments large TCP payloads in the NIC, potentially causing inconsistencies when combined with other network elements like firewalls. Turning these off forces sequential processing on a single core or without offload, eliminating such risks at the cost of reduced throughput. Network design plays a crucial role in avoiding reordering by enforcing symmetric paths and structured forwarding. Asymmetric routing, where inbound and outbound traffic take different routes, often results in packets arriving out of order due to varying latencies; implementing symmetric path policies through route symmetry checks or BGP attributes mitigates this. Additionally, Multiprotocol Label Switching (MPLS) provides strict ordering by labeling packets for deterministic paths in label-switched networks, bypassing hop-by-hop routing variability and ensuring in-order delivery, particularly in backbones. Protocol enhancements in TCP further reduce the impact of minor reordering by improving recovery mechanisms. Enabling Selective Acknowledgment (SACK), as defined in RFC 2018, allows receivers to acknowledge non-contiguous byte ranges, enabling senders to retransmit only missing segments rather than assuming losses from gaps caused by reordering. Similarly, TCP timestamps (RFC 1323) provide precise sequencing information, aiding in duplicate detection and accurate reassembly even when packets arrive slightly out of order, thus tolerating low-level disruptions without performance penalties. Hardware solutions, such as switches supporting per-flow queuing, offer fine-grained control to preserve order at the device level. These switches maintain separate queues for individual flows, preventing head-of-line blocking where a delayed packet in one flow stalls others; instead, each flow's packets are buffered and dequeued in sequence. This is particularly effective in Time-Sensitive Networking (TSN) environments, where dynamic allocation of queues per flow ensures low reordering delays across high-speed links.
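
Related to flow-based hashing and path symmetry, a direction-independent flow key maps both directions of a connection to the same path or queue; the sketch below shows the idea by sorting the endpoints before hashing. The use of zlib.crc32 and the function name are illustrative assumptions, since real devices implement their own hash functions in hardware.

```python
# Direction-independent flow key: ordering the endpoints canonically makes
# A->B and B->A hash alike, so both directions select the same path index.
import zlib

def symmetric_flow_key(ip_a, port_a, ip_b, port_b, proto):
    ends = sorted([(ip_a, port_a), (ip_b, port_b)])
    key = f"{ends[0]}|{ends[1]}|{proto}".encode()
    return zlib.crc32(key)

fwd = symmetric_flow_key("10.0.0.1", 40000, "10.0.0.2", 443, 6)
rev = symmetric_flow_key("10.0.0.2", 443, "10.0.0.1", 40000, 6)
print(fwd == rev)   # True: forward and return traffic map to the same key
```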

References

  1. [1]
    RFC 4737: Packet Reordering Metrics
  2. [2]
    [PDF] COS 318: Internetworking - cs.Princeton
    Best-Effort Packet-Delivery Service. ◇ Best-effort delivery. ○ Packets may be lost. ○ Packets may be corrupted. ○ Packets may be delivered out of order.
  3. [3]
    RFC 5236 - Improved Packet Reordering Metrics - IETF Datatracker
    ... out-of-order delivery of packets. If an arriving packet is early, it is added to a hypothetical buffer until it can be released in order [Ban02]. The ...
  4. [4]
    RFC 5236: Improved Packet Reordering Metrics
    ... out-of-order delivery of packets. If an arriving packet is early, it is added to a hypothetical buffer until it can be released in order [Ban02]. The ...
  5. [5]
    A Brief History of the Internet & Related Networks
    The objective was to develop communication protocols which would allow networked computers to communicate transparently across multiple, linked packet networks.
  6. [6]
    RFC 791: Internet Protocol
    The internet protocol is designed for use in interconnected systems of packet-switched computer communication networks. Such a system has been called a catenet.
  7. [7]
    BGP and equal-cost multipath (ECMP) - Noction
    Mar 25, 2016 · It's very common to use parallel links to increase bandwidth. This mechanism is often called equal-cost multipath (ECMP).
  8. [8]
    [PDF] ETSI TS 138 300 V18.5.0 (2025-04)
    3GPP TS 38.300 version 18.5.0 Release 18. - Reordering and in-order delivery;. - Out-of-order delivery;. - Duplicate discarding. Since PDCP does not allow ...
  9. [9]
    draft-ietf-quic-multipath-17 - Managing multiple paths for a QUIC ...
    This document specifies a multipath extension for the QUIC protocol to enable the simultaneous usage of multiple paths for a single connection.
  12. [12]
    Packet reordering is not pathological network behavior
Packet reordering is not pathological network behavior. Editor: Mostafa H. Ammar, Georgia Institute of Technology, Atlanta.
  13. [13]
    On the efficacy of fine-grained traffic splitting protocolsin data center ...
    More fine-grained traffic splitting techniques are typically not preferred because they can cause packet reordering that can, according to conventional wisdom, ...
  14. [14]
    Scaling multi-core network processors without the reordering ...
    Today, designers of network processors strive to keep the packet reception and transmission orders identical, and therefore avoid any possible out-of-order
  15. [15]
    An efficient packet scheduling algorithm in network processors
However, such multiprocessing also gives rise to increased out-of-order departure of processed packets. In this paper, we first propose a dynamic batch co ...
  16. [16]
    Dynamic Core Allocation and Packet Scheduling in Multicore ...
    Dec 1, 2016 · In this paper, we propose a packet scheduling scheme that considers the multiple dimensions of locality to improve the throughput of a network ...
  17. [17]
    Sorting Reordered Packets with Interrupt Coalescing - ScienceDirect
    Oct 12, 2009 · We propose a new strategy, Sorting Reordered Packets with Interrupt Coalescing (SRPIC), to reduce packet reordering in the receiver.
  18. [18]
    [PDF] Effects of Interrupt Coalescence on Network Measurements *
    Abstract. Several high-bandwidth network interfaces use Interrupt Co- alescence (IC), i.e., they generate a single interrupt for multiple packets.
  19. [19]
    Priority Scheduling Algorithms for QoS support in WDM PON-based ...
    This is achieved through dynamic packet reordering and scheduling in different priority queues and wavelengths in a ring-based WDM-PON architecture that is ...
  20. [20]
    A Bandwidth Aggregation-Aware QoS Negotiation Mechanism for ...
    To cope with packet reordering, a new scheduling strategy is presented. The performance evaluation of the proposed bandwidth aggregation-aware QoS ...
  21. [21]
    Network Adapter Performance Tuning in Windows Server
Jul 7, 2025 · Common offload features include TCP checksum offload, Large Send Offload (LSO), and Receive Side Scaling (RSS). Enabling network adapter ...
  22. [22]
    TCP/IP performance tuning for Azure VMs - Microsoft Learn
Apr 21, 2025 · Receive side scaling (RSS) is a network driver technology that ... packets arriving out of order, which can affect the delivery of packets.
  23. [23]
    [PDF] TCP Out-of-Order Packet Support for Cisco IOS Firewall and Cisco ...
    Nov 17, 2006 · This feature allows out-of-order packets in TCP streams to be cached and reassembled before they are inspected by Cisco IOS Intrusion Prevention ...
  24. [24]
    Security Configuration Guide: Zone-Based Policy Firewall, Cisco ...
    Nov 26, 2017 · Layer 7 inspection is a stateful packet inspection and it does not work when TCP packets are out of order. In Cisco IOS XE Release 3.5S, if ...
  25. [25]
    RFC 793 - Transmission Control Protocol - IETF Datatracker
This document describes the DoD Standard Transmission Control Protocol (TCP). There have been nine earlier editions of the ARPA TCP specification on which this ...
  26. [26]
    RFC 9293 - Transmission Control Protocol (TCP) - IETF Datatracker
    The send window is the portion of the sequence space labeled 3 in Figure 3. ... reorder them if they arrive out of order. This is not a serious problem ...
  27. [27]
    RFC 2581 - TCP Congestion Control - IETF Datatracker
The fast retransmit algorithm uses the arrival of 3 duplicate ACKs (4 identical ACKs without the arrival of any other intervening packets) as an indication ...
  28. [28]
    RFC 2018 - TCP Selective Acknowledgment Options
A Selective Acknowledgment (SACK) mechanism, combined with a selective repeat retransmission policy, can help to overcome these limitations.
  29. [29]
    RFC 1323 - TCP Extensions for High Performance - IETF Datatracker
... Window Scale options in their SYN segments to enable window scaling in either direction. If window scaling is enabled, then the TCP that sent this option ...
  30. [30]
    [PDF] Packet reordering, high speed networks and transport protocol ...
    In this paper we study the occurrence of packet reordering on a commercial IP backbone network, reporting on the variation in reordering rate dependent on the ...
  31. [31]
    [PDF] Packet Reordering in High-Speed Networks and Its Impact on High ...
Abstract—Several recent Internet measurement studies show that the higher the packet sending rate, the higher the packet reordering probability.
  32. [32]
    [PDF] A New TCP for Persistent Packet Reordering - UCSB ECE
Today's implementations of TCP are not compatible with networks that reorder packets and suffer great reductions in throughput when faced with persistent ...
  33. [33]
    [PDF] Measuring Packet Reordering
Similarly, recent empirical analyses of packet corruption have suggested that many errors may be undetected in long-lived TCP streams due to the design ...
  34. [34]
    [PDF] Novel approaches to end-to-end packet reordering measurement
    In this paper we propose three new methods to end- to-end packet reordering measurement, which can detect all four reordering cases: no-reordering, forward-path ...
  35. [35]
    (PDF) Out of order packets analysis on a real network environment
    QoS such as end-to-end delay might be important for other types of multimedia communications that involve real-time traffic such as voice and video.
  36. [36]
    Measuring Effect of Packet Reordering on Quality of Experience ...
    Aug 7, 2025 · From our experiments, we found that QoE of users is decreased when video quality is reordered due to network traffic. This work will help ...
  37. [37]
    UDP vs. TCP | Gaffer On Games
    Oct 1, 2008 · There is also no guarantee of ordering of packets with UDP. You could send 5 packets in order 1,2,3,4,5 and they could arrive completely out of ...
  38. [38]
    Out-of-order Packets Trash Voice and Video | Network World
    Mar 11, 2009 · Voice and video endpoints are very sensitive to packet reordering and may run out of buffering or the necessary CPU power to reorder packets ...
  39. [39]
    How WebRTC's NetEQ Jitter Buffer Provides Smooth Audio
    Jun 3, 2025 · Packet reorder (out-of-order) is when packets arrive at the receiver in a different order than they were sent. One might assume this is a ...
  40. [40]
    Effect of Packet Loss and Reorder on Quality of Audio Streaming
    Sep 23, 2019 · The results show the user's satisfaction level is decreased when packet loss and reorder level is increased in audio streams. The user accepted ...
  41. [41]
    7.5. TCP Analysis - Wireshark
    Wireshark's TCP dissector tracks the state of each TCP session and provides additional information when problems or potential problems are detected.
  42. [42]
    tcpdump(1) man page | TCPDUMP & LIBPCAP
Jun 30, 2025 · tcpdump prints out a description of the contents of packets on a network interface that match the Boolean expression (see pcap-filter(7) for the expression ...
  43. [43]
    RFC 4737 - Packet Reordering Metrics - IETF Datatracker
    Oct 14, 2015 · This memo defines metrics to evaluate whether a network has maintained packet order on a packet-by-packet basis.
  44. [44]
    Packet Reordering (Out-of-Order Packets) & How to Detect It - Obkio
Jul 22, 2024 · Packet reordering, also known as out-of-order packets, refers to the phenomenon where network packets arrive at their destination out of sequence.
  45. [45]
    iPerf3 and iPerf2 user documentation - iPerf
Since TCP does not report loss to the user, I find UDP tests helpful to see packet loss along a path. Jitter calculations are continuously computed by the ...
  46. [46]
    Packet Reordering in the Era of 6G: Techniques, Challenges ... - MDPI
    Packet reordering can lead to increased latency, decreased throughput ... A network-layer proxy for bandwidth aggregation and reduction of IP packet reordering.
  47. [47]
    Using the IPv6 flow label for equal cost multipath routing and link ...
If the header fields included in the hash are consistent, all packets from a given flow will generate the same hash, so out-of-order delivery will not occur.
  48. [48]
    Disable TCP offloading and RSS settings - AWS Prescriptive Guidance
    TCP offloading moves packet processing to the network adapter, and RSS distributes network traffic processing. Disabling them may help with connectivity issues ...
  49. [49]
    need to disable RSS to verify packet reordering problem in 2.8.0.
Jul 20, 2025 · I have diagnosed a packet reordering issue in 2.8.0, its not if_pppoe, the only other major change on networking since 2.7.2 is that now the ...
  50. [50]
    Asymetric routing - causes and effects?
Jun 25, 2013 · Asymmetric routing can be bad, mainly because you risk packets being delivered in the wrong order, but again, depends greatly on the topology ...
  51. [51]
    Load Balancing MPLS Traffic | Junos OS - Juniper Networks
Load balancing can become skewed as a result, or the incidence of out-of-order packet delivery may rise. For these cases, labels from the bottom of the ...
  52. [52]
    RFC 1323: TCP Extensions for High Performance
RFC-1072 defined a new TCP "SACK" option to send a selective acknowledgment. ... For efficiency, we combine the timestamp and timestamp reply fields into a single ...
  53. [53]
    [PDF] On Packet Reordering in Time-Sensitive Networks - arXiv
Jun 21, 2021 · Re-sequencing buffers are then used to provide in-order delivery, the parameters of which (timeout, buffer size) may affect worst-case delay and ...
  54. [54]
    Dynamic Per-Flow Queues in Shared Buffer TSN Switches
    Mar 19, 2025 · This article aims to dynamically maintain the mapping between flows and queues to implement dynamic per-flow queuing. Fig. 3. In a fully ...