
Sliding window protocol

The sliding window protocol is a bidirectional transmission technique used in computer networking to enable reliable, ordered, and flow-controlled data transfer over unreliable channels. It allows a sender to transmit multiple data units (such as packets or frames) within a defined window size before requiring acknowledgments, thereby improving channel utilization compared to stop-and-wait methods. The protocol operates at both the data link layer and the transport layer, employing sequence numbers to track transmitted units and cumulative acknowledgments to confirm receipt, with timers triggering retransmissions for lost or corrupted units to ensure reliability. The window size, which determines the maximum number of unacknowledged units in transit, is dynamically advertised by the receiver based on its buffer availability, providing end-to-end flow control to prevent overwhelming the recipient. In practice, the sender and receiver maintain send and receive windows that "slide" forward as acknowledgments arrive, keeping the communication channel efficiently filled while handling errors through mechanisms such as timeouts.

Key variants of the sliding window protocol address different trade-offs in error recovery and efficiency: Go-Back-N, which uses cumulative acknowledgments and retransmits all frames from the erroneous one onward upon detecting a loss, simplifying receiver logic but potentially wasting bandwidth on high-error channels; and Selective Repeat, which employs individual acknowledgments and retransmits only lost frames, offering better performance in lossy environments at the cost of increased complexity for buffering out-of-order frames at the receiver. These variants require sequence number spaces large enough to distinguish frames within the window (e.g., at least one more than the window size to avoid ambiguity). The protocol has been foundational since the early development of packet-switched networks, influencing designs like the ARPANET's flow control using end-to-end window mechanisms with Ready For Next Message (RFNM) permits, and modern implementations in protocols such as TCP at the transport layer, where the receive window field in the header advertises buffer space in octets, and HDLC at the data link layer, which supports sliding window modes for frame-level reliability over serial links. Its efficiency stems from pipelining data transmission, making it essential for high-throughput applications while adapting to varying network conditions through window scaling extensions in later TCP evolutions.

Introduction

Basic Concept

The sliding window protocol is a fundamental technique for achieving reliable data transmission over unreliable communication channels, functioning as both a flow control mechanism to regulate data flow and an error recovery method to handle lost or corrupted packets in the data link and transport layers. It enables the sender to transmit multiple packets without waiting for individual acknowledgments, thereby pipelining data transfer to enhance efficiency. Central to the protocol is the concept of a window, defined as a contiguous range of sequence numbers that identifies the permissible set of unacknowledged packets the sender may have outstanding at any time. The window size, typically denoted as W, limits the maximum number of such packets (e.g., W = 4 allows up to four unacknowledged packets), and both sender and receiver maintain synchronized windows using sequence numbers to track progress. This ensures ordered delivery and prevents buffer overflow at the receiver by coordinating the pace between sender and receiver. The protocol operates by sliding these windows forward upon successful acknowledgments: when the receiver confirms receipt of packets (via cumulative ACKs indicating all prior packets up to a certain number have arrived), the sender advances its window by releasing the acknowledged slots and opening new ones for fresh packets. Similarly, the receiver's window shifts to accept subsequent sequence numbers. For example, if the sender's initial window spans sequence numbers 1 through 5 and an ACK for packet 1 arrives, the window slides to 2 through 6, permitting transmission of packet 6 while the earlier slot is freed. This sliding mechanism, often illustrated in diagrams showing parallel timelines of packet transmissions and ACK returns, maintains reliability through timeouts and retransmissions if acknowledgments are not received within expected bounds. It is particularly beneficial in high-latency networks, where waiting for each acknowledgment would severely limit throughput.
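This sliding behavior can be sketched in a few lines of Python (an illustrative example only; the class and method names such as SendWindow and on_cumulative_ack are invented for this sketch):

```python
# Minimal sketch of a sender window that slides on cumulative ACKs.
# All names (SendWindow, on_cumulative_ack, can_send) are illustrative only.

class SendWindow:
    def __init__(self, size):
        self.size = size      # W: maximum packets outstanding
        self.base = 1         # oldest unacknowledged sequence number
        self.next_seq = 1     # next sequence number to transmit

    def can_send(self):
        # Allowed to transmit while fewer than W packets are outstanding.
        return self.next_seq < self.base + self.size

    def send_one(self):
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def on_cumulative_ack(self, ack):
        # A cumulative ACK for `ack` confirms everything up to and including it,
        # so the window's left edge advances past it.
        if ack >= self.base:
            self.base = ack + 1

w = SendWindow(size=5)              # window spans 1..5
sent = [w.send_one() for _ in range(5)]
w.on_cumulative_ack(1)              # ACK for packet 1 arrives
print(w.can_send(), w.next_seq)     # True, 6 -> window now spans 2..6
```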

Historical Development

The sliding window protocol traces its roots to early advancements in automatic repeat request (ARQ) techniques during the 1960s, driven by the need for efficient data transmission in high-latency environments such as satellite communications. Researchers explored pipelining multiple frames to overcome the inefficiencies of stop-and-wait, laying the theoretical groundwork for window-based mechanisms that allowed senders to transmit a sequence of packets without immediate acknowledgment. These efforts highlighted the challenges of long-distance links and influenced subsequent protocol designs.

A pivotal milestone occurred in the early 1970s with the French CYCLADES project, where Gérard Le Lann proposed the sliding window scheme for end-to-end error and flow control in packet-switched networks. Le Lann's 1973 presentation of simulation results demonstrated the mechanism's ability to manage reliable communications by dynamically adjusting the transmission window, directly inspiring international protocol developments. This work was instrumental in shaping the Transmission Control Protocol (TCP), as detailed in the seminal 1974 paper by Vinton Cerf and Robert Kahn, which incorporated a window strategy for inter-networking, allowing up to a specified number of bytes to be sent before acknowledgment while supporting variable window sizes up to half the sequence space. In parallel, practical implementations emerged in the ARPANET experiments around 1970, where initial host-to-host protocols began incorporating ARQ elements, evolving toward windowed operation by the mid-1970s with early deployments that enabled pipelined data transfer across heterogeneous networks.

The protocol gained standardization through IBM's Synchronous Data Link Control (SDLC) in the mid-1970s, which introduced sliding window support for Systems Network Architecture (SNA) environments, emphasizing bit-oriented framing and sequence numbering for reliable link-layer transmission. This was followed by the adoption of High-Level Data Link Control (HDLC) by the International Organization for Standardization (ISO) in 1979, standardizing the sliding window for balanced and unbalanced modes in ISO 4335 and ISO 3309. Further refinements appeared in related standards such as X.25 (1976), where the Link Access Procedure, Balanced (LAPB), a subset of HDLC, integrated sliding window ARQ for virtual circuit-based packet switching in wide-area networks, supporting window sizes up to 7 or 127 frames.

Over time, the protocol evolved to accommodate larger windows in modern implementations, shifting from fixed sizes in early HDLC variants to dynamic sizing in TCP, where receivers advertise variable windows based on buffer availability. This adaptability was enhanced by TCP window scaling in RFC 1323 (1992), allowing effective windows up to 1 GiB through a scaling factor, addressing high bandwidth-delay product links in contemporary networks.

Motivation and Fundamentals

Need for Efficient Data Transfer

In early data transmission protocols, such as stop-and-wait, the sender transmits a single packet and must await an acknowledgment before sending the next one, resulting in significant idle time on the transmission link during the round-trip time (RTT). This approach leads to low throughput, particularly in networks with high latency, where the propagation delay dominates the packet transmission time. For instance, on satellite links with RTTs exceeding 500 ms, the sender remains idle for most of the time, utilizing only a fraction of the available bandwidth, often less than 1%, in scenarios where packet transmission is brief compared to the delay. Similarly, wide-area networks (WANs) with RTTs around 100 ms exhibit comparable inefficiencies, making stop-and-wait impractical for modern high-speed links.

The bandwidth-delay product (BDP) quantifies this inefficiency by representing the amount of data that must be "in flight" to fully utilize the network capacity, calculated as the product of the link bandwidth and the minimum RTT. In low-delay environments, such as local area networks (LANs) with RTTs under 1 ms, stop-and-wait can achieve near-full throughput since the BDP is small relative to the packet size. However, in high-delay settings like satellite links or transcontinental WANs, the BDP can reach gigabits, far exceeding a single packet's size, leaving the link underutilized without mechanisms to pipeline multiple packets. Sliding window protocols address this by allowing the sender to transmit a configurable number of unacknowledged packets, effectively matching the window size to the BDP and enabling continuous data flow without excessive waiting.

Sliding windows incorporate flow control to prevent receiver buffer overflow by dynamically advertising available buffer space, distinct from the error control mechanisms that handle losses through retransmissions. This separation ensures efficient pipelining: the window limits outstanding data to the receiver's capacity while permitting selective or cumulative acknowledgments for error recovery, avoiding the need to halt transmission entirely after errors. In practice, this dual role enhances throughput in bandwidth-constrained, high-latency networks by balancing sender aggression with receiver readiness, as exemplified in protocols like TCP, where window scaling accommodates varying BDPs.
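As a rough, back-of-the-envelope illustration (assuming 1500-byte packets and arbitrary example link parameters, not measured values), the BDP and the corresponding stop-and-wait utilization can be computed directly:

```python
# Back-of-the-envelope comparison: stop-and-wait vs. pipelining on different links.

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: data that must be in flight to fill the pipe."""
    return bandwidth_bps * rtt_s / 8

def stop_and_wait_utilization(packet_bytes, bandwidth_bps, rtt_s):
    """Fraction of time the link carries data when only one packet is outstanding."""
    t_x = packet_bytes * 8 / bandwidth_bps   # transmission time of one packet
    return t_x / (t_x + rtt_s)

for name, bw, rtt in [("LAN", 10e6, 0.0005),
                      ("WAN", 100e6, 0.100),
                      ("satellite", 10e6, 0.550)]:
    print(name,
          "BDP = %.0f bytes," % bdp_bytes(bw, rtt),
          "stop-and-wait utilization = %.4f" % stop_and_wait_utilization(1500, bw, rtt))
```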

Core Principles of Windowing

The sliding window protocol operates on the principle of maintaining synchronized windows at both the sender and receiver to enable efficient, pipelined transmission while ensuring reliability. The sender's window defines the maximum number of unacknowledged packets that can be outstanding, limited by the receiver's advertised window (rwnd), which indicates the receiver's available buffer space. This approach allows the sender to transmit multiple packets without waiting for individual acknowledgments, sliding the window forward as acknowledgments arrive to reflect progress.

Acknowledgments in the sliding window protocol confirm receipt of data in ranges of sequence numbers, with two primary strategies: cumulative acknowledgments, which signal the highest consecutive sequence number received correctly, thereby implicitly acknowledging all prior packets in the window; and selective acknowledgments, which explicitly identify non-contiguous ranges of successfully received packets, allowing the sender to retransmit only missing ones. Cumulative ACKs provide simplicity and robustness in basic implementations by assuming in-order delivery up to the acknowledged point, while selective ACKs enhance efficiency in scenarios with scattered losses by pinpointing gaps without requiring full window retransmissions.

The protocol relies on sequence numbers to maintain ordering within the current window, buffering out-of-order arrivals at the receiver until gaps are filled. Losses are detected through timeouts or, in some variants, negative acknowledgments or other indications of missing packets, triggering targeted recovery without full window resets. A single retransmission timer, typically set for the oldest unacknowledged packet and based on the estimated round-trip time plus variance, ensures timely detection of losses across the window, restarting upon new transmissions to cover the entire outstanding set.
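The practical difference between the two acknowledgment styles can be sketched as follows (hypothetical Python; the helper names and the dictionary-based retransmission buffer are assumptions of this sketch, not part of any standard):

```python
# Sketch: how a sender's retransmission buffer reacts to the two ACK styles.
# `unacked` maps sequence number -> buffered packet data (illustrative only).

def apply_cumulative_ack(unacked, ack_num):
    """A cumulative ACK confirms every sequence number up to ack_num."""
    for seq in [s for s in unacked if s <= ack_num]:
        del unacked[seq]

def apply_selective_ack(unacked, acked_ranges):
    """A selective ACK confirms explicit, possibly non-contiguous ranges."""
    for lo, hi in acked_ranges:
        for seq in [s for s in unacked if lo <= s <= hi]:
            del unacked[seq]

unacked = {s: b"data" for s in range(10, 16)}      # packets 10..15 outstanding
apply_cumulative_ack(unacked, 12)                  # 10, 11, 12 confirmed
apply_selective_ack(unacked, [(14, 15)])           # 14, 15 confirmed; 13 still missing
print(sorted(unacked))                             # [13] -> only 13 needs retransmission
```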

Protocol Operation

Transmitter Procedures

In the sliding window protocol, the transmitter initializes the send window with a predefined size W, which determines the maximum number of unacknowledged data units (such as frames or segments) that can be outstanding at any time. The window is typically represented by a range of sequence numbers starting from the initial send sequence number (often 0 or a randomly chosen value to avoid ambiguity), and the transmitter buffers the corresponding data units in a retransmission buffer before transmission begins. The window size W is negotiated or set based on link capacity and error rates to balance throughput and reliability, ensuring the transmitter does not overwhelm the receiver.

Upon initialization, the transmitter begins sending units sequentially within the current window boundaries, advancing the next sequence number pointer (SND.NXT) for each transmitted unit while the left edge of the window (SND.UNA, the oldest unacknowledged sequence number) remains fixed until acknowledgments arrive. All sent but unacknowledged units are stored in a sender buffer to support potential retransmissions, with the buffer size at least equal to W to hold the entire window's worth of data. The transmitter continues transmitting new units as long as the right edge of the window (SND.UNA + W) has not been reached, thereby pipelining multiple transmissions without waiting for individual acknowledgments. This buffering ensures that lost or corrupted units can be recovered without discarding subsequent data.

When the transmitter receives a cumulative acknowledgment (ACK) from the receiver indicating successful delivery up to a specific sequence number, it advances the left edge of the send window (SND.UNA) past that acknowledged number, effectively sliding the window forward and freeing space for newly generated data. If the ACK covers multiple outstanding units, the transmitter removes the corresponding buffered data from the retransmission buffer and may expand the right edge if additional receiver buffer space is advertised. This sliding mechanism maintains efficient pipelining by allowing continuous transmission as long as the window does not shrink to zero due to insufficient receiver advertisement. The receiver's role in generating these ACKs is to confirm receipt of in-order data, enabling the transmitter to progress.

Retransmission at the transmitter is triggered primarily by timeout expiration: a timer associated with the oldest unacknowledged unit expires if no acknowledgment arrives within the estimated round-trip time plus variance, prompting the retransmission of buffered units starting from SND.UNA. Additionally, receipt of duplicate ACKs (indicating a gap in received sequence numbers) can trigger selective or go-back retransmissions of suspected lost units to accelerate recovery without waiting for the full timeout. Upon retransmission, the transmitter restarts the timer for the affected units and may adjust the window size if persistent losses are detected, ensuring reliability over error-prone channels while minimizing unnecessary delays. Buffer management during retransmissions involves retaining all unacknowledged units until explicitly acknowledged, preventing data loss and supporting the protocol's ordered delivery guarantees.
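A condensed sketch of these transmitter procedures is shown below (illustrative Python using the SND.UNA/SND.NXT naming from above; the channel object and its send method are hypothetical placeholders):

```python
import time

# Illustrative transmitter state for a sliding window sender. The `channel`
# argument is a hypothetical object exposing send(seq, data).

class Transmitter:
    def __init__(self, window, rto):
        self.window = window          # W: maximum unacknowledged units
        self.rto = rto                # retransmission timeout (seconds)
        self.snd_una = 0              # oldest unacknowledged sequence number
        self.snd_nxt = 0              # next sequence number to send
        self.retransmit_buf = {}      # seq -> data, kept until acknowledged
        self.timer_start = None       # timer covering the oldest outstanding unit

    def try_send(self, data, channel):
        # Send only while the window's right edge (snd_una + window) is not reached.
        if self.snd_nxt < self.snd_una + self.window:
            self.retransmit_buf[self.snd_nxt] = data
            channel.send(self.snd_nxt, data)
            if self.timer_start is None:
                self.timer_start = time.monotonic()
            self.snd_nxt += 1
            return True
        return False                  # window full: wait for ACKs

    def on_ack(self, ack):
        # Cumulative ACK: slide the left edge and release buffered data.
        while self.snd_una <= ack:
            self.retransmit_buf.pop(self.snd_una, None)
            self.snd_una += 1
        self.timer_start = None if self.snd_una == self.snd_nxt else time.monotonic()

    def on_tick(self, channel):
        # Timeout on the oldest unacknowledged unit: retransmit from snd_una onward.
        if self.timer_start and time.monotonic() - self.timer_start > self.rto:
            for seq in range(self.snd_una, self.snd_nxt):
                channel.send(seq, self.retransmit_buf[seq])
            self.timer_start = time.monotonic()
```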

Receiver Procedures

In the sliding window protocol, the receiver maintains a receive window to manage the flow of incoming packets, ensuring reliable and ordered data delivery to the upper layers. The receive window is defined by the next expected sequence number (often denoted R_n) and the receive window size (RWS), which together determine the range of acceptable sequence numbers from R_n to R_n + RWS - 1. Packets arriving with sequence numbers within this window are considered valid for processing, while those outside the window are discarded to prevent buffer overflow and maintain synchronization with the sender.

Upon receiving a packet, the receiver checks its sequence number against the current window position. If the sequence number matches R_n, the packet is accepted as the next in-order packet, delivered to the application, and the window slides forward by incrementing R_n. For out-of-order packets within the window, the receiver either buffers them (in protocols supporting selective acknowledgment, such as selective repeat) or discards them (in simpler variants like go-back-N), depending on the variant's trade-off between complexity and efficiency. Packets with sequence numbers preceding R_n or exceeding the upper bound are immediately discarded without buffering.

The receiver generates cumulative acknowledgments (ACKs) to inform the sender of progress, typically acknowledging the highest sequence number received in-order up to that point, which implicitly confirms all prior packets. These ACKs are sent promptly after accepting an in-order packet or upon receiving an out-of-window packet that prompts feedback. The cumulative nature allows the sender to slide its window based on the ACK, releasing buffered data at the sender once it has been acknowledged. In some designs, the ACK also includes the current receive window size to advertise available buffer space, preventing the sender from overwhelming the receiver.

Duplicates, identified by sequence numbers already acknowledged (i.e., less than or equal to the last acknowledged number), are ignored by the receiver to avoid redundant processing. The receiver simply discards the duplicate and may retransmit the last cumulative ACK to reinforce the acknowledgment state, aiding the sender in detecting losses without unnecessary retransmissions. This handling ensures robustness against network duplicates while minimizing overhead.
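A minimal receiver following these rules might look like the following sketch (hypothetical Python; the buffer_out_of_order flag is an invented switch to show the go-back-N versus selective-repeat behaviors side by side):

```python
# Illustrative receiver for a sliding window protocol with cumulative ACKs.
# Whether out-of-order packets are buffered (selective repeat) or dropped
# (go-back-N) is controlled by the `buffer_out_of_order` flag.

class Receiver:
    def __init__(self, window, buffer_out_of_order=True):
        self.rws = window            # receive window size (RWS)
        self.r_n = 0                 # next expected sequence number
        self.buffered = {}           # out-of-order packets held until gaps fill
        self.buffer_out_of_order = buffer_out_of_order
        self.delivered = []          # in-order data passed to the application

    def on_packet(self, seq, data):
        if seq < self.r_n or seq >= self.r_n + self.rws:
            return self.r_n - 1      # duplicate or out of window: re-ACK last in-order
        if seq == self.r_n:
            self.delivered.append(data)
            self.r_n += 1
            # Drain any buffered packets that are now in order.
            while self.r_n in self.buffered:
                self.delivered.append(self.buffered.pop(self.r_n))
                self.r_n += 1
        elif self.buffer_out_of_order:
            self.buffered[seq] = data
        # Cumulative ACK: highest in-order sequence number received so far.
        return self.r_n - 1

rx = Receiver(window=4)
print(rx.on_packet(0, "a"))   # 0 accepted -> ACK 0
print(rx.on_packet(2, "c"))   # buffered, gap at 1 -> still ACK 0
print(rx.on_packet(1, "b"))   # fills the gap, delivers 1 and 2 -> ACK 2
```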

Sequence Number Management

In sliding window protocols, sequence numbers are essential for distinguishing between new data packets and potential retransmissions, ensuring reliable identification amid possible delays or losses. These numbers are assigned to packets in a modular fashion, typically using arithmetic modulo M, where M represents the total sequence number space. If M is too small relative to the window size W, ambiguity arises because the sequence numbers of a new transmission window can overlap with those of a previous window that has not yet been fully acknowledged or cleared. For instance, consider a scenario where the sender transmits a window of W packets with sequence numbers 0 to W-1, all acknowledgments are lost, and the sender retransmits the same numbers; a receiver might confuse a delayed acknowledgment for the old window with one for the new, leading to incorrect window advancement or duplicate processing. To resolve such ambiguities, the sequence number space must be sufficiently large to separate consecutive windows uniquely.

In the Go-Back-N variant, where the receiver accepts only in-order packets and discards out-of-order ones (effectively a receive window of size 1), the minimum sequence number space S must satisfy S ≥ W + 1. This ensures that the sender's window of W outstanding packets can be distinguished from potential duplicates of the previous window, preventing the receiver from accepting old packets as new after the window slides. The additional +1 accounts for distinguishing the boundary between the acknowledged packets and the current window across wrap-arounds. Mathematically, this requirement can be expressed as

S \geq W + 1

for Go-Back-N, allowing the protocol to maintain unambiguous packet ordering without excessive buffering.

In contrast, the Selective Repeat variant, which permits the receiver to buffer out-of-order packets and request retransmissions only for lost ones, requires a larger sequence number allocation due to symmetric sender and receiver windows of size up to W. Here, the minimum sequence number space is S ≥ 2W, as the receiver must distinguish up to 2W potential packets: W in the current window and W from possible delayed duplicates of the prior window. This is derived from the need to avoid overlap in the receiver's buffer space, ensuring each sequence number uniquely identifies whether a packet belongs to the active window or an obsolete one. The requirement is

S \geq 2W

for Selective Repeat, enabling efficient selective acknowledgments without misinterpreting old packets as new.

These requirements highlight a trade-off in protocol design: a larger M increases header overhead but prevents errors in high-latency environments, as recommended in link-level ARQ guidelines. In practice, sequence numbers are implemented with a fixed bit length (e.g., 3 bits for M = 8 in HDLC), limiting W accordingly to meet these bounds.
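These bounds translate directly into the maximum usable window for a given sequence number field width, as in this small sketch (illustrative Python; the function name max_window is invented):

```python
# Maximum usable window size for a k-bit sequence number field, under the
# bounds discussed above (S >= W + 1 for Go-Back-N, S >= 2W for Selective Repeat).

def max_window(seq_bits, variant):
    space = 2 ** seq_bits                     # S: size of the sequence number space
    if variant == "go-back-n":
        return space - 1                      # W <= S - 1
    if variant == "selective-repeat":
        return space // 2                     # W <= S / 2
    raise ValueError(variant)

print(max_window(3, "go-back-n"))         # 7, as with HDLC's 3-bit modulo-8 numbering
print(max_window(3, "selective-repeat"))  # 4
```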

Protocol Variants

Stop-and-Wait as Baseline

The stop-and-wait protocol serves as the foundational variant of the sliding window mechanism, operating with a window size of one, where the transmitter sends a single frame and awaits acknowledgment before proceeding. In this approach, the transmitter dispatches one data frame containing a sequence number, typically using a 1-bit field alternating between 0 and 1 to distinguish consecutive transmissions. Upon transmission, the transmitter starts a timer; if a positive acknowledgment (ACK) for the expected sequence number arrives before the timer expires, the window slides forward by one position, allowing the next frame to be sent. This ensures reliable delivery over error-prone channels by confirming receipt of each frame individually.

Error handling in stop-and-wait relies on timeouts and retransmissions, without pipelining multiple frames. If the frame is lost, corrupted (detected via error-detection codes such as a CRC), or its acknowledgment fails to arrive in time, the timer expires, prompting the transmitter to retransmit the same frame with the original sequence number. The receiver, upon detecting an error or receiving an out-of-sequence frame, discards it and sends no new acknowledgment, further enforcing the wait state at the transmitter. This simplicity limits complexity but introduces inefficiency, particularly on links with significant propagation delays, as the channel remains idle during the wait for acknowledgments.

A key challenge addressed by sequence numbers in stop-and-wait is the ambiguity arising from delayed or duplicate ACKs. Consider a scenario with 1-bit sequence numbers: the transmitter sends frame 0, which arrives correctly, and the receiver responds with ACK 1 (acknowledging the next expected sequence). If this ACK is delayed while the transmitter times out and retransmits frame 0, the receiver, now expecting sequence 1, discards the duplicate frame 0 but still sends ACK 1. The delayed original ACK 1 may then arrive at the transmitter, which interprets it as confirmation for the retransmitted frame, allowing the window to slide correctly without delivering duplicates to the higher layer. Without sequence numbers, such delays could lead to unnecessary retransmissions or out-of-order delivery.

The throughput of stop-and-wait is fundamentally constrained by the round-trip time (RTT), yielding a maximum efficiency of \eta = \frac{1}{1 + 2a}, where a = \frac{t_p}{t_x} is the ratio of the propagation time t_p to the transmission time t_x of a frame (assuming negligible processing and queuing delays, and error-free operation for the upper bound). To derive this, note that each successful cycle occupies t_x for sending the frame and 2t_p for the round-trip propagation; thus, the utilization is the transmission time divided by the full cycle: \eta = \frac{t_x}{t_x + 2t_p} = \frac{1}{1 + 2a}. For example, on a link where a = 1 (i.e., the one-way propagation delay equals the frame transmission time), efficiency drops to 33%, underscoring the need for larger windows in high-delay environments.
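A quick numeric check of this bound (illustrative Python; the parameter values are arbitrary examples):

```python
# Numeric check of the stop-and-wait utilization bound eta = 1 / (1 + 2a).

def stop_and_wait_eta(t_p, t_x):
    a = t_p / t_x                    # ratio of propagation time to transmission time
    return 1.0 / (1.0 + 2.0 * a)

print(round(stop_and_wait_eta(t_p=1.0, t_x=1.0), 3))    # a = 1  -> 0.333 (33%)
print(round(stop_and_wait_eta(t_p=0.01, t_x=1.0), 3))   # a ~ 0  -> ~0.98 (short links)
```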

Go-Back-N Mechanism

The Go-Back-N mechanism is a sliding window protocol variant that allows the sender to transmit up to N unacknowledged packets (the window size) before requiring an acknowledgment, enabling pipelined transmission while the receiver accepts only in-order packets and discards any out-of-order ones. The receiver sends cumulative acknowledgments (ACKs) indicating the next expected sequence number (SN), effectively acknowledging all prior packets in sequence. For instance, with a window size N = 3, the sender might transmit packets SN = 0, 1, and 2; if the receiver gets SN = 0 and ACKs it with SN = 1 (the next expected number), the sender slides the window and may transmit SN = 3 next.

Loss detection occurs through timeouts at the sender or duplicate ACKs from the receiver, prompting retransmission of all packets from the point of the detected gap in sequence numbers. If a packet with SN = k is lost, the receiver discards subsequent packets (e.g., SN = k+1 onward) and continues sending ACKs for SN = k, signaling the gap; upon timeout, the sender retransmits the entire outstanding window starting from SN = k, ensuring simple recovery logic but potentially re-sending packets that had already arrived correctly. This "go-back" approach resets the window to the lost packet's position, blocking further advancement until the gap is resolved.

A key requirement for correct operation in Go-Back-N is a sequence number space of at least the window size plus one (modulus M ≥ N + 1). This ensures that sequence numbers do not wrap around in a way that causes ambiguity between new and old frames or ACKs, allowing the sender to properly distinguish and advance the window based on cumulative acknowledgments. With insufficient sequence numbers, delayed ACKs could be misinterpreted, though the protocol's design with a receiver window size of 1 and cumulative ACK handling avoids such overlap up to the maximum window of 2^k - 1 for k-bit sequence numbers.

The Go-Back-N mechanism offers simplicity in implementation, particularly at the receiver, which needs only to track the next expected sequence number and send cumulative ACKs, making it easier to deploy than more complex variants. However, it trades efficiency for this ease, as burst errors require retransmitting the entire window, including successfully received packets after the loss, wasting bandwidth and eroding the throughput gains over stop-and-wait on error-prone channels.
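The go-back retransmission behavior can be sketched as follows (hypothetical Python; the class name GoBackNSender and the list-based channel are invented for illustration):

```python
# Sketch of Go-Back-N retransmission behaviour: on timeout, everything from the
# oldest unacknowledged packet onward is resent (names are illustrative).

class GoBackNSender:
    def __init__(self, n):
        self.n = n                 # window size N
        self.base = 0              # oldest unacknowledged sequence number
        self.next_seq = 0
        self.buffer = {}           # outstanding packets, seq -> data

    def send(self, data, channel):
        if self.next_seq < self.base + self.n:
            self.buffer[self.next_seq] = data
            channel.append(("DATA", self.next_seq, data))
            self.next_seq += 1

    def on_ack(self, next_expected):
        # Cumulative ACK carries the receiver's next expected sequence number.
        while self.base < next_expected:
            self.buffer.pop(self.base, None)
            self.base += 1

    def on_timeout(self, channel):
        # Go back N: retransmit the whole outstanding window from `base`.
        for seq in range(self.base, self.next_seq):
            channel.append(("DATA", seq, self.buffer[seq]))

channel = []
tx = GoBackNSender(n=3)
for d in "abc":
    tx.send(d, channel)        # packets 0, 1, 2 in flight
tx.on_ack(1)                   # receiver expects 1 next: packet 0 confirmed
tx.on_timeout(channel)         # suppose 1 was lost: resend 1 and 2
print([seq for kind, seq, _ in channel])   # [0, 1, 2, 1, 2]
```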

Selective Repeat Approach

The selective repeat approach to the sliding window protocol is a variant of automatic repeat request (ARQ) that enhances efficiency by allowing the receiver to buffer out-of-order packets and request retransmissions only for those that are lost or corrupted, rather than retransmitting entire blocks. In this mechanism, the sender transmits frames within its send window, and the receiver accepts and buffers correctly received frames even if they arrive out of sequence, up to the size of its receive window. Upon detecting gaps in the sequence, the receiver sends individual negative acknowledgments (NAKs) or selective acknowledgments (SACKs) to indicate the specific missing frames, enabling the sender to retransmit solely the affected packets without interrupting the flow of new data. This selective recovery minimizes unnecessary retransmissions, making it particularly suitable for channels with moderate to high error rates.

On the sender side, upon receiving a NAK or a SACK revealing a gap, the protocol resends only the requested frames while continuing to advance its send window past those that have been individually acknowledged, thereby maintaining pipeline efficiency. The receiver delivers buffered packets to the upper layer in sequence order once gaps are filled, discarding duplicates if they arrive. Both sender and receiver must maintain buffers to hold unacknowledged or out-of-order frames, with buffer sizes typically matching the window to accommodate the bandwidth-delay product of the link. This approach contrasts with the cumulative acknowledgments used in simpler variants by providing granular feedback that reduces retransmission overhead.

A key requirement for correct operation is a sequence number space of at least twice the window size (2W), ensuring that the sender and receiver can distinguish between new frames and duplicates from previous windows after wrap-around. Buffering at both ends is essential to manage out-of-order arrivals and pending retransmissions. For instance, consider a sequence number space of 8 (numbers 0 through 7) and a window size of 4: the sender might transmit frames 0, 1, 2, and 3. If frame 2 is lost and reported via a NAK, the sender retransmits only frame 2 and, as the other frames are individually acknowledged, advances to send frames 4, 5, and 6. As the window slides to cover 7, 0, 1, and 2, a delayed duplicate of the original frame 0 could arrive and be mistaken for the new frame 0 unless the sequence space prevents overlap between the active and retired windows. With a space of exactly 2W, this minimum avoids such ambiguity by ensuring that no two frames with the same number are outstanding simultaneously.
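A sender-side sketch of this selective behavior, mirroring the window-of-4, space-of-8 example above (hypothetical Python; all names are illustrative):

```python
# Sketch of selective repeat on the sender side: only frames reported missing
# are retransmitted, while individually acknowledged frames are released.

class SelectiveRepeatSender:
    def __init__(self, window, seq_space):
        assert seq_space >= 2 * window, "need at least 2W sequence numbers"
        self.window = window
        self.seq_space = seq_space        # M: sequence numbers are taken modulo M
        self.outstanding = {}             # seq -> data awaiting individual ACK
        self.next_seq = 0

    def send(self, data, channel):
        if len(self.outstanding) < self.window:
            seq = self.next_seq
            self.outstanding[seq] = data
            channel.append(seq)
            self.next_seq = (self.next_seq + 1) % self.seq_space

    def on_sack(self, seq):
        self.outstanding.pop(seq, None)   # that frame alone is confirmed

    def on_nak(self, seq, channel):
        if seq in self.outstanding:
            channel.append(seq)           # retransmit only the missing frame

channel = []
tx = SelectiveRepeatSender(window=4, seq_space=8)
for d in "abcd":
    tx.send(d, channel)                   # frames 0..3 in flight
tx.on_nak(2, channel)                     # receiver reports frame 2 missing
for seq in (0, 1, 3):
    tx.on_sack(seq)                       # 0, 1, 3 individually acknowledged
for d in "efg":
    tx.send(d, channel)                   # window advances: frames 4..6 sent
print(channel)                            # [0, 1, 2, 3, 2, 4, 5, 6]
```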

Performance and Analysis

Window Size Optimization

The optimal window size in sliding window protocols is primarily determined by the bandwidth-delay product (BDP), defined as the product of the available bandwidth and the round-trip time (RTT), which represents the amount of data that can be in transit while fully utilizing the path. To achieve maximum throughput without underutilizing the link, the window size W should be at least the BDP divided by the packet size, ensuring the sender can keep the "pipe" filled with outstanding packets during the RTT. For example, on a 10 Mbps link with a 100 ms RTT, the BDP is 125 KB, so if packets are 1 KB each, W \geq 125 packets would be needed to saturate the link.

Error rates and buffer constraints further influence window sizing, as higher loss probabilities increase the risk of retransmissions and inefficient resource use, sometimes favoring smaller windows to limit wasted transmissions. In error-prone environments, the optimal window balances the increased throughput of larger sizes against higher rates of spurious transmissions and rejected duplicates, and is often derived from models incorporating the loss probability L and the RTT T. Buffer limits at the sender and receiver impose practical caps, particularly in designs that avoid out-of-order buffering to simplify sequence management and reduce memory demands.

Dynamic window sizing adapts to varying network conditions through mechanisms like slow start, which begins with a small window and exponentially increases it to probe available capacity, and congestion avoidance, which linearly adjusts the window to maintain stability once the optimal size is approached. These strategies prevent overload by responding to implicit feedback such as delays or losses, allowing the window to evolve without prior knowledge of the exact network capacity. Larger windows enhance efficiency in high-BDP scenarios but introduce trade-offs, including a heightened risk of sequence number ambiguity during losses or reordering, which demands a larger sequence number space, typically at least twice the window size, to distinguish new from retransmitted packets. This requirement scales with window growth, potentially limiting applicability in constrained systems.
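The dynamic sizing described above can be caricatured with a toy model (illustrative Python only; the thresholds and the loss on a fixed round are arbitrary assumptions, and real TCP implementations differ in detail):

```python
# Toy model of dynamic window sizing: exponential growth in slow start, linear
# growth in congestion avoidance, multiplicative decrease on loss. This mirrors
# the high-level description above, not any specific TCP implementation.

def adjust_window(cwnd, ssthresh, loss):
    if loss:
        ssthresh = max(cwnd // 2, 2)                 # back off: halve the threshold
        return 1, ssthresh                           # restart probing from a small window
    if cwnd < ssthresh:
        return min(cwnd * 2, ssthresh), ssthresh     # slow start: double each RTT
    return cwnd + 1, ssthresh                        # congestion avoidance: +1 per RTT

cwnd, ssthresh = 1, 16
trace = []
for rtt in range(12):
    loss = (rtt == 7)                                # pretend a loss is detected on RTT 7
    cwnd, ssthresh = adjust_window(cwnd, ssthresh, loss)
    trace.append(cwnd)
print(trace)   # growth, a cut after the simulated loss, then renewed growth
```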

Throughput and Efficiency Metrics

The throughput of sliding window protocols is typically expressed as the efficiency η multiplied by the link rate R, where η represents the fraction of time the link is utilized for useful data transmission. In ideal conditions with no errors or losses, the efficiency of stop-and-wait (equivalent to a sliding window of size W = 1) is given by \eta = \frac{1}{1 + 2a}, where a = t_p / t_t is the ratio of the one-way propagation delay t_p to the packet transmission time t_t. This formula arises from the total time per packet being t_t + 2t_p (transmission plus round-trip propagation), with only t_t carrying useful data. For general sliding window protocols with sender window size W, the efficiency becomes

\eta = \min\left(1, \frac{W}{1 + 2a}\right).

This follows from the sender transmitting W packets continuously, taking time W t_t, while waiting a full cycle of t_t + 2t_p for the first acknowledgment; if W t_t < t_t + 2t_p, the link idles, yielding the ratio above, while larger W saturates the link at full utilization. Both Go-Back-N and Selective Repeat achieve this ideal performance equivalently under error-free assumptions, approaching η = 1 for W large enough to cover the bandwidth-delay product.

Under error conditions with packet error probability P (assuming independent errors and negligible ACK errors), throughput degrades due to retransmissions. For Go-Back-N, the efficiency is approximately

\eta = \frac{1}{1 + \frac{N P}{1 - P}},

where N is the window size (often set to cover the round-trip time). This incorporates the propagation delay implicitly when N ≥ 1 + 2a. A more detailed form accounting for the delay is

\eta = \frac{1 - P}{1 + 2a \left(1 + \frac{(N-1) P}{1 - P}\right)},

reflecting reduced initial utilization from the delay and increased retransmissions. The expected number of transmissions per successfully delivered packet in Go-Back-N follows from the geometric distribution of attempts: each packet requires on average 1/(1 - P) attempts, and every failed attempt forces the retransmission of up to N packets, so the expected number of transmissions per packet is approximately 1 + \frac{N P}{1 - P}. Including the timeout and ACK delays (2 t_p + t_t per attempt), the total expected time per successful packet scales the denominator accordingly.

For Selective Repeat, efficiency is higher, approximately η ≈ 1 - P for low P, as only erroneous packets are retransmitted individually, without affecting others in the window. The derivation computes the expected transmissions per packet as 1/(1 - P), since each packet succeeds independently with geometric retries, and the retransmission delay of 2 t_p + t_t applies only to failures; for large windows covering the delay, the overhead is minimal, yielding near-full utilization minus the error rate. This analysis assumes buffering at the receiver to reorder packets.

Performance analyses and simulations highlight Selective Repeat's superiority in high-error environments (P > 0.01), where Go-Back-N drops sharply due to cumulative retransmissions: for instance, at P = 0.05 and N = 10, Go-Back-N achieves η ≈ 0.66 while Selective Repeat maintains η ≈ 0.95, demonstrating reduced waste and better scalability for error-prone links such as wireless channels.
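The error-case formulas can be evaluated directly to reproduce the comparison above (illustrative Python; N = 10 and the listed error probabilities are example values):

```python
# Approximate efficiency of Go-Back-N vs. Selective Repeat under a packet error
# probability P, using the formulas given above (window N assumed to cover the RTT).

def gbn_efficiency(p, n):
    return 1.0 / (1.0 + n * p / (1.0 - p))

def sr_efficiency(p):
    return 1.0 - p

for p in (0.001, 0.01, 0.05):
    print(f"P={p}: Go-Back-N {gbn_efficiency(p, n=10):.2f}, "
          f"Selective Repeat {sr_efficiency(p):.2f}")
# At P=0.05 and N=10 this reproduces the ~0.66 vs ~0.95 comparison above.
```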

Extensions and Applications

Advanced Protocol Features

Advanced sliding window protocols incorporate mechanisms to address network congestion, ensuring stable performance in dynamic environments. Congestion avoidance integrates with algorithms that dynamically adjust the congestion window based on detected loss. For instance, TCP Tahoe resets the congestion window to one segment upon loss detection via timeout or duplicate acknowledgments, entering slow start to probe available bandwidth gradually. This approach, introduced in early implementations, prevents global synchronization by avoiding aggressive retransmissions. TCP Reno refines this by distinguishing between timeout-detected losses and those inferred from three duplicate acknowledgments; in the latter case, it halves the window without a full slow start, enabling faster recovery while maintaining fairness. These adjustments treat the sliding window as a congestion signal, multiplicatively decreasing it on loss to back off from overload while additively increasing it during avoidance phases.

Zero-window probing addresses scenarios where the receiver advertises a zero receive window, stalling the sender's transmission. In such cases, the sender initiates periodic probes, typically carrying one data byte, to check whether the receiver has freed buffer space without violating flow control. The probing interval starts at roughly one round-trip time (RTT) and doubles with each unanswered probe up to a maximum of about 60 seconds, with normal transmission resuming upon receipt of a non-zero window advertisement. This mechanism prevents indefinite stalls, as the receiver must respond to probes even with a zero window, ensuring the connection remains viable. Clarifications in later specifications emphasize that probes carry minimal payload to minimize overhead while reliably detecting window updates.

Extensions for multicast adapt the sliding window to support reliable one-to-many delivery, where feedback from multiple receivers complicates traditional assumptions. The Reliable Multicast Protocol (RMP) modifies the sliding window scheme for group communications by maintaining per-receiver windows at the sender and aggregating acknowledgments to advance the overall window conservatively. This prevents overflow from stragglers while allowing pipelined transmission; negative acknowledgments (NAKs) from receivers trigger selective repairs, with the window size tuned to balance latency and throughput in lossy trees. Such adaptations ensure ordered delivery across the group without centralized coordination, scaling to larger sessions by damping feedback implosion through probabilistic suppression.

Security features leverage the sequence numbers inherent to sliding windows for replay protection, particularly in protocols like IPsec. Sequence numbers provide a monotonically increasing counter per security association, enabling receivers to maintain a replay window that discards packets with numbers falling outside the expected range. In IPsec's Encapsulating Security Payload (ESP) and Authentication Header (AH), this anti-replay service verifies packet freshness; if a sequence number repeats or precedes the window's left edge, the packet is dropped to thwart attacks where adversaries resend captured data. For extended lifetimes, 64-bit extended sequence numbers mitigate rollover risks, ensuring robustness without compromising the protocol's efficiency. This integration preserves the sliding window's ordering guarantees while enhancing resilience against manipulation in secure tunnels.
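The anti-replay check can be sketched as a small bitmap-based sliding window (a simplified illustration in Python, not the algorithm text of any RFC; the 64-entry window size is an example):

```python
# Sketch of a sequence-number-based anti-replay window, in the spirit of the
# IPsec mechanism described above (a simplified 64-entry bitmap).

class ReplayWindow:
    def __init__(self, size=64):
        self.size = size
        self.top = 0          # highest sequence number accepted so far
        self.bitmap = 0       # bit i set => (top - i) already seen

    def check_and_update(self, seq):
        if seq > self.top:                      # new highest: slide the window right
            shift = seq - self.top
            self.bitmap = ((self.bitmap << shift) | 1) & ((1 << self.size) - 1)
            self.top = seq
            return True
        offset = self.top - seq
        if offset >= self.size:                 # older than the window: reject
            return False
        if (self.bitmap >> offset) & 1:         # already seen: replay, reject
            return False
        self.bitmap |= (1 << offset)            # mark as seen and accept
        return True

w = ReplayWindow()
print(w.check_and_update(5))   # True  (new)
print(w.check_and_update(3))   # True  (late but inside the window)
print(w.check_and_update(3))   # False (replayed)
```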

Implementations in Modern Networks

The Transmission Control Protocol (TCP) employs sliding window mechanisms as its core technique for reliable data transport over IP networks, allowing senders to transmit multiple segments before receiving acknowledgments while dynamically adjusting the window size based on receiver capacity and network conditions. To enhance efficiency in scenarios with packet losses, TCP incorporates Selective Acknowledgment (SACK), which enables receivers to report non-contiguous blocks of received data, permitting selective retransmissions rather than retransmission of entire windows; this extension, defined in RFC 2018, significantly reduces recovery time in high-loss environments. Additionally, TCP supports window scaling to accommodate high bandwidth-delay product links, where the advertised window is multiplied by a negotiated shift value during connection setup, extending the effective window up to 1 GiB and improving throughput on modern gigabit networks.

Beyond TCP, sliding window principles appear in derivatives of High-Level Data Link Control (HDLC) used within the Point-to-Point Protocol (PPP), where the HDLC-like framing of RFC 1662 provides byte- and bit-oriented encapsulation with optional sliding window modes for reliable delivery over serial links, commonly applied in dial-up and leased-line connections. In QUIC, a UDP-based transport standardized in RFC 9000, selective repeat mechanisms integrate stream-level flow control with offset-based acknowledgments and maximum data limits, functioning as per-stream sliding windows that prevent receiver overload while mitigating head-of-line blocking across multiplexed streams. Similarly, 5G New Radio (NR) employs hybrid automatic repeat request (HARQ) at the MAC layer, as specified in 3GPP TS 38.321, where up to 16 parallel HARQ processes form a reordering window for asynchronous retransmissions, combining soft combining with selective repeats to achieve low-latency reliability in radio access networks.

Contemporary networks face challenges in applying sliding windows amid variable round-trip times (RTTs), particularly in mobile environments where handoffs and queuing cause fluctuations exceeding 100 ms, necessitating adaptive window sizing to avoid stalls and buffer overflows, as analyzed in studies of TCP over cellular links. Integration with Multipath TCP (MPTCP), per RFC 6824, addresses multi-homing by maintaining a shared window across subflows, allowing aggregate throughput gains while each path operates its own sliding window, which benefits heterogeneous access such as Wi-Fi and cellular aggregation. In web traffic scenarios, TCP's sliding window achieves near-line-rate efficiency on low-latency terrestrial links (RTT < 50 ms), supporting bursty HTTP/HTTPS downloads with window sizes scaling to 64 KB or more and yielding throughputs over 100 Mbps in controlled tests. Conversely, over satellite links with RTTs up to 600 ms, such as geostationary orbits, standard windows underutilize bandwidth-delay products exceeding 1 MB, resulting in throughputs dropping to 10-20% of capacity without extensions like SACK or window scaling, as demonstrated in evaluations where Hybla-tuned variants recover up to 80% utilization by advancing windows more aggressively.
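A rough calculation illustrates why window scaling matters on long-delay paths (illustrative Python; the link speeds, RTTs, and the shift value of 7 are arbitrary example figures):

```python
# Rough illustration of why window scaling matters on long-delay paths: the
# classic 16-bit TCP window caps at 65535 bytes unless scaled.

def effective_window(advertised, scale_shift):
    return advertised << scale_shift          # advertised window * 2^shift

def bdp_bytes(bandwidth_bps, rtt_s):
    return bandwidth_bps * rtt_s / 8

print("unscaled 16-bit window:", effective_window(65535, 0), "bytes")
print("GEO satellite BDP at 20 Mbps, 600 ms RTT:", bdp_bytes(20e6, 0.600), "bytes")
print("window with scale shift 7:", effective_window(65535, 7), "bytes")
```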

References

  1. 2.5 Reliable Transmission - Computer Networks: A Systems Approach
  2. RFC 793 - Transmission Control Protocol - IETF Datatracker
  3. 6.263/16.37: Lectures 3 & 4, The Data Link Layer: ARQ Protocols - MIT
  4. A Brief History of the Internet - Internet Society
  5. HDLC Protocol Overview - freesoft.org
  6. Data Link Layer, Part 5: Sliding Window Protocols
  7. 8 Abstract Sliding Windows - An Introduction to Computer Networks
  8. Sliding Window Protocol - MIT
  9. The Internet turns 40! - Inria
  10. A Protocol for Packet Network Intercommunication - cs.Princeton
  11. What is Synchronous Data Link Control (SDLC)? - TechTarget
  12. Reliable Data Transport Protocols - MIT OpenCourseWare
  13. Lecture 7: Flow Control - UCSD CSE
  14. CSC458 – Lecture 7: Sliding Windows, ARQ Connections
  15. 15-441 Computer Networking: Transport Protocols
  16. Lecture 7: Sliding Windows
  17. Sliding Window Protocol - an overview - ScienceDirect Topics
  18. Data Link Control Protocols
  22. RFC 3366 - Advice to link designers on link Automatic Repeat reQuest (ARQ)
  23. ECE 333: Introduction to Communication Networks, Fall 2002, Lecture 9
  24. Flow Control - University of Washington
  25. Advice to link designers on link Automatic Repeat reQuest (ARQ)
  26. Throughput Performance of Data-Communication Systems Using ...
  27. Go Back N and SRP - MIT OpenCourseWare
  28. III. Link Layer ARQ Protocols: 1 Automatic Repeat reQuest ... - MEWS
  29. Chapter 3: Datalink Layer
  30. You Don't Know Jack about Network Performance - ACM Queue
  31. Optimal Sliding-Window Strategies in Networks with Long Round ...
  32. CS 336 Lecture Notes -- Performance of Sliding Window Protocols
  33. Congestion avoidance and control - ACM Digital Library
  34. RFC 5681 - TCP Congestion Control - IETF Datatracker
  35. RFC 9293: Transmission Control Protocol (TCP)
  36. RFC 6429 - TCP Sender Clarification for Persist Condition
  37. Reliable Multicast Protocol Specifications: Flow Control and ...
  38. RFC 4304: Extended Sequence Number (ESN) Addendum to IPsec ...
  39. RFC 793 - Transmission Control Protocol (TCP) - IETF
  40. RFC 2018 - TCP Selective Acknowledgment Options
  41. RFC 1323 - TCP Extensions for High Performance - IETF
  42. RFC 9000 - QUIC: A UDP-Based Multiplexed and Secure Transport
  43. ETSI TS 138 321 V17.7.0 (2024-02)
  45. RFC 6824 - TCP Extensions for Multipath Operation with Multiple ...
  46. TCP Performance over Satellite Channels - UC Berkeley EECS
  47. TCP performance over geostationary satellite links - IEEE Xplore