
TCP congestion control

TCP congestion control refers to the set of algorithms implemented within the Transmission Control Protocol (TCP) to detect and mitigate network congestion by dynamically adjusting the rate at which data is transmitted, thereby preventing the network from becoming overwhelmed and ensuring efficient resource utilization. These algorithms collectively manage the sender's congestion window—a variable that limits the amount of unacknowledged data in flight—to probe for available bandwidth while responding to signs of congestion such as packet loss or increased delay. The foundational algorithms of TCP congestion control, as standardized in RFC 5681, include slow start, which exponentially increases the congestion window during the initial transmission phase or after a timeout to quickly utilize available bandwidth; congestion avoidance, which linearly increases the window to probe for additional capacity without risking overload; fast retransmit, which prompts immediate retransmission of lost packets upon detection of three duplicate acknowledgments; and fast recovery, which temporarily inflates the window to maintain throughput during retransmission while avoiding a full slow start. These mechanisms operate in tandem, transitioning between phases based on events like timeouts or duplicate ACKs, with the slow start threshold (ssthresh) delineating the boundary between slow start and congestion avoidance modes. The development of these algorithms addressed severe congestion collapses on the early Internet in the mid-1980s, where exponential retransmission backoffs and lack of rate control led to widespread packet drops and near-total throughput loss. In a seminal 1988 paper, Van Jacobson proposed the core ideas of slow start and congestion avoidance, introducing a host-based approach that estimates available bandwidth through additive increase and multiplicative decrease (AIMD) principles, which became the basis for subsequent standards.
Fast retransmit and fast recovery were later integrated to improve responsiveness to mild congestion without invoking slow start, as formalized in RFC 2581 (obsoleted by RFC 5681 in 2009). Over time, TCP congestion control has evolved to accommodate diverse network conditions, with variants like Reno (a common implementation incorporating all four algorithms), NewReno (enhancing fast recovery for multiple losses), and more recent algorithms such as CUBIC (designed for high-bandwidth-delay product networks using a cubic probing function) and BBR (which models bottleneck bandwidth and round-trip propagation time for delay-based control). These extensions maintain compatibility with standard TCP while optimizing performance in modern environments, including wireless links and data centers, but all adhere to the end-to-end principle, under which the endpoints alone manage congestion signals.

Fundamentals

Network Congestion

Congestion in IP networks occurs when the aggregate arrival rate of packets at a router exceeds its outgoing link capacity, leading to queue overflow in the router's buffers. This overflow results in packet drops, as incoming packets are discarded when buffers reach their limit, causing data loss for affected flows. Consequently, congestion manifests as increased latency due to queuing and reduced overall throughput, as the network's effective capacity diminishes under sustained overload. One prominent effect of congestion is bufferbloat, where excessively large buffers in routers and other devices exacerbate latency by holding packets for prolonged periods during overload, even after the immediate burst of traffic subsides. This leads to poor interactive performance for applications like voice and video, as delays accumulate without timely feedback to senders. Another impact is global synchronization, in which multiple concurrent flows experience simultaneous packet losses from a shared congested link, prompting them to reduce their transmission rates in unison and creating oscillatory underutilization of the link's capacity. Congestion can also induce unfairness in bandwidth sharing among competing flows, where some flows may capture a disproportionate share of resources due to timing or burstiness differences, leaving others starved or severely throttled. A historical example of severe congestion's consequences is the 1986 Internet congestion collapse, during which exponential retransmissions by hosts in response to widespread packet losses caused network throughput to plummet to near zero for extended periods, highlighting the risks of unchecked retransmission growth. Basic queue management in routers, such as First-In-First-Out (FIFO) scheduling with tail-drop, plays a central role in congestion dynamics but often worsens the problem. In FIFO queues, packets are served in arrival order until the buffer fills, at which point new arrivals are dropped indiscriminately, leading to bursty losses that synchronize flows and promote unfairness among them.
This simple mechanism, while prevalent in early IP routers, lacks proactive signaling of impending congestion, amplifying issues like global synchronization. To mitigate these effects, protocols like TCP employ end-to-end congestion control, where endpoints infer and respond to congestion signals without router assistance.

Goals and Principles

TCP congestion control aims to maximize the throughput of data transmission across networks while ensuring efficient utilization of available bandwidth. A primary objective is to prevent congestion collapse, a state where excessive packet drops lead to retransmissions that further exacerbate network overload, potentially rendering the network unusable. Additionally, it seeks to minimize packet loss and delay, promoting reliable and timely delivery without overwhelming intermediate routers. These goals are essential for maintaining stable performance in shared environments like the Internet. Central principles guiding TCP congestion control include end-to-end responsibility, where endpoints detect and respond to congestion without relying on network assistance, as formalized in RFC 2581. This approach uses implicit signals such as packet loss and delay to infer network conditions, avoiding the need for explicit mechanisms that could introduce overhead or single points of failure. TCP employs conservative assumptions about the network state, starting with low sending rates and gradually increasing them to probe for available capacity, thereby reducing the risk of sudden overloads. Key trade-offs in TCP congestion control balance utilization against stability, where aggressive increases in sending rates can achieve higher throughput but risk instability through oscillations in queue lengths. Another trade-off exists between responsiveness to changing conditions and fairness among competing flows, as rapid adjustments may allow one flow to dominate resources at the expense of others. These considerations ensure that congestion control mechanisms promote equitable sharing of bandwidth while adapting to dynamic topologies. The evolution of TCP congestion control began with RFC 793 in 1981, which focused primarily on flow control between sender and receiver but lacked mechanisms to address network-wide congestion. This limitation became evident during the late 1980s when Internet growth led to frequent collapses, prompting the development of congestion-aware designs.
Van Jacobson's seminal work introduced foundational algorithms that shifted toward proactive avoidance, marking a transition from basic reliability to robust network stability.

Congestion Window

The congestion window (cwnd) is a state variable that determines the maximum amount of outstanding unacknowledged data, measured in bytes, that the sender is permitted to transmit into the network at any given time. It serves as the sender's estimate of the available capacity along the path, thereby preventing the network from becoming overwhelmed by excessive traffic. Unlike the receiver window (rwnd), which is advertised by the receiver to indicate its buffer capacity, cwnd is maintained solely by the sender to enforce network-wide congestion control. The effective sending window in TCP is calculated as the minimum of the congestion window and the receiver window: effective window = min(cwnd, rwnd). This ensures that the sender respects both the receiver's capacity and the inferred network capacity. In modern TCP implementations, the initial value of cwnd, known as the initial window (IW), is typically set to 10 times the maximum segment size (MSS), or approximately 14.6 KB assuming a standard MSS of 1460 bytes. The window is dynamically adjusted based on feedback from the network: it increases during periods without congestion to probe for additional available capacity and decreases upon detection of congestion signals such as packet loss or increased delay. In the congestion avoidance phase, cwnd grows linearly by roughly 1 MSS per round-trip time (RTT), which can be implemented by adding 1 MSS to cwnd upon receipt of a full cwnd's worth of acknowledgments covering new data: \text{cwnd} \leftarrow \text{cwnd} + 1 \cdot \text{MSS} This additive increase per RTT helps the sender gradually approach the network's equilibrium throughput without overshooting. During the slow start phase, cwnd is instead increased exponentially from the initial value.
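The two window relationships above can be sketched in a few lines of Python. This is an illustration, not any particular TCP stack's code; the 1460-byte MSS and the variable names are assumptions drawn from the text.

```python
MSS = 1460  # bytes; a typical maximum segment size, assumed for illustration

def effective_window(cwnd: int, rwnd: int) -> int:
    """The sender may have at most min(cwnd, rwnd) bytes unacknowledged."""
    return min(cwnd, rwnd)

def congestion_avoidance_per_rtt(cwnd: int) -> int:
    """Additive increase: cwnd grows by one MSS per round-trip time."""
    return cwnd + MSS

cwnd = 10 * MSS    # ten-segment initial window, as described in the text
rwnd = 64 * 1024   # an assumed receiver-advertised window
window = effective_window(cwnd, rwnd)  # cwnd is the limiting factor here
```

With a 64 KB receiver window, the ten-segment initial cwnd (14,600 bytes) is the binding constraint, so the sender starts well below what the receiver alone would allow.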

Core Mechanisms

Slow Start

The slow start phase in TCP congestion control serves to rapidly probe the available network bandwidth at the beginning of a connection or after a significant loss event, starting from a small congestion window (cwnd) to prevent sudden bursts that could exacerbate congestion. By exponentially increasing the amount of data in flight, slow start allows the sender to quickly approach the network's capacity while minimizing the risk of congestion collapse, as observed in early incidents where unchecked transmissions led to widespread packet loss. This mechanism was first proposed by Van Jacobson, ensuring that the sender does not inject data faster than the network can forward it. The core operation begins with an initial cwnd value, typically set to a small number of segments—historically 1 segment, but standardized to up to 4 segments (or approximately 4 times the sender maximum segment size, SMSS, in bytes) to balance quick startup with burst prevention. Upon receiving an acknowledgment (ACK) for new data, TCP increases cwnd by up to 1 SMSS bytes, effectively allowing the window to double every round-trip time (RTT) under ideal conditions with no delayed ACKs or losses. This growth is capped by the minimum of cwnd and the receiver's advertised window (rwnd), ensuring the sender respects both congestion signals and receiver capacity. The exponential ramp-up can be expressed as follows, where cwnd is in bytes and the increase occurs per ACK: \text{cwnd} \leftarrow \text{cwnd} + \text{SMSS} Over an RTT, assuming a full window of unique ACKs, this results in cwnd approximately doubling, enabling efficient utilization without prior knowledge of path capacity. Slow start concludes when cwnd exceeds the slow start threshold (ssthresh), at which point TCP shifts to the congestion avoidance phase for more conservative linear growth.
The ssthresh is typically initialized to a high value (e.g., 64 KB) at connection setup but reset to half the current flight size (the amount of data outstanding), or at least 2 SMSS, upon detecting congestion via loss, providing a balance between aggressive probing and steady-state operation. This transition helps maintain fairness and stability once the network's equilibrium is approached. Implementations vary to mitigate risks like initial bursts in high-bandwidth or lossy environments; for instance, RFC 3390 limits the initial cwnd to no more than 4 segments to curb excessive early transmissions. Additionally, the optional limited slow-start modification caps the cwnd growth rate after reaching a parameter-defined threshold (e.g., 100 SMSS), increasing the window by at most half the standard rate per RTT thereafter, which reduces the likelihood of loss bursts in paths with large windows.
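The per-ACK growth rule and the resulting per-RTT doubling can be sketched as follows. This is a simplified model assuming one ACK per delivered segment and no delayed ACKs or losses; function and variable names are illustrative.

```python
SMSS = 1460  # sender maximum segment size in bytes, assumed for illustration

def slow_start(cwnd: int, ssthresh: int, smss: int, acks: int) -> int:
    """Each ACK of new data adds up to one SMSS to cwnd, until cwnd
    reaches ssthresh and congestion avoidance takes over."""
    for _ in range(acks):
        if cwnd >= ssthresh:
            break
        cwnd += smss
    return cwnd

# Simulate the exponential ramp-up: with one ACK per full segment,
# cwnd roughly doubles each round-trip time until ssthresh is crossed.
cwnd, ssthresh, rtts = SMSS, 64 * 1024, 0
while cwnd < ssthresh:
    acks = cwnd // SMSS   # ideal case: one ACK per segment, none delayed
    cwnd = slow_start(cwnd, ssthresh, SMSS, acks)
    rtts += 1
```

Starting from one segment against a 64 KB threshold, the window crosses ssthresh after six round trips (1, 2, 4, 8, 16, then 32 segments' worth of ACKs), illustrating how quickly slow start reaches path capacity.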

Congestion Avoidance

The congestion avoidance phase in TCP aims to maintain high utilization while preventing oscillations that could lead to congestion collapse, following the initial bandwidth estimation of slow start. This phase is entered when the congestion window (cwnd) exceeds the slow start threshold (ssthresh), transitioning from the exponential growth of slow start. The core mechanism involves an additive increase to the cwnd, allowing TCP to probe for additional available bandwidth in a controlled, gradual manner. This follows the additive-increase/multiplicative-decrease (AIMD) principle, where cwnd grows by 1 maximum segment size (MSS) per round-trip time (RTT) during periods without loss. In practice, for each acknowledgment (ACK) received during congestion avoidance, cwnd is incremented by MSS² / cwnd, which collectively approximates the +1 MSS per RTT increase (in MSS-normalized units, cwnd += 1 / cwnd per ACK). The multiplicative decrease upon congestion is applied differently based on detection: for loss via three duplicate ACKs (after fast recovery), ssthresh is set to cwnd / 2 and congestion avoidance resumes from this level (Reno-style); for timeouts indicating severe congestion, ssthresh is set to flight size / 2 (at least 2 MSS) but cwnd resets to 1 MSS, entering slow start. This ensures rapid response to overload while preserving some utilization, with the AIMD dynamics producing a characteristic sawtooth pattern in cwnd over time, fostering fairness among competing flows and network stability.
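The per-ACK increment and the Reno-style halving described above can be condensed into two helpers. This is a byte-granularity sketch under stated assumptions, not production kernel code; the function names are illustrative.

```python
def on_ack(cwnd: int, ssthresh: int, mss: int) -> int:
    """Window growth on one ACK of new data (byte units): exponential
    below ssthresh (slow start); otherwise ~MSS^2/cwnd, which sums to
    roughly one MSS of growth per round-trip time."""
    if cwnd < ssthresh:
        return cwnd + mss
    return cwnd + mss * mss // cwnd

def on_triple_dupack(cwnd: int, mss: int) -> int:
    """Reno-style multiplicative decrease: halve cwnd, floor at 2 MSS."""
    return max(cwnd // 2, 2 * mss)
```

For example, at cwnd = 10 MSS (14,600 bytes) in congestion avoidance, each ACK adds 146 bytes, so the ten ACKs covering one window add about one full MSS per RTT, as the text describes.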

Fast Retransmit

Fast retransmit is a key mechanism in TCP congestion control designed to detect and recover from packet loss more rapidly than relying solely on retransmission timeouts. Introduced by Van Jacobson in his seminal 1988 paper on congestion avoidance and control, it leverages duplicate acknowledgments (ACKs) from the receiver to infer loss without waiting for the retransmission timer to expire. This approach was later formalized as part of TCP's standard congestion control algorithms in RFC 2581. The trigger for fast retransmit occurs when the TCP sender receives three duplicate ACKs for the same sequence number, signaling that a segment has been lost and created a gap in the received data stream. These duplicate ACKs are generated by the receiver as it processes out-of-order segments arriving after the loss, acknowledging the last correctly received byte while indicating the missing one. The threshold of three duplicate ACKs was empirically determined to balance sensitivity to loss against tolerance for minor reordering in the network. Upon detecting this condition, the sender immediately retransmits the missing segment, bypassing the potentially lengthy wait for the retransmission timeout (RTO), which can span multiple round-trip times (RTTs). This action allows recovery to begin almost immediately after the loss is inferred, typically adding only the delay for the three duplicate ACKs to traverse the network plus one RTT for the retransmitted segment to reach the receiver and elicit a new ACK. The primary benefit of fast retransmit is a substantial reduction in recovery time compared to timeout-based mechanisms, transforming what could be seconds of delay into a process that incurs little latency beyond the inherent RTT. By enabling quicker repair of isolated losses, it improves overall throughput and responsiveness, particularly in networks with occasional packet drops unrelated to severe congestion. This mechanism is typically followed by fast recovery to adjust the congestion window appropriately.
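The three-duplicate-ACK trigger can be sketched as a small detector class. This is a simplification: real receivers may also carry SACK information, which this sketch ignores, and the class name is invented for illustration.

```python
DUPACK_THRESHOLD = 3  # the empirically chosen trigger described above

class FastRetransmitDetector:
    """Tracks the highest cumulative ACK seen and counts duplicates;
    signals a fast retransmit on exactly the third duplicate."""
    def __init__(self):
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_seq: int) -> bool:
        """Return True when the sender should retransmit immediately."""
        if ack_seq == self.last_ack:
            self.dup_count += 1
            return self.dup_count == DUPACK_THRESHOLD
        self.last_ack = ack_seq   # new data acknowledged: reset the count
        self.dup_count = 0
        return False
```

Note that the retransmit fires on the fourth ACK carrying the same number: the first establishes the cumulative point, and the next three are the duplicates that signal the gap.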

Fast Recovery

Fast Recovery is a phase in TCP congestion control that allows the sender to continue transmitting data after detecting a packet loss through duplicate acknowledgments, without resorting to a full slow start restart, as was done in earlier implementations like TCP Tahoe. Introduced as part of TCP Reno, it improves throughput by preserving the acknowledgment clock and avoiding the prolonged recovery associated with timeouts. The mechanism begins upon receiving the third duplicate acknowledgment, which triggers fast retransmit of the lost segment. At this point, the slow start threshold (ssthresh) is set to the maximum of half the current flight size and twice the sender maximum segment size (SMSS): \text{ssthresh} = \max\left(\frac{\text{FlightSize}}{2}, 2 \times \text{SMSS}\right) The congestion window (cwnd) is then inflated to account for the segments that have left the network, estimated as ssthresh plus three times SMSS (representing the three duplicate acknowledgments received): \text{cwnd} = \text{ssthresh} + 3 \times \text{SMSS} During the recovery phase, for each additional duplicate acknowledgment beyond the initial three, cwnd is further inflated by one SMSS to reflect the continued departure of acknowledged data from the network: \text{cwnd} += \text{SMSS} \quad \text{(per additional duplicate ACK)} This inflation enables the sender to transmit new segments while waiting for the retransmission to be acknowledged, maintaining pipe utilization without entering slow start. Recovery concludes when a new (non-duplicate) acknowledgment arrives, confirming the retransmitted data and potentially more. At this point, cwnd is deflated to ssthresh, and the sender exits the fast recovery state to resume congestion avoidance with the reduced window size: \text{cwnd} = \text{ssthresh} This approach avoids the full penalty of a retransmission timeout by allowing quicker resumption of normal operation after a single loss event.
Fast Recovery assumes at most one packet loss per congestion window, which works well for isolated losses but can lead to inefficiencies or timeouts if multiple losses occur within the same window of outstanding data. In such cases, the inflation may overestimate the pipe, potentially causing further congestion signals.
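The window arithmetic above can be condensed into three helper functions, following the equations in this section. This is a byte-granularity sketch of the steps, with illustrative names, not a complete state machine.

```python
def enter_fast_recovery(flight_size: int, smss: int):
    """Actions on the third duplicate ACK (all values in bytes): halve
    the flight size (floored at 2 SMSS), then inflate cwnd by the
    three segments known to have left the network."""
    ssthresh = max(flight_size // 2, 2 * smss)
    cwnd = ssthresh + 3 * smss
    return cwnd, ssthresh

def on_extra_dupack(cwnd: int, smss: int) -> int:
    """Each further duplicate ACK means another segment departed."""
    return cwnd + smss

def on_new_ack(ssthresh: int) -> int:
    """Deflate on the first new ACK and resume congestion avoidance."""
    return ssthresh
```

With a flight of 10 segments (14,600 bytes) and SMSS = 1460, entering recovery yields ssthresh = 7300 and an inflated cwnd of 11,680 bytes, which deflates back to 7300 once the retransmission is acknowledged.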

Algorithms

Loss-Based Algorithms

Loss-based algorithms detect network congestion primarily through packet losses, which serve as implicit signals when buffers in drop-tail queues overflow, prompting the sender to reduce its transmission rate to alleviate pressure on the network. These algorithms form the foundation of TCP congestion control, building on the additive-increase/multiplicative-decrease (AIMD) principle to balance throughput and fairness, and they remain widely deployed due to their simplicity and compatibility with traditional network paths where losses are the dominant congestion indicator. Unlike approaches that use delay gradients, loss-based methods react only after packets are dropped, which can lead to higher queuing delay in some scenarios but ensures robustness in environments with bursty traffic. TCP Tahoe, one of the earliest loss-based variants, resets the congestion window (cwnd) to 1 segment upon detecting loss—either via retransmission timeout (RTO) or three duplicate acknowledgments (dupACKs)—and sets the slow-start threshold (ssthresh) to half the current cwnd before restarting slow start. This conservative response ensures quick recovery from congestion but can result in underutilization of bandwidth after losses, as the full restart from a small window delays ramp-up. Tahoe's design, rooted in the original congestion avoidance mechanisms, prioritizes stability over aggressive growth, making it suitable for early Internet conditions with high loss rates. TCP Reno improves upon Tahoe by incorporating fast retransmit and fast recovery to handle losses signaled by three dupACKs more efficiently, setting ssthresh to cwnd/2, retransmitting the lost segment, and temporarily inflating cwnd to ssthresh plus three segments to account for acknowledged data during recovery. During fast recovery, Reno deflates cwnd upon receiving new ACKs and exits to congestion avoidance once all outstanding data is acknowledged, avoiding a full slow-start reset unless a timeout occurs, in which case it falls back to Tahoe-like behavior.
This enhancement boosts throughput in networks with moderate loss rates by reducing idle periods after a loss, though it struggles with multiple losses in a single window, as partial ACKs can cause Reno to exit recovery prematurely and trigger unnecessary retransmits. TCP NewReno addresses Reno's limitations with multiple losses by staying in fast recovery until all outstanding segments are acknowledged, using partial ACKs to trigger additional retransmits without exiting recovery prematurely. Upon loss detection via three dupACKs, it sets ssthresh to cwnd/2, retransmits the lost segment, and sets cwnd to ssthresh plus the number of segments still outstanding, then increments cwnd by one for each new ACK during recovery. This deferral of full congestion response until the window is cleared improves performance on lossy links, reducing timeouts and enhancing throughput by up to 20-30% compared to Reno in simulations with correlated losses. For high-speed networks exceeding 1 Gbps, BIC (Binary Increase Congestion control) employs a binary search during avoidance to rapidly increase cwnd toward a target window, using larger increments when far from the target and halving them as it approaches, combined with a slow-start-like max-probing phase for aggressive growth beyond it. BIC sets ssthresh to cwnd/2 on loss and uses logarithmic probing to mitigate RTT unfairness against standard flows, achieving better utilization on long-distance paths while maintaining TCP-friendliness. Evaluations show BIC attaining up to 90% link utilization on 10 Gbps links, compared to Reno's 50-60%, though it can exhibit oscillations in highly variable conditions. TCP CUBIC, designed as a successor to BIC for fast long-distance networks, replaces linear window growth with a cubic function that is concave while approaching the window size at the last congestion event (W_max) and convex beyond it, optimizing for bandwidths over 1 Gbps.
On loss, CUBIC performs a multiplicative decrease—cwnd ← β × cwnd with β = 0.7—records the pre-loss window as W_max, sets ssthresh to the reduced window, and begins cubic growth back toward W_max, with growth governed by the equation: \text{cwnd} = C \cdot (t - K)^3 + W_{\max} where C is a scaling constant (typically 0.4), t is the time since the last congestion event, and K is the time the cubic curve takes to return to W_{\max}, ensuring smooth transitions and reduced RTT unfairness. This formulation allows CUBIC to probe for available bandwidth more responsively after a loss, yielding 2-3 times higher throughput than Reno in high-bandwidth-delay product (BDP) networks while preserving fairness with other loss-based flows.
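The cubic curve can be written directly from the equation above. In this sketch, K is derived from the constraint that the window equals β·W_max immediately after the decrease; it is an illustration of the growth function, not the full CUBIC state machine.

```python
def cubic_cwnd(t: float, w_max: float, C: float = 0.4, beta: float = 0.7) -> float:
    """CUBIC window (in segments) t seconds after a loss event.
    K is chosen so the curve starts at beta * w_max at t = 0 and
    returns to w_max at t = K."""
    K = ((w_max * (1 - beta)) / C) ** (1 / 3)
    return C * (t - K) ** 3 + w_max
```

For W_max = 100 segments, the window restarts at 70 segments, flattens as it nears 100 (the concave region), and then probes convexly beyond it, producing the smooth transitions described above.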
Algorithm | Throughput (High BDP Networks) | Fairness (Intra-Protocol) | RTT Unfairness (vs. Standard TCP) | Performance in Lossy Links
Tahoe | Low (e.g., 20-40% utilization) | High | Low | Stable but slow recovery
Reno | Moderate (50-70%) | High | Moderate | Prone to timeouts on multiple losses
NewReno | Moderate-High (60-80%) | High | Moderate | Better handling of bursty losses
BIC | High (80-95%) | Moderate | Low-Moderate | Oscillatory in variable loss
CUBIC | Very High (90-98%) | High | Low | Responsive with minimal oscillations

Delay-Based Algorithms

Delay-based algorithms in TCP congestion control detect impending congestion by monitoring variations in round-trip time (RTT), using increased delay as an early warning signal to proactively adjust the congestion window (cwnd) before packet losses occur. These approaches are particularly advantageous in low-loss, high-delay environments, such as long-haul or wide-area networks, where loss-based methods may react too late, leading to unnecessary retransmissions and throughput degradation. By responding to queueing delays rather than waiting for drops, delay-based algorithms aim to maintain higher link utilization while keeping queues shallow, prioritizing prevention over reaction. The seminal delay-based algorithm, TCP Vegas, introduced in 1994, was the first to leverage RTT measurements for proactive congestion avoidance. TCP Vegas estimates the expected throughput as the ratio of the current cwnd to the base RTT (the minimum observed RTT during the connection, representing propagation delay without queueing). The actual throughput is computed as the cwnd divided by the current RTT. Congestion is inferred from the difference (diff), defined as the number of extra packets queued in the network, approximated by: \text{diff} = \text{cwnd} \times \left(1 - \frac{\text{BaseRTT}}{\text{CurrentRTT}}\right) Vegas adjusts cwnd once per round-trip time (RTT) based on diff relative to thresholds α (typically 1 packet) and β (typically 3 packets). If diff < α, cwnd increases additively (by 1/cwnd per ACK, i.e., about 1 packet per RTT) to probe for more bandwidth. If diff > β, cwnd decreases at the same additive rate to alleviate congestion. This linear adjustment helps stabilize the system around an equilibrium point, achieving 37-71% better throughput than TCP Reno in evaluated scenarios, with reduced losses.
Building on these delay principles, FAST TCP (2004) refines congestion detection for high-speed, long-latency links by focusing primarily on queueing delay—estimated as the current RTT minus the base RTT—while incorporating secondary signals such as loss for robustness. FAST updates cwnd periodically in proportion to the ratio of base RTT to current RTT, moving the window toward (baseRTT/RTT) × cwnd + α, where α is a tuning parameter controlling the target number of packets queued in the network; this enables faster convergence and far higher utilization than Reno (up to 10-100 times in bulk transfers over high-bandwidth-delay product paths). Loss events trigger a multiplicative halving of cwnd, but the core mechanism remains delay-driven to avoid oscillations. Despite their proactive nature, delay-based algorithms face limitations, including high sensitivity to non-congestion-related RTT fluctuations (e.g., from route changes or wireless jitter), which can falsely signal congestion and cause unnecessary backoffs. Additionally, in networks with shallow buffers—common in modern data centers—the queueing delay signal is weak or absent, leading to conservative cwnd adjustments and bandwidth underutilization compared to loss-tolerant flows. These issues highlight the challenges of relying solely on delay in variable or buffer-constrained environments.
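The Vegas backlog estimate and threshold test can be sketched as a single per-RTT step, using the diff formula and the α/β thresholds from the text (a simplification at per-RTT rather than per-ACK granularity; names are illustrative).

```python
ALPHA, BETA = 1, 3  # Vegas thresholds, in packets, as given above

def vegas_adjust(cwnd: float, base_rtt: float, current_rtt: float) -> float:
    """One per-RTT Vegas step (cwnd in packets): estimate the extra
    packets sitting in network queues and nudge cwnd to keep that
    backlog between the two thresholds."""
    diff = cwnd * (1 - base_rtt / current_rtt)  # estimated queued packets
    if diff < ALPHA:
        return cwnd + 1   # links look idle: probe for more bandwidth
    if diff > BETA:
        return cwnd - 1   # queue is building: back off gently
    return cwnd           # inside the target band: hold steady
```

For instance, with cwnd = 10 packets and the current RTT double the base RTT, diff = 5 exceeds β, so Vegas shrinks the window; when the RTTs match, diff = 0 and it probes upward.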

Hybrid and Model-Based Algorithms

Hybrid and model-based algorithms for TCP congestion control integrate loss and delay signals with explicit estimates of network parameters, such as available bandwidth and round-trip time (RTT), to achieve more precise rate adjustment and pacing in varied network conditions. These approaches employ mathematical models to infer the bottleneck bandwidth and propagation delay, enabling the sender to operate close to the available capacity without relying solely on reactive loss detection or simplistic delay gradients. By using filtered measurements from acknowledgments (ACKs) and timestamps, these algorithms mitigate issues like bufferbloat and underutilization in high-bandwidth-delay product (BDP) paths, while maintaining compatibility with existing TCP deployments. TCP Westwood+ enhances loss-based control by incorporating bandwidth estimation to tune recovery parameters more accurately, particularly on wireless or error-prone links. It estimates the available bandwidth (BWE) at the sender using the rate of returning ACKs, computed as the difference in acknowledged bytes divided by the time interval between ACKs, with a low-pass filter to smooth variations: \text{BWE} = \frac{\text{ACKed bytes}}{t_{\text{now}} - t_{\text{last round}}} Upon a congestion indication (e.g., three duplicate ACKs), it sets the slow-start threshold (ssthresh) to BWE multiplied by the minimum RTT (RTT_min) and divided by the maximum segment size (MSS), rather than halving the congestion window as in Reno. This bandwidth-tuned approach reduces unnecessary window reductions from non-congestion losses, improving throughput by up to 300% in simulated wireless scenarios compared to TCP Reno. Compound TCP, developed by Microsoft, combines a loss-based component similar to TCP Reno with a delay-based window component to scale better in high-speed long-distance networks. It maintains two windows: a Reno-style loss window (w_reno) that responds to packet losses via additive increase and multiplicative decrease, and a delay window (w_delay) that increases based on observed queuing delay to probe available bandwidth without inducing loss.
The total congestion window is the sum of these components, cwnd = w_reno + w_delay, where w_delay is adjusted using a gain factor derived from the estimated spare bandwidth and RTT. This hybrid design achieves up to 3x higher throughput than Reno in 1 Gbps long-fat networks while remaining TCP-friendly, as evaluated in testbed experiments. TCP BBR, introduced by Google in 2016, represents a model-based shift: it explicitly estimates the bottleneck bandwidth (BtlBw) and minimum RTT (RTprop) to control sending rate and window size, largely decoupling congestion control from loss signals. BtlBw is derived from the maximum observed delivery rate over recent RTTs, while RTprop is the lowest recent RTT, filtering out queueing delays. The pacing rate is set to BtlBw to match the bottleneck capacity, and the congestion window is approximately BtlBw × RTprop to cover the BDP, with an additional allowance for in-flight data. BBR version 1 ignores isolated losses, focusing on delivery rate trends, which boosts throughput by 2-25x over CUBIC in Google's wide-area backbone under shallow buffers but can exacerbate queueing in lossy environments. BBR version 2, released in 2019, refines the model by incorporating explicit loss detection to address bandwidth overestimation in shallow-buffered networks, where version 1 might send excessively during microbursts. It introduces a gain-based response, reducing the pacing rate and window by a factor of 0.7 upon detecting loss rates above 2%, while retaining the core bandwidth and RTT estimates. This adjustment improves fairness with loss-based flows, reducing their throughput degradation by up to 90% in mixed deployments, as shown in controlled experiments with varying buffer sizes. Other model-based variants include TCP Hybla, designed for high-latency environments like satellite links, where standard TCP underperforms due to prolonged RTTs.
Hybla compensates by scaling the congestion window increase in proportion to the ratio of the connection's RTT to that of a reference wired path (e.g., 100 ms), effectively accelerating the slow start and congestion avoidance phases to achieve fair bandwidth sharing. Upon loss, it applies a proportional rate reduction based on the same normalized RTT, enhancing throughput by factors of 5-10 over Reno in geostationary simulations with 600 ms RTTs.
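The core BBR-style quantities reduce to a small calculation. In this sketch the cwnd_gain of 2 is an assumed headroom factor in the spirit of the "additional allowance" described above; it is an illustration of the model, not the BBR state machine.

```python
def bbr_targets(btl_bw: float, rtprop: float, cwnd_gain: float = 2.0):
    """Model-based operating point: pace at the estimated bottleneck
    bandwidth (bytes/sec) and cap in-flight data near the
    bandwidth-delay product, with cwnd_gain providing headroom
    (the gain value is an assumption for illustration)."""
    pacing_rate = btl_bw          # match the bottleneck's drain rate
    bdp = btl_bw * rtprop         # bytes needed to keep the pipe full
    cwnd = cwnd_gain * bdp
    return pacing_rate, cwnd

# Example: a 10 Mbit/s bottleneck (1.25 MB/s) with 40 ms propagation
# delay gives a 50 KB BDP, so the window caps near 100 KB in flight.
rate, cwnd = bbr_targets(1_250_000, 0.04)
```

Because the pacing rate, not the window, is the primary control, the sender avoids the bursts that window-only algorithms can inject, which is central to keeping queues shallow.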

Recent Advancements

Bottleneck Bandwidth and Round-trip propagation time (BBR) version 3, released in 2023, introduces enhancements to gain tuning and pacing mechanisms, improving upon BBRv2's handling of multipath scenarios and achieving better fairness in low-Earth-orbit (LEO) satellite networks such as Starlink. BBRv3 refines bandwidth estimation to reduce self-induced congestion, leading to higher throughput and lower latency in high-variability environments without relying on loss signals. Evaluations in such deployments show BBRv3 outperforming CUBIC in throughput while maintaining fairness against loss-based algorithms. In 2024, TCP QtColFair emerged as a queueing-theory-based algorithm designed for fair resource sharing on bottleneck links, employing a cubic-like window growth function adjusted by estimated queue lengths to balance utilization and delay. It dynamically tunes the sending rate to achieve approximately 96% link utilization, surpassing CUBIC's 93% and matching BBR's performance in buffered networks while minimizing packet losses. This approach addresses fairness issues in multi-flow scenarios by incorporating queueing delay feedback, making it suitable for data centers and wide-area networks. MSS-TCP, proposed in 2025, targets millimeter-wave (mmWave) cellular networks by dynamically adjusting the congestion window based on signal strength, mobility patterns, and round-trip time (RTT) variations. The algorithm scales the window size proportionally to the maximum segment size (MSS) and incorporates mobility-induced handoff predictions to mitigate throughput drops during signal fluctuations, achieving up to 2x higher throughput compared to standard variants in mobile mmWave environments. Evaluations in testbeds highlight its effectiveness in ad-hoc and high-mobility scenarios, where it reduces the impact of reordering and loss.
A reinforcement learning (RL)-based TCP congestion control scheme introduced in 2025 leverages Deep Q-Networks to adaptively tune the congestion window and pacing rate in response to dynamic network conditions such as varying losses and delays. Trained in simulated environments mimicking real-world variability, it optimizes for throughput and fairness by learning policies from state observations including RTT and buffer occupancy, outperforming traditional algorithms in heterogeneous networks by 15-25% in simulated and ad-hoc setups. RFC 9743, published in March 2025, provides updated guidelines for specifying and evaluating new TCP congestion control algorithms, emphasizing modularity, rigorous testing in diverse topologies, and safeguards against harm to the global Internet. It mandates simulations and real-world deployments to assess interactions with existing flows, promoting standardized documentation for interoperability and deployment. Recent evaluations across satellite, ad-hoc wireless (e.g., integrating deep learning enhancements such as DCERL+), and cellular networks confirm BBRv3's gains over CUBIC in throughput and latency, with hybrid approaches like QtColFair and MSS-TCP showing gains in fairness and mobility handling.

Classification

Black Box Approaches

Black box approaches to TCP congestion control treat the network as an opaque entity, making no assumptions about internal queue states or link characteristics and relying exclusively on end-to-end feedback signals, such as packet loss or round-trip time variations, observed at the sender and receiver. These methods emerged as foundational strategies to stabilize the early Internet without requiring router modifications or explicit network assistance, focusing instead on inferring congestion from observable packet behaviors. Prominent examples include TCP Tahoe, Reno, NewReno, and CUBIC, all primarily loss-based implementations. TCP Tahoe, the initial formulation, uses slow start for initial window growth and additive increase/multiplicative decrease (AIMD) during congestion avoidance, resetting the window to one segment upon loss detection via duplicate acknowledgments or timeouts. TCP Reno builds on Tahoe by introducing fast retransmit and fast recovery to avoid full slow-start resets after three duplicate ACKs, halving the window instead. NewReno enhances Reno's handling of multiple losses within a single window by continuing fast recovery upon partial ACKs, reducing unnecessary retransmissions. CUBIC, designed for high-bandwidth-delay product networks, employs a cubic window growth function after a loss, concave near the previous congestion event and convex thereafter, to balance responsiveness and fairness with traditional AIMD variants like Reno. These approaches offer simplicity in design and deployment, as they operate entirely within the end hosts without needing infrastructure changes, enabling seamless integration across heterogeneous networks. Their wide compatibility stems from adherence to the end-to-end principle, allowing them to function over unmodified networks while promoting fairness among flows through shared loss signals. However, black box methods are fundamentally reactive, waiting for loss events that indicate congestion has already built up, which can cause oscillations and underutilization of bandwidth.
They struggle in environments with shallow buffers, where drops occur due to brief bursts rather than sustained queuing, leading to premature throttling and low throughput; studies show loss-based algorithms like Reno achieving up to 50% less efficiency than proactive alternatives in such scenarios. Additionally, they perform poorly under correlated losses, such as those from wireless errors or bursty traffic, where multiple packets drop in one window, triggering excessive window reductions and prolonged recovery. In contrast to grey box approaches that leverage partial network insights for earlier detection, black box methods remain limited to endpoint-only observations.
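The reactive loss responses described above can be sketched in a few lines. The following Python illustration (in units of segments) is a simplified model, not the kernel implementation: the event names and function are hypothetical, and fast-recovery window inflation is omitted.

```python
def reno_on_event(cwnd, ssthresh, event, mss=1):
    """Reno-style reaction to the two loss signals (illustrative sketch).

    Three duplicate ACKs trigger fast retransmit/recovery: halve the
    window and continue in congestion avoidance.  A retransmission
    timeout is treated as heavy congestion: fall back to slow start
    with a window of one segment.
    """
    if event == "dupack3":                 # mild congestion signal
        ssthresh = max(cwnd // 2, 2 * mss)
        cwnd = ssthresh                    # resume in congestion avoidance
    elif event == "timeout":               # severe congestion signal
        ssthresh = max(cwnd // 2, 2 * mss)
        cwnd = 1 * mss                     # restart in slow start
    return cwnd, ssthresh
```

Tahoe differs only in that it takes the `timeout` branch for duplicate ACKs as well, which is why Reno recovers noticeably faster from isolated losses.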

Grey Box Approaches

Grey box approaches in TCP congestion control refer to algorithms that infer a limited view of the internal network state, such as queue occupancy or available bandwidth, from endpoint measurements, treating the network as partially observable rather than entirely opaque. Unlike purely reactive methods that rely solely on loss signals, these techniques use observable metrics such as round-trip time (RTT) variations or ACK arrival patterns to estimate congestion levels proactively. This partial inference enables finer adjustments to the congestion window (cwnd), aiming to maintain efficient throughput while minimizing queue buildup. The approach stems from measurement-based control paradigms that balance endpoint autonomy with indirect network insights. A seminal example is TCP Vegas, introduced in 1994, which leverages RTT statistics to detect congestion before packet losses occur. Vegas estimates the base RTT as the minimum observed RTT over recent samples, approximating the path's propagation delay without queuing. It then computes the difference between expected and actual throughput to infer the number of packets queued along the path:
d = \left(\frac{cwnd}{baseRTT} - \frac{cwnd}{RTT}\right) \times baseRTT
This d represents the estimated excess packets buffered along the path, since an RTT above baseRTT indicates queuing delay. If d falls below a lower threshold (typically 1 packet), Vegas linearly increases cwnd by one segment per RTT; if it exceeds an upper threshold (typically 3 packets), it decreases cwnd by one. During slow start, growth is moderated to every other RTT to allow the delay measurements to settle. Vegas retains Reno's loss-based recovery mechanisms but prioritizes delay signals for congestion avoidance, keeping a small, stable queue. Experimental evaluations showed Vegas achieving 37% to 71% higher throughput than TCP Reno on diverse paths, with reduced losses and retransmissions.
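The per-RTT adjustment above can be expressed compactly. The following Python sketch is illustrative only: the function name, packet units, and the fixed alpha/beta thresholds of 1 and 3 packets are assumptions drawn from the description, not the original implementation.

```python
def vegas_update(cwnd, base_rtt, rtt, alpha=1.0, beta=3.0):
    """One per-RTT congestion-avoidance step in the style of TCP Vegas.

    Estimates the number of packets queued in the network (d) from the
    gap between expected and actual throughput, then adjusts cwnd by at
    most one segment per RTT.  cwnd is in segments, RTTs in seconds.
    """
    expected = cwnd / base_rtt          # throughput if nothing were queued
    actual = cwnd / rtt                 # throughput actually observed
    d = (expected - actual) * base_rtt  # estimated packets sitting in queues
    if d < alpha:                       # path underused: probe gently
        return cwnd + 1
    if d > beta:                        # queue building: back off early
        return cwnd - 1
    return cwnd                         # within the target band: hold steady
```

Because the decrease fires on rising delay rather than on loss, the flow backs off before the bottleneck queue overflows, which is the defining grey box behavior.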
Another prominent grey box algorithm is TCP Westwood+, an enhancement of the original Westwood scheme developed around 2001–2004. Westwood+ estimates available bandwidth by measuring the rate of ACK arrivals over a recent RTT interval, capturing the "eligible rate" despite losses from link errors or bursts. Upon loss detection (e.g., duplicate ACKs or timeouts), it sets the slow-start threshold (ssthresh) and cwnd to approximately bandwidth estimate × RTT, avoiding the overly conservative reductions seen in loss-based algorithms. This bandwidth sampling from ACK trains provides an indirect gauge of path capacity and backlog, refining recovery without explicit network feedback. In simulations and tests over error-prone links, Westwood+ demonstrated faster recovery and higher goodput compared to NewReno, particularly in mixed wired-wireless scenarios. These grey box methods offer advantages in networks with variable delays, such as long-haul or satellite paths, by enabling proactive rate adjustments that prevent deep queues and oscillations, leading to lower latency and better fairness among flows. For instance, Vegas maintains smaller backlogs (linear in the number of flows rather than quadratic), stabilizing aggregate throughput under load. However, these methods rely on accurate RTT or bandwidth sampling, making them vulnerable to measurement noise from route changes, cross-traffic, or delayed ACKs, which can cause misguided window adjustments or unfairness against loss-based baselines like Reno. In mixed deployments, this sensitivity often limits adoption, as Vegas underperforms in lossy environments without tuning.
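The Westwood+ loss response can be sketched as follows. This Python illustration is a simplification under stated assumptions: the smoothing gain, helper name, and the choice of a simple low-pass filter stand in for the actual bandwidth filter described in the Westwood+ papers.

```python
def westwood_ssthresh(acked_bytes, interval, rtt_min, mss, bwe_prev, gain=0.9):
    """Westwood+-style loss response (illustrative sketch).

    Smooths a per-interval bandwidth sample from recent ACKs into a
    running estimate (bwe), then returns the slow-start threshold a
    sender would adopt after a loss: the estimated bandwidth-delay
    product expressed in MSS-sized segments.
    """
    sample = acked_bytes / interval                 # bytes/s seen in ACK train
    bwe = gain * bwe_prev + (1.0 - gain) * sample   # low-pass filtered estimate
    ssthresh = max(2, int(bwe * rtt_min / mss))     # BDP in segments, floor of 2
    return bwe, ssthresh
```

Setting ssthresh from the measured rate rather than blindly halving is what lets Westwood+ avoid over-reacting to random wireless losses.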

Green Box Approaches

Green box approaches to TCP congestion control rely on explicit feedback signals provided directly by the network infrastructure, such as queue-length information or Explicit Congestion Notification (ECN) marks, allowing senders to receive precise indications of congestion without relying solely on end-to-end inference. This classification, part of a broader categorization of congestion control in packet networks, enables more accurate and responsive adjustments than methods that treat the network as opaque. A foundational mechanism in green box approaches is ECN, standardized in RFC 3168, which permits routers to set congestion experienced (CE) marks in the IP header of packets when queues exceed a threshold, signaling incipient congestion to endpoints without dropping packets. In response, ECN-capable endpoints reduce their sending rate, typically by treating marks equivalently to losses in the congestion control algorithm. Rate-based marking extensions go further by allowing network elements to convey explicit allowable rates or fair shares, as explored in early proposals like TCP MaxNet, which uses aggregate feedback from links to compute per-flow rates. Data Center TCP (DCTCP) exemplifies a widely adopted green box algorithm tailored for low-latency environments such as data centers, where it leverages ECN to estimate and react to the fraction of marked traffic. In DCTCP, the sender maintains an estimate of the fraction of marked packets, denoted \alpha, computed as an exponentially weighted moving average of the fraction of ECN-marked bytes among acknowledged bytes over recent round-trip times (RTTs). At the end of each RTT, the congestion window cwnd is multiplicatively decreased according to the formula:

cwnd \leftarrow cwnd \times \left(1 - \frac{\alpha}{2}\right)

This adjustment halves the window only when all packets are marked (\alpha = 1), providing finer-grained control and reducing queueing delays by up to 100 times compared to traditional loss-based TCP in incast scenarios.
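The DCTCP update above amounts to two lines of arithmetic per window. The Python sketch below is illustrative: the gain g = 1/16 follows the commonly recommended value, and the function name and units are assumptions for the example.

```python
def dctcp_update(cwnd, alpha, marked_bytes, acked_bytes, g=1.0 / 16):
    """DCTCP per-window update (illustrative sketch).

    alpha is an exponentially weighted moving average of F, the fraction
    of ECN-marked bytes in the last window of data.  Once per RTT the
    window is cut in proportion to alpha, so full marking (alpha = 1)
    halves cwnd while light marking barely reduces it.
    """
    f = marked_bytes / acked_bytes if acked_bytes else 0.0
    alpha = (1.0 - g) * alpha + g * f        # smooth the marking fraction
    cwnd = cwnd * (1.0 - alpha / 2.0)        # graded multiplicative decrease
    return cwnd, alpha
```

The graded decrease is the key difference from standard ECN handling, which would halve the window on any mark regardless of how many packets were affected.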
Variants of TCP Vegas have also incorporated AQM-provided hints for explicit congestion signaling, enhancing the original delay-based probing with direct queue notifications to improve the accuracy of bandwidth estimation. These approaches excel in controlled settings by minimizing bufferbloat and achieving microsecond-level latencies, but they require network-wide deployment of supporting infrastructure, such as Active Queue Management (AQM) algorithms like PIE (Proportional Integral controller Enhanced) or CoDel (Controlled Delay), which generate the required signals. Emerging developments integrate green box mechanisms with 5G network slicing, where explicit congestion hints can be tailored per slice to optimize throughput and QoS for diverse applications, such as URLLC traffic requiring sub-millisecond latency. This enables dynamic feedback loops between radio access networks and endpoints, addressing challenges like variable radio conditions while preserving precision in congestion signaling.

Implementations

Linux Integration

The Linux kernel has supported pluggable TCP congestion control algorithms since version 2.6.13, released in 2005, allowing dynamic selection through the sysctl interface net.ipv4.tcp_congestion_control. This mechanism lets administrators specify the default algorithm for new connections system-wide; the initial default was Reno, suitable for standard bandwidth-delay product (BDP) networks but less optimal for emerging high-speed links. In response to the growing prevalence of high-BDP networks, such as those in data centers and on long-haul Internet paths, the default, which had meanwhile moved to BIC-TCP, shifted to CUBIC starting with kernel version 2.6.19 in 2006. CUBIC was chosen for its cubic window growth function, which probes for bandwidth more aggressively after losses while maintaining fairness to Reno in standard conditions, and it remains the default in most Linux distributions today. Later, the Bottleneck Bandwidth and Round-trip propagation time (BBR) algorithm, developed by Google for better performance in diverse network environments, became available in kernel 4.9, released in December 2016. The kernel lists supported algorithms via the /proc/sys/net/ipv4/tcp_available_congestion_control file, which typically includes Reno (the baseline loss-based method), CUBIC (the default for high-speed paths), BBR (model-based, for low latency), Vegas (delay-based, for early congestion detection), and Westwood (loss-based with rate estimation for wireless links). Additional algorithms can be compiled into the kernel or loaded as modules, expanding the list dynamically without rebooting. To use a non-default algorithm such as BBR, administrators load the corresponding kernel module with modprobe tcp_bbr, after which it appears in the available list. Per-socket tuning is possible via the setsockopt system call with the TCP_CONGESTION option and a string naming the algorithm, allowing applications to select congestion control independently of the system default.
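The per-socket mechanism can be exercised from Python on Linux, which exposes the TCP_CONGESTION option directly. This is a minimal sketch: it selects "reno" because that algorithm is always compiled in, whereas other names must be listed in tcp_allowed_congestion_control for unprivileged processes.

```python
import socket

# Select a congestion control algorithm for one socket (Linux only).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"reno")

# Read the choice back; the kernel returns a NUL-padded algorithm name.
raw = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
algo = raw.split(b"\x00", 1)[0].decode()
sock.close()
```

The same option can be set before connect() on a client or on a listening socket on a server, so a single host can run different algorithms for different applications.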
Network administrators can monitor TCP congestion state using tools like ss -i, which displays per-connection details including the congestion window (cwnd), slow start threshold (ssthresh), and the active congestion control algorithm. These tools provide insight into real-time adjustments, such as cwnd growth during slow start or reductions after loss, aiding in diagnosing high-latency or low-throughput connections.

Deployment and Tuning

Deployment of TCP congestion control varies across operating systems, reflecting differences in default algorithms and available options tailored to common use cases. In modern Windows versions such as Windows 10 and 11, the default congestion control is CUBIC, which replaced Compound TCP (CTCP) starting with the Creators Update in 2017 to improve throughput consistency across diverse networks. FreeBSD 14.0 and later (released November 2023) default to CUBIC, while earlier versions default to NewReno, emphasizing stability for general-purpose networking; alternatives such as CUBIC can be enabled via kernel modules. Android, built on the Linux kernel, uses CUBIC by default for its balance of performance and compatibility in mobile environments, with BBR available as an option for scenarios requiring better handling of variable wireless conditions.

Tuning TCP congestion control involves adjusting key parameters to optimize performance for specific network conditions. Buffer sizing is critical and typically set to at least the bandwidth-delay product (BDP), calculated as BDP = bandwidth × RTT, to ensure the congestion window can fully utilize available capacity without excessive queuing delay. Enabling Explicit Congestion Notification (ECN) is a recommended practice, as it allows routers to signal impending congestion via marked packets rather than drops, reducing unnecessary retransmissions in loss-based algorithms. Additionally, increasing the initial congestion window (initcwnd) to up to 10 segments, as specified in RFC 6928, accelerates startup for short flows by allowing more data to be sent before acknowledgments arrive. Challenges in deployment arise from environmental factors that can mislead congestion detection: in wireless networks, packet losses due to signal fading or interference are often misinterpreted by loss-based algorithms as congestion, leading to unwarranted reductions of the congestion window and underutilization of available bandwidth.
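The BDP rule of thumb is a one-line computation. The Python sketch below is illustrative (the helper name is an assumption); it converts link rate in bits per second and RTT in seconds into the byte count a buffer or window must hold to keep the path full.

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: the minimum window/buffer size (bytes)
    needed to keep a path of the given rate and RTT fully utilized."""
    return int(bandwidth_bps / 8 * rtt_s)

# e.g. a 100 Mbit/s path with an 80 ms RTT needs roughly a 1 MB window:
print(bdp_bytes(100e6, 0.080))  # prints 1000000
```

Comparing this figure against the socket buffer limits (net.ipv4.tcp_rmem/tcp_wmem on Linux) quickly shows whether a transfer is window-limited rather than congestion-limited.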
Tuning differs significantly between datacenters and wide-area networks (WANs): datacenters require low-latency configurations with smaller buffers to minimize queueing in high-speed, short-RTT environments, whereas WANs demand larger BDP-sized buffers and loss-tolerant algorithms to handle long, variable paths without stalling. Best practices emphasize selecting algorithms based on path characteristics and validating choices through testing. CUBIC remains suitable for standard wired networks and consistent-latency scenarios thanks to cubic probing that scales well with high BDPs, while BBR is preferred for long-haul or lossy paths, such as satellite or cellular links, because it models bottleneck bandwidth and RTT to maintain high throughput without relying solely on loss signals. Configurations should be tested using tools such as iperf3 for throughput measurement and flent for comprehensive flow analysis under simulated load. Looking ahead, the integration of QUIC as defined in RFC 9000 is influencing congestion control evolution by introducing user-space congestion control that avoids kernel and middlebox ossification and enables faster algorithm iteration, potentially inspiring hybrid mechanisms for better multiplexing and loss recovery. In 5G networks, specific tuning addresses high mobility and variable radio conditions, for example by adjusting congestion windows dynamically based on RTT variations to mitigate throughput fluctuations and improve fairness among flows in non-standalone deployments.

References

  1. [1]
    RFC 5681 - TCP Congestion Control - IETF Datatracker
    This document defines TCP's four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery.
  2. [2]
    Congestion avoidance and control - ACM Digital Library
    Congestion control involves finding places that violate conservation and fixing them. By 'conservation of packets' I mean that for a connection 'in equilibrium ...
  3. [3]
    RFC 2581 - TCP Congestion Control - IETF Datatracker
    This document defines TCP's four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery.
  4. [4]
    6.2 Queuing Disciplines - Computer Networks: A Systems Approach
    FIFO with tail drop, as the simplest of all queuing algorithms, is the most widely used in Internet routers at the time of writing. This simple approach to ...
  5. [5]
    Bufferbloat: Dark Buffers in the Internet - Communications of the ACM
    Jan 1, 2012 · Bufferbloat is the existence of excessively large and frequently full buffers inside the network, causing unnecessary latency and poor ...
  6. [6]
    Congestion avoidance and control - ACM Digital Library
    In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC ...
  7. [7]
    RFC 2309: Recommendations on Queue Management and ...
    Some mechanisms are needed in the routers to complement the endpoint congestion ... For example, in a router using queue management but only FIFO scheduling ...
  8. [8]
    RFC 2914 - Congestion Control Principles - IETF Datatracker
    The goal of this document is to explain the need for congestion control in the Internet, and to discuss what constitutes correct congestion control.
  9. [9]
    RFC 2581: TCP Congestion Control
    This document defines TCP's four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery.
  10. [10]
    RFC 5681: TCP Congestion Control
    This document defines TCP's four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery.
  11. [11]
    RFC 6928: Increasing TCP's Initial Window
    This document proposes an experiment to increase the permitted TCP initial window (IW) from between 2 and 4 segments, as specified in RFC 3390, to 10 segments.
  13. [13]
    [PDF] Congestion Avoidance and Control - CS 162
    Congestion Avoidance and Control. Van Jacobson*. University of California. Lawrence Berkeley Laboratory. Berkeley, CA 94720 van@helios.ee.lbl.gov. In October of ...
  14. [14]
    RFC 3390: Increasing TCP's Initial Window
    This document specifies an optional standard for TCP to increase the permitted initial window from one or two segment(s) to roughly 4K bytes, replacing RFC ...
  15. [15]
    RFC 3742 - Limited Slow-Start for TCP with Large Congestion ...
    This document describes an optional modification for TCP's slow-start for use with TCP connections with large congestion windows.
  16. [16]
    RFC 2581 - TCP Congestion Control - IETF Datatracker
    This document defines TCP's four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery.
  17. [17]
    RFC 8312 - CUBIC for Fast Long-Distance Networks
CUBIC is an extension to the current TCP standards. It differs from the current TCP standards only in the congestion control algorithm on the sender side.
  18. [18]
    RFC 6582 - The NewReno Modification to TCP's Fast Recovery ...
    This document describes a specific algorithm that conforms with the congestion control requirements of [RFC5681], and so those considerations apply to this ...
  19. [19]
    Binary increase congestion control (BIC) for fast long-distance ...
    This work presents a new congestion control scheme that alleviates RTT unfairness while supporting TCP friendliness and bandwidth scalability.
  20. [20]
    [PDF] CUBIC: A New TCP-Friendly High-Speed TCP Variant ∗
    ABSTRACT. CUBIC is a congestion control protocol for TCP (transmis- sion control protocol) and the current default TCP algo- rithm in Linux.
  21. [21]
    A Survey of Delay-Based and Hybrid TCP Congestion Control ...
    This paper demonstrates the effectiveness of TCP congestion control algorithms within a network operating under the MPT -GRE network layer multipath ...
  22. [22]
    [PDF] On the Effectiveness of Delay-Based Congestion Avoidance
Our objective in this short note is to suggest possible reasons for the weak correlations between delays and losses, and to identify conditions under which ...
  23. [23]
    TCP Vegas: new techniques for congestion detection and avoidance
This paper motivates and describes the three key techniques employed by Vegas, and presents the results of a comprehensive experimental performance study—using ...
  24. [24]
    [PDF] FAST TCP:
Abstract—We describe FAST TCP, a new TCP congestion control algorithm for high-speed long-latency networks, from design to implementation.
  25. [25]
    (PDF) On Hybrid TCP Congestion Control - ResearchGate
    PDF | This paper presents several considerations on the hybrid TCP congestion control, in which loss-based schemes like TCP-Reno and delay-based schemes.
  26. [26]
    TCP westwood | Proceedings of the 7th annual ... - ACM Digital Library
TCP Westwood (TCPW) is a sender-side modification of the TCP congestion window algorithm that improves upon the performance of TCP Reno in wired as well as ...
  27. [27]
    (PDF) Performance evaluation of Westwood+ TCP congestion control.
    In this paper we report experimental results that have been obtained running Linux 2.2.20 implementations of Westwood+, Westwood and Reno TCP to ftp data over ...
  28. [28]
    [PDF] A Compound TCP Approach for High-speed and Long Distance ...
We have implemented CTCP on the Microsoft Windows Platform by modifying the TCP/IP stack. The first challenge is to design a mechanism that can precisely ...
  29. [29]
    draft-cardwell-iccrg-bbr-congestion-control-02 - IETF Datatracker
    This document specifies the BBR congestion control algorithm. BBR ("Bottleneck Bandwidth and Round-trip propagation time") uses recent measurements of a ...Table of Contents · Terminology · Design Overview · Detailed Algorithm
  30. [30]
    TCP Hybla: a TCP enhancement for heterogeneous networks
Aug 31, 2004 · In heterogeneous networks, TCP connections that incorporate a terrestrial or satellite radio link are greatly disadvantaged with respect to ...
  31. [31]
    Google's BBRv3 TCP Congestion Control Showing Great ... - Phoronix
    Aug 7, 2023 · Google's open-source BBR TCP congestion control algorithm is widely used within Google and its v3 iteration is already proving a success within the company.
  32. [32]
    [PDF] Promises and Potential of BBRv3 - PAM 2024
    BBRv2. Google introduced BBRv2 in 2019 to alleviate the problems with. BBRv1 [13,14]. BBRv2 split the ProbeBW phase into four new sub-phases: Down, Cruise ...
  33. [33]
    TCP Congestion Control Performance over Starlink
    We examine the performance of 14 different Linux TCP congestion control (CC) variants over Starlink connectivity. We then focus on the two most commonly used ...
  34. [34]
    TCP Congestion Control Algorithm Using Queueing Theory-Based ...
The results highlight the ability of the proposed mechanism to reduce queueing delay, prevent packet loss, and maximize network utilization.
  35. [35]
    A congestion control algorithm for boosting TCP performance in ...
    This paper proposes MSS-TCP, a novel congestion control algorithm designed for mmWave networks. MSS-TCP dynamically adjusts the congestion window (cwnd) based ...
  36. [36]
    A congestion control algorithm for boosting TCP performance in ...
    MSS-TCP: A congestion control algorithm for boosting TCP performance in mmwave cellular networks. May 2025; ICT Express 11(1). DOI:10.1016/j.icte.2025.05.005.
  38. [38]
    RFC 9743 - Specifying New Congestion Control Algorithms
    Mar 12, 2025 · This document seeks to ensure that proposed congestion control algorithms operate efficiently and without harm when used in the global Internet.
  39. [39]
    [PDF] Congestion-Control Throwdown
Nov 30, 2017 · Black-box approaches, in contrast, do not generate a model of the network, but instead seek good mappings from empirically-observed performance to ...
  40. [40]
    RFC 2582 - The NewReno Modification to TCP's Fast Recovery ...
    This document describes a modification to the Fast Recovery algorithm in Reno TCP that incorporates a response to partial acknowledgements received during Fast ...
  41. [41]
    6.3 TCP Congestion Control — Computer Networks
    In practice, TCP's fast retransmit mechanism can detect up to three dropped packets per window. Finally, there is one last improvement we can make. When the ...
  42. [42]
    When to use and not use BBR - APNIC Blog
    Jan 10, 2020 · There are some cases though, where loss-based TCP algorithms do not work well. For example, in shallow buffers, packet loss might be ...
  43. [43]
    [PDF] When to use and when not to use BBR: An empirical analysis and ...
Oct 21, 2019 · We find that BBR is well suited for networks with shallow buffers, despite its high retransmissions, whereas existing loss-based algorithms ...
  44. [44]
    [PDF] Approaches to Congestion Control in packet networks
congestion. Due to the possibility of wrong estimations and measurements, the network is considered a grey box. The third category ("the box is green") ...
  45. [45]
    [PDF] TCP Vegas: End to End Congestion Avoidance on a Global Internet
    The main result reported in this paper is that Vegas is able to achieve between 37 and 71% better throughput than Reno. Moreover, this improvement in throughput ...
  46. [46]
    [PDF] TCP Vegas: New Techniques for Congestion Detection and Avoidance
This paper motivates and describes the five key techniques employed by Vegas, and presents the results of a comprehensive experimental performance study—using ...
  47. [47]
    Performance evaluation of Westwood+ TCP congestion control
In this paper we have tested the behavior of the TCP Westwood+ algorithm using ... Mascolo, Westwood TCP and easy RED to improve fairness in high-speed ...
  48. [48]
    TCP Westwood: congestion window control using bandwidth ...
    Abstract: We study the performance of TCP Westwood (TCPW), a new TCP protocol with a sender-side modification of the window congestion control scheme.
  49. [49]
    [PDF] Deployment Considerations for the TCP Vegas Congestion Control ...
    These include a new timeout mechanism, a novel approach to congestion avoidance that avoids packet loss, and a modified slow start algorithm. TCP Vegas ...
  50. [50]
    (PDF) Approaches to Congestion Control in Packet Networks
    Due to the possibility of wrong estimations and measurements, the network is considered a grey box. ... [11]. The congestion control in the traditional TCP ...
  51. [51]
    (PDF) A Comprehensive Overview of TCP Congestion Control in 5G ...
Jun 28, 2021 · This paper provides an overview of the most popular single-flow and multi-flow TCP CC algorithms used in pre-5G networks.
  52. [52]
    IP sysctl - The Linux Kernel Archives
    tcp_congestion_control - STRING Set the congestion control algorithm to be used for new connections. The algorithm "reno" is always available, but ...
  53. [53]
    Linux_2_6_13 - Linux Kernel Newbies
Summary of the changes and new features merged in the Linux Kernel during the 2.6.13 development. ... TCP congestion control to use on a per socket basis. (commit) ...
  54. [54]
    Linux_2_6_19 - Linux Kernel Newbies
Linux does support pluggable and runtime switchable TCP congestion algorithms since 2.6.13. 2.6.19 changes the default congestion algorithm from BIC-TCP to ...
  55. [55]
    Linux_4.9 - Linux Kernel Newbies
    Dec 11, 2016 · This release adds another TCP congestion control algorithm: BBR (Bottleneck Bandwidth and RTT). The Internet has predominantly used loss-based ...
  56. [56]
    IP Sysctl - The Linux Kernel documentation
    Show/set the congestion control choices available to non-privileged processes. The list is a subset of those listed in tcp_available_congestion_control. Default ...
  57. [57]
    tcp(7) - Linux manual page - man7.org
    /proc interfaces System-wide TCP parameter settings can be accessed by files in the directory /proc/sys/net/ipv4/. In addition, most IP /proc interfaces ...
  58. [58]
    Inspecting Internal TCP State on Linux - Jane Street Blog
    Jul 9, 2014 · Use the `ss` utility with `crash` to inspect internal TCP state on Linux, revealing details like retransmission timeout and congestion control. ...