
Transport layer

The transport layer is the fourth layer in the Open Systems Interconnection (OSI) model, responsible for providing end-to-end communication services between applications on different hosts, including data segmentation, reliable delivery, flow control, and error detection or recovery. It operates above the network layer to ensure that data transferred from the sending application is delivered completely and accurately to the receiving application, abstracting the complexities of the underlying network. In the TCP/IP protocol suite, this layer—often called the host-to-host transport layer—handles the transfer of data segments between end systems, using port numbers to multiplex and demultiplex traffic for specific processes. Key protocols at the transport layer include the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), which offer contrasting service models to meet diverse application needs. TCP establishes reliable, connection-oriented communication through mechanisms such as sequence numbering, acknowledgments, retransmissions, and congestion control, ensuring ordered and error-free delivery of data streams across potentially unreliable networks. In contrast, UDP delivers a lightweight, connectionless service that prioritizes low latency and simplicity, without built-in reliability, ordering, or flow control, making it suitable for applications like streaming where occasional packet loss is tolerable. The transport layer's functions extend to managing packet-size variations across networks, including segmentation and reassembly of messages into smaller units (segments or datagrams) for efficient transmission, as well as addressing issues like congestion and buffer overflows through adaptive controls. In the OSI framework, it supports both connection-mode services (via classes of protocols defined in standards like ISO/IEC 8073) for guaranteed delivery and connectionless modes for best-effort transfer, enabling interoperability in heterogeneous environments.
These capabilities make the transport layer essential for bridging application requirements with the unreliable realities of lower-layer networking, influencing everything from web browsing to video conferencing.

Fundamentals

Role in Network Models

The transport layer serves as layer 4 in the Open Systems Interconnection (OSI) reference model, where it is tasked with enabling process-to-process delivery of data across networks by providing end-to-end communication services between applications on distinct hosts. This layer abstracts the underlying network infrastructure to ensure that data are properly segmented and reassembled (with sequencing and error recovery in connection-oriented modes) for the correct destination processes. In the TCP/IP model, the transport layer functions equivalently as the host-to-host layer, positioned between the application layer above and the internet layer below, to facilitate logical connections and data transfer without regard to the specific physical paths taken. The foundational concepts of the transport layer trace back to the ARPANET in the early 1970s, when the Network Control Protocol (NCP) was developed as the initial host-to-host protocol to manage communication between connected systems. As networking demands grew for interconnecting diverse networks, this evolved into the TCP/IP protocol suite, with a critical milestone occurring on January 1, 1983—known as "flag day"—when the ARPANET fully transitioned from NCP to TCP/IP, establishing the transport layer's role in supporting scalable, inter-network communication. In contrast to lower layers, the transport layer operates at a higher abstraction level than the network layer, which is responsible for host-to-host routing using logical addresses like IP, while the data link and physical layers handle frame transmission and bit-level signaling over media. It distinguishes itself by employing port numbers as identifiers for specific processes, thereby enabling multiplexing of multiple application streams over a single network connection. The layers above it—the session, presentation, and application layers in OSI, or the consolidated application layer in TCP/IP—focus on managing dialog control, data syntax, and user-facing services such as file transfer or email interfaces.
Illustrations of the OSI model often depict the transport layer as the central bridge in a seven-layer stack, encapsulating application data into transport-layer segments that are passed downward to the network layer for routing, emphasizing its role in isolating end-system concerns from network internals. Similarly, TCP/IP model diagrams position the transport layer prominently between the top-level application protocols and the internet layer, highlighting its function in providing uniform end-to-end services atop variable underlying networks.

Key Services and Functions

The transport layer provides essential end-to-end communication services to applications, ensuring data is transferred reliably or unreliably between hosts across networks. Core services include end-to-end delivery, where the layer manages the transfer of data units between application endpoints, abstracting the underlying network's hop-by-hop forwarding. Segmentation and reassembly allow large application messages to be broken into smaller segments suitable for the network layer and then reconstructed at the destination, preserving message boundaries where required. Service access points, typically implemented as ports, enable multiplexing and demultiplexing of multiple application streams over a single connection, allowing distinct services to share the same host. These services are offered in two primary modes: connection-oriented, which establishes a logical connection for ongoing communication, and connectionless, which sends independent datagrams without prior setup. In the OSI model, connection-oriented services are specified in five classes (TP0 to TP4) under ISO/IEC 8073, providing escalating levels of error detection, recovery, and flow control—from a basic service (TP0) to full reliability on unreliable networks (TP4)—while connectionless services are handled by the Connectionless Transport Protocol (CLTP) per ISO/IEC 8602. Reliability options at the transport layer vary by service type but generally include mechanisms for guaranteed delivery, where lost data is retransmitted; ordered delivery, ensuring segments arrive in the sequence they were sent; and duplicate detection, to discard redundant packets. These features provide applications with configurable levels of assurance, from best-effort to fully reliable transport, without mandating them for all communications. Quality of Service (QoS) aspects supported by the transport layer encompass resource allocation through flow control to prevent receiver overload and congestion control to minimize delays for time-sensitive applications.
These capabilities allow applications to express preferences for throughput, low latency, or ordered delivery, influencing protocol selection and behavior at the transport level. Transport layer services evolved from early reliable, connection-oriented protocols like NCP in the 1970s and TCP (standardized in 1981 per RFC 793), to include connectionless options like UDP (standardized in 1980 per RFC 768), supporting diverse application needs through IETF efforts in the late 1970s and 1980s.
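The segmentation-and-reassembly service described above can be sketched in a few lines of Python. This is a minimal illustration, not any protocol's actual implementation; the segment size is chosen arbitrarily to stand in for an MSS.

```python
# Sketch: segmentation and reassembly of an application message into
# fixed-size transport segments (segment size is an arbitrary stand-in
# for a negotiated MSS).
def segment(message: bytes, mss: int):
    """Split a message into chunks of at most mss bytes."""
    return [message[i:i + mss] for i in range(0, len(message), mss)]

def reassemble(segments):
    """Reconstruct the original message from in-order segments."""
    return b"".join(segments)

msg = b"transport layer payload"
segs = segment(msg, mss=8)
print(len(segs))                 # 3
print(reassemble(segs) == msg)   # True
```

Real transport protocols additionally tag each segment with sequencing information so the receiver can reorder and detect gaps; this sketch assumes in-order arrival.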

Mechanisms

Addressing and Multiplexing

In the transport layer, addressing enables the identification of specific processes or applications on networked hosts, extending the network layer's host-to-host delivery to process-to-process communication. This is achieved primarily through 16-bit port numbers, which range from 0 to 65535 and serve as logical endpoints for data transmission. The Internet Assigned Numbers Authority (IANA) manages these assignments to ensure standardized use across protocols like TCP and UDP. Port numbers are categorized into three main ranges to balance global coordination with local flexibility. Well-known ports (0–1023) are reserved for standard services and require IETF Review or IESG Approval for assignment; for example, port 80 is designated for HTTP, allowing web servers to listen consistently on this endpoint. Registered ports (1024–49151), also known as user ports, are assigned via IETF Review, IESG Approval, or Designated Expert Review for specific applications that need broader recognition but not universal reservation. Ephemeral ports (49152–65535), or dynamic ports, are unassigned and used temporarily by client applications for outgoing connections, ensuring they do not conflict with established services. This distinction prevents port exhaustion and supports secure, predictable communication, with servers typically binding to well-known or registered ports while clients select ephemeral ones. Multiplexing at the transport layer allows multiple application-layer streams from a single host to share a common network path by tagging data segments or datagrams with source and destination port numbers. For instance, in TCP, the protocol combines outgoing data from various processes into a single flow, using ports to differentiate streams; similarly, UDP employs ports to enable connectionless delivery without session overhead.
At the receiver, demultiplexing reverses this process: the transport layer examines the destination port in incoming segments or datagrams to route data to the appropriate local process, ensuring isolation and efficiency even when multiple applications operate concurrently. The socket abstraction encapsulates addressing by pairing a network-layer address (typically an IP address) with a port number, forming an endpoint identifier (IP:port). In TCP, a full connection is defined by the quadruple (local IP, local port, remote IP, remote port), providing a unique reference for end-to-end communication while abstracting the underlying protocol details for applications. This model, originating from early TCP specifications, facilitates process identification without requiring applications to manage low-level addressing directly.
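The socket quadruple can be observed directly with two TCP sockets over the loopback interface. This is a minimal sketch using Python's standard socket API; binding to port 0 asks the operating system to pick an ephemeral port, as described above.

```python
import socket

# Sketch: observe the (local IP, local port, remote IP, remote port)
# quadruple using two TCP sockets over the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))    # port 0: let the OS choose an ephemeral port
server.listen(1)
server_addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server_addr)       # the client also receives an ephemeral port
conn, client_addr = server.accept()

# Both endpoints describe the same connection from opposite directions:
quad_client = client.getsockname() + client.getpeername()
quad_server = conn.getpeername() + conn.getsockname()
print(quad_client == quad_server)   # True

conn.close(); client.close(); server.close()
```

The same quadruple viewed from either side uniquely identifies this connection, which is how the kernel demultiplexes incoming segments to the right process.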

Connection Management

Connection management in the transport layer encompasses the procedures for initiating, sustaining, and concluding communication sessions between endpoints, distinguishing between stateful connection-oriented approaches and stateless connectionless alternatives. Stateful services maintain state about ongoing connections, enabling reliable sequencing and coordination, while stateless services treat each packet independently without session tracking. This dichotomy allows transport protocols to support diverse application needs, from reliable data streams to lightweight, connectionless messaging. In connection-oriented services, establishment begins with a three-way handshake to mutually agree on connection parameters and verify endpoint readiness. The initiating endpoint sends a synchronization request, the responding endpoint replies with an acknowledgment plus its own synchronization, and the initiator confirms with a final acknowledgment, ensuring both sides synchronize without ambiguity. This mechanism mitigates risks like half-open connections, where one endpoint assumes the link is active but the other has not yet committed, potentially leading to resource waste or security vulnerabilities. Once established, connections transition through abstract states to manage their lifecycle, including a listening state for incoming requests, a sent-synchronization state during setup, an established state for active communication, and a time-wait state post-closure to absorb delayed packets. These states provide a framework for tracking connection status, facilitating orderly progression without delving into transfer details. Termination in connection-oriented services employs a four-way handshake for graceful closure: one endpoint signals intent to finish by sending a finish flag, the peer acknowledges and optionally sends its own finish, followed by a mutual acknowledgment to release resources symmetrically. For immediate disruption, an abrupt reset signal can forcibly end the connection, bypassing the full handshake when anomalies like errors or attacks are detected.
Conversely, connectionless services bypass all setup and teardown, enabling direct transmission to the target host without maintaining any state, which simplifies implementation but shifts reliability burdens to higher layers. Ports, as process identifiers, facilitate addressing in both paradigms, including during handshakes for connection-oriented flows.
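The lifecycle described above can be modeled as a small state machine. The following sketch covers only a simplified subset of the full connection-state diagram; the state and event names are illustrative labels, not an exhaustive rendering of any standard's transition table.

```python
from enum import Enum, auto

class State(Enum):
    CLOSED = auto(); LISTEN = auto(); SYN_SENT = auto()
    SYN_RECEIVED = auto(); ESTABLISHED = auto()
    FIN_WAIT = auto(); TIME_WAIT = auto()

# (current state, event) -> next state; a simplified subset of the diagram.
TRANSITIONS = {
    (State.CLOSED, "passive_open"): State.LISTEN,
    (State.CLOSED, "active_open/send_syn"): State.SYN_SENT,
    (State.LISTEN, "recv_syn/send_syn_ack"): State.SYN_RECEIVED,
    (State.SYN_SENT, "recv_syn_ack/send_ack"): State.ESTABLISHED,
    (State.SYN_RECEIVED, "recv_ack"): State.ESTABLISHED,
    (State.ESTABLISHED, "close/send_fin"): State.FIN_WAIT,
    (State.FIN_WAIT, "recv_fin_ack"): State.TIME_WAIT,
    (State.TIME_WAIT, "timeout"): State.CLOSED,
}

def run(start: State, events) -> State:
    """Drive the machine through a sequence of events."""
    state = start
    for ev in events:
        state = TRANSITIONS[(state, ev)]
    return state

# Client side of a full open/close cycle:
final = run(State.CLOSED, ["active_open/send_syn", "recv_syn_ack/send_ack",
                           "close/send_fin", "recv_fin_ack", "timeout"])
print(final)   # State.CLOSED
```

An invalid event for the current state raises a KeyError here, loosely mirroring how a real implementation rejects out-of-sequence control segments (for example, with a reset).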

Error and Flow Control

The transport layer ensures data integrity and efficient delivery by implementing error detection and recovery mechanisms, alongside flow control to manage data rates between sender and receiver. Error detection primarily relies on checksums appended to transport segments, which allow the receiver to verify the integrity of received data. A widely used approach is the 16-bit Internet checksum, computed over the segment header and payload to detect errors such as bit flips. This checksum is generated before transmission and recalculated upon receipt; a mismatch indicates corruption, prompting the receiver to discard the segment. The computation of the checksum involves treating the data as a sequence of 16-bit words and performing a one's complement sum. Specifically, the sender adds all 16-bit words (with end-around carry for any overflow), then takes the one's complement of the result to obtain the checksum value, which is inserted into the header (initially set to zero during computation). The receiver performs the same summation, including the received checksum, and verifies if the result equals all ones (0xFFFF in one's complement arithmetic), confirming no errors. This method, while efficient, detects most single- and multi-bit errors but may miss certain patterns, such as even numbers of bit errors in specific positions. For error recovery, the transport layer employs retransmission strategies based on acknowledgments (ACKs). Positive ACKs confirm successful receipt of segments, while the absence of an expected ACK (negative acknowledgment or timeout) signals loss or corruption, triggering the sender to retransmit the unacknowledged data. This go-back-N or selective repeat approach ensures reliable delivery without higher-layer intervention, with timers set based on estimated round-trip times to detect timeouts. Such mechanisms recover from errors detected via checksum failure or sequence gaps in received segments. Flow control prevents the sender from overwhelming the receiver's capacity through the sliding window mechanism. The receiver advertises its available buffer space (advertised window) in acknowledgments, allowing the sender to transmit up to that window size in outstanding bytes without further acknowledgment.
The effective window size is the minimum of the sender's allowable window and the receiver's advertised window, enabling continuous data flow while adapting to receiver constraints. The sender maintains a sequence number space where the window slides forward upon ACK receipt; the highest sequence number the sender may transmit is bounded by \text{NextSeq} = \text{Seq}_{\text{unacked}} + \text{WindowSize}, where \text{Seq}_{\text{unacked}} is the oldest unacknowledged byte. This advances the send window, permitting new segments to be sent as older ones are acknowledged. Basic congestion avoidance complements flow control by interpreting network signals to avoid overload. Implicit signals, such as packet loss detected through missing ACKs or timeouts, indicate potential congestion, prompting the sender to reduce its transmission rate and halve the congestion window to probe for available capacity conservatively. This reactive approach distinguishes transport-layer congestion control from pure flow control, focusing on end-to-end stability without explicit router feedback.
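The one's complement checksum described above can be implemented in a few lines. This sketch follows the word-summing procedure of RFC 1071; the example words are the ones used in that RFC's worked example.

```python
def internet_checksum(data: bytes) -> int:
    """One's complement sum of 16-bit words, per RFC 1071."""
    if len(data) % 2:
        data += b"\x00"            # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return ~total & 0xFFFF          # one's complement of the sum

# Sender inserts the checksum; receiver re-sums everything including it.
segment = bytes.fromhex("0001f203f4f5f6f7")   # RFC 1071 example words
csum = internet_checksum(segment)
print(hex(csum))                               # 0x220d
verify = internet_checksum(segment + csum.to_bytes(2, "big"))
print(verify == 0)  # True: a re-sum of all ones complements to zero
```

At the receiver, summing the data together with the transmitted checksum yields 0xFFFF when no errors occurred, so the complement of that re-sum is zero.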

Protocols

Transmission Control Protocol (TCP)

The Transmission Control Protocol (TCP) is a connection-oriented transport protocol that provides reliable, ordered, and error-checked delivery of a byte stream between applications over an IP network. Originally specified in RFC 793 in September 1981, edited by Jon Postel, TCP has evolved as a core component of the Internet protocol suite, with the current standard defined in RFC 9293, published in August 2022, which obsoletes earlier versions and incorporates clarifications and updates from subsequent RFCs. TCP enables full-duplex communication, allowing simultaneous data transfer in both directions over a single connection, and treats application data as a continuous stream of bytes rather than discrete messages, abstracting the underlying packet-based network. TCP's design emphasizes reliability through mechanisms like acknowledgments, retransmissions, and flow control, making it suitable for applications where data integrity is paramount, such as web services and email. It operates above the network layer, using port numbers to multiplex multiple connections between hosts. While UDP serves as a lightweight, connectionless alternative for time-sensitive applications, TCP's stateful approach ensures delivery despite network variability. The TCP header consists of a fixed 20-byte portion followed by optional fields, totaling at least 20 bytes and up to 60 bytes. Key fields include 16-bit source and destination port numbers for endpoint identification; a 32-bit sequence number to track the position of each byte in the stream; a 32-bit acknowledgment number indicating the next expected byte from the peer; a 4-bit data offset specifying header length; 9 control flags (URG for urgent data, ACK for acknowledgment, PSH to push data to the application, RST to reset the connection, SYN to synchronize sequence numbers, FIN to finish the connection, along with reserved bits); a 16-bit window size for flow control; a 16-bit checksum covering the header, payload, and a pseudo-header from the IP layer; and a 16-bit urgent pointer for urgent data. Options, padded to 32-bit boundaries, can include the maximum segment size (MSS) to negotiate the largest segment size, typically set during connection setup to avoid fragmentation.
A segment comprises the header prefixed to zero or more bytes of payload, with the total length determined by the IP layer. Sequence numbers are 32-bit unsigned integers that wrap around after reaching 2^32 - 1 (approximately 4.29 billion), enabling continuous byte-stream numbering across the connection lifetime; implementations must handle wraparound correctly to avoid ambiguity in acknowledgments. During transmission, the sender assigns sequence numbers to each byte, and the receiver acknowledges cumulative receipt up to a certain point, triggering retransmission of unacknowledged segments if needed. Connection establishment employs a three-way handshake to synchronize sequence numbers and confirm bidirectional reachability. The client initiates by sending a segment with the SYN flag set and an initial sequence number (ISN, chosen randomly for security); the server responds with a SYN-ACK segment, setting its own SYN flag, acknowledging the client's ISN (acknowledgment number = client's ISN + 1), and providing its own ISN; the client completes the handshake with an ACK acknowledging the server's ISN (acknowledgment number = server's ISN + 1), after which data transmission can begin. This process ensures both endpoints agree on initial parameters and prevents issues like old duplicate segments from prior connections. Connection termination uses a similar four-way process involving FIN flags and acknowledgments from both sides to gracefully close the stream. Reliability relies on positive acknowledgments and retransmission timers for lost segments. Upon timeout, TCP retransmits unacknowledged data using an adaptive retransmission timeout (RTO) computed from round-trip time (RTT) measurements, with an initial RTO of 1 second before any RTT samples; subsequent RTO values are at least 1 second and incorporate smoothed RTT variance. On each retransmission timer expiration, the RTO doubles via exponential backoff to probe for recovery, up to a maximum of at least 60 seconds, after which the connection may be considered failed; a new RTT measurement can reset the RTO to the computed value.
For SYN segments, if the retransmission timer expires while awaiting a SYN-ACK and the RTO in use is less than 3 seconds, the RTO is re-initialized to 3 seconds when data transmission begins. To address limitations in high-speed and long-delay networks, TCP incorporates extensions such as window scaling and timestamp-based round-trip time measurement (RTTM), defined in RFC 7323 (updating RFC 1323 from 1992), and selective acknowledgments (SACK), defined in RFC 2018. Window scaling expands the effective range of the 16-bit window field via a shift factor negotiated in options, supporting receive windows up to 1 GiB; SACK permits receivers to acknowledge non-contiguous byte ranges, reducing unnecessary retransmissions; and RTTM uses timestamps for accurate RTO estimation. These enhancements improve throughput over paths with bandwidth-delay products exceeding 64 KB, where the base window size would otherwise limit performance. TCP also integrates congestion control algorithms, such as slow start and congestion avoidance, to dynamically adjust sending rates based on network feedback, preventing overload (detailed further in performance analysis sections). TCP underpins many Internet applications requiring assured delivery, including web protocols like HTTP and HTTPS for browsing and secure transactions, and email protocols such as SMTP for sending and IMAP or POP3 for retrieval. Despite its robustness, TCP's strict ordering introduces head-of-line blocking, where a single lost or delayed segment holds up delivery of all subsequent in-order data, even if later segments arrive out of order, potentially degrading performance for latency-sensitive flows.
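The adaptive RTO computation sketched above can be expressed directly from the RFC 6298 update rules. This is a simplified model: the clock-granularity term is treated as zero, and the class name and method names are illustrative, not from any real TCP stack.

```python
# Sketch of the RFC 6298 retransmission-timeout estimator.
# ALPHA and BETA are the standard smoothing constants; K scales the variance.
ALPHA, BETA, K = 1 / 8, 1 / 4, 4

class RtoEstimator:
    def __init__(self):
        self.srtt = None     # smoothed round-trip time
        self.rttvar = None   # round-trip time variation
        self.rto = 1.0       # initial RTO: 1 second, before any samples

    def on_rtt_sample(self, r: float) -> float:
        """Update SRTT/RTTVAR from a new RTT measurement r (seconds)."""
        if self.srtt is None:                  # first measurement
            self.srtt, self.rttvar = r, r / 2
        else:
            self.rttvar = (1 - BETA) * self.rttvar + BETA * abs(self.srtt - r)
            self.srtt = (1 - ALPHA) * self.srtt + ALPHA * r
        self.rto = max(1.0, self.srtt + K * self.rttvar)   # 1-second floor
        return self.rto

    def on_timeout(self) -> float:
        """Exponential backoff on each expiration, capped at 60 seconds."""
        self.rto = min(self.rto * 2, 60.0)
        return self.rto

est = RtoEstimator()
est.on_rtt_sample(0.100)   # SRTT=0.1, RTTVAR=0.05 -> max(1.0, 0.3) = 1.0
print(est.rto)             # 1.0
print(est.on_timeout())    # 2.0
```

The 1-second floor explains why short-RTT paths still use a conservative timeout; real implementations also apply Karn's algorithm, excluded here, to avoid sampling RTT from retransmitted segments.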

User Datagram Protocol (UDP)

The User Datagram Protocol (UDP) is a connectionless transport layer protocol that provides a simple, unreliable mechanism for exchanging datagrams between applications over IP networks. Defined in RFC 768 in August 1980, UDP emphasizes minimal overhead by avoiding connection establishment, handshakes, acknowledgments, or retransmissions, making it suitable for environments where low latency is prioritized over guaranteed delivery. The UDP header consists of a fixed 8-byte structure: a 16-bit source port number to identify the sending application, a 16-bit destination port number for the receiving application, a 16-bit length field specifying the total length of the datagram (header plus data) in bytes, and a 16-bit checksum for basic error detection, which is optional over IPv4 but mandatory over IPv6. The checksum computation includes the UDP header, the data payload, and a pseudo-header from the IP layer to verify against transmission errors. UDP operates by allowing applications to directly encapsulate data into datagrams and pass them to the IP layer for transmission, with receiving applications demultiplexing based on port numbers; there is no sequencing, ordering, flow control, or error recovery provided by the protocol itself. This datagram-oriented approach means that each packet is treated independently, potentially leading to loss, duplication, or reordering depending on network conditions, with applications responsible for implementing any required reliability measures. Common use cases for UDP include Domain Name System (DNS) queries, where quick resolution outweighs the need for retransmission; real-time streaming media, such as video broadcasts, which tolerate minor losses to maintain playback smoothness; and Voice over IP (VoIP), where low delay supports conversational flow despite occasional artifacts from dropped packets. UDP's native support for IP multicast enables efficient one-to-many delivery, as seen in group communications or content distribution scenarios.
Key limitations of UDP stem from its design simplicity: it provides no congestion control, risking network overload from unchecked transmission rates, and offers no inherent safeguards against loss, reordering, or duplication, requiring applications to handle such issues if needed. For scenarios demanding reliable delivery, protocols like TCP are employed instead.
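The fixed 8-byte header layout described above can be packed with Python's struct module. This is a minimal sketch: the port values are illustrative, and the checksum is left at zero, which over IPv4 means "no checksum computed".

```python
import struct

# Sketch: build the fixed 8-byte UDP header (source port, destination
# port, length, checksum); field order and widths follow RFC 768.
def udp_header(src_port: int, dst_port: int, payload: bytes,
               checksum: int = 0) -> bytes:
    length = 8 + len(payload)            # header (8 bytes) + data
    # "!HHHH": four big-endian (network byte order) 16-bit fields.
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

payload = b"hello"
hdr = udp_header(53000, 53, payload)     # e.g., a client querying DNS
src, dst, length, csum = struct.unpack("!HHHH", hdr)
print(src, dst, length, csum)            # 53000 53 13 0
```

In practice the kernel fills these fields; the example only makes the wire layout concrete, including the length field counting both header and data.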

Stream Control Transmission Protocol (SCTP)

The Stream Control Transmission Protocol (SCTP) is a reliable, message-oriented transport layer protocol standardized in RFC 9260, published in October 2022, which obsoletes RFC 4960 (September 2007); the latter in turn obsoleted earlier specifications RFC 2960 and RFC 3309. Designed primarily to transport public switched telephone network (PSTN) signaling messages, such as Signaling System No. 7 (SS7), over IP networks, SCTP combines the reliability and ordered delivery of TCP with the message-based (datagram-like) data transfer of UDP. It supports end-to-end communication between two endpoints, providing services like error-free non-duplicated transfer of user messages, congestion avoidance, and resistance to blind denial-of-service attacks through cryptographic cookies. SCTP packets consist of a common header followed by one or more chunks, enabling flexible bundling of control and data information. The common header is 12 bytes long and includes a 16-bit source port, 16-bit destination port, 32-bit verification tag for association identification, and a 32-bit checksum (using CRC32c for error detection). Chunks are variable-length units that carry specific functions; for example, the DATA chunk transports user messages with a Transmission Sequence Number (TSN) for ordering and acknowledgment, the SACK (Selective Acknowledgment) chunk provides feedback on received data to enable selective retransmissions, and the HEARTBEAT chunk monitors path liveness by eliciting responses from the peer. This chunk-based structure allows SCTP to handle multiple types of information within a single packet, improving efficiency over byte-stream protocols. SCTP establishes associations using a four-way initial handshake to mitigate security risks like the SYN flooding attacks seen in TCP. The process begins with an INIT chunk from the initiator, listing supported parameters like the number of inbound and outbound streams; the responder replies with an INIT ACK containing a state cookie; the initiator echoes the cookie in a COOKIE ECHO chunk, potentially bundling data; and the responder completes the handshake with a COOKIE ACK, also allowing data transmission.
Multi-homing enables an SCTP endpoint to be represented by multiple IP addresses, allowing traffic to fail over transparently across paths without interrupting the association, which enhances reliability in scenarios with redundant network interfaces. Multi-streaming supports independent sequencing of messages within multiple streams per association, using per-stream sequence numbers to prevent head-of-line (HOL) blocking, where a lost message in one stream delays others. Key features include an extension for partial reliability (PR-SCTP), defined in RFC 3758 (May 2004), which allows the sender to discard unacknowledged messages based on policies like timed reliability, reducing overhead for non-critical data while maintaining full reliability for others. SCTP's congestion control mechanisms are analogous to TCP's, employing slow-start, congestion avoidance, fast retransmit, and fast recovery algorithms, with adjustments for multi-homing to monitor and adapt per path. SCTP finds primary use in telephony signaling, such as transporting SS7 messages over IP in SIGTRAN architectures to support traditional circuit-switched networks migrating to packet-based systems. It is also employed in WebRTC for data channels, providing reliable, ordered, or unreliable message delivery over DTLS-secured tunnels, as specified in RFC 8831 (January 2021), with advantages over TCP including native multi-streaming to avoid HOL blocking and potential for multi-path operation in future extensions. These capabilities make SCTP particularly suitable for applications requiring robust failover and message integrity without the limitations of byte-stream semantics.
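The 12-byte common header layout described above can likewise be made concrete with struct packing. The port and tag values below are illustrative, and the checksum is left at zero rather than computed with CRC32c.

```python
import struct

# Sketch: pack and parse the 12-byte SCTP common header:
# 16-bit source port, 16-bit destination port,
# 32-bit verification tag, 32-bit checksum (zeroed here).
def sctp_common_header(src: int, dst: int, vtag: int,
                       checksum: int = 0) -> bytes:
    # "!HHII": big-endian 16/16/32/32-bit fields.
    return struct.pack("!HHII", src, dst, vtag, checksum)

hdr = sctp_common_header(5060, 2905, 0xDEADBEEF)
print(len(hdr))                           # 12
src, dst, vtag, csum = struct.unpack("!HHII", hdr)
print(src, dst, hex(vtag))                # 5060 2905 0xdeadbeef
```

Chunks would follow this header on the wire, each with its own type, flags, and length fields; the verification tag is what ties a packet to an established association.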

Comparisons

Internet vs OSI Protocols

The Open Systems Interconnection (OSI) model's transport layer specifies five classes of connection-mode transport protocols as defined in ISO/IEC 8073, ranging from TP0 to TP4, each tailored to different qualities of network service. TP0 offers the simplest connection-oriented service without error or flow control, suitable for reliable networks. TP1 extends TP0 with basic error recovery mechanisms for networks prone to occasional failures. TP2 focuses on multiplexing multiple transport connections over a single network connection for reliable, connection-oriented network services. TP3 combines the error recovery of TP1 with the multiplexing of TP2. TP4 provides the most comprehensive features, including full error detection and recovery, flow control, and expedited data transfer, making it ideal for unreliable or variable networks and closely resembling the capabilities of modern connection-oriented protocols. In contrast, the Internet transport layer, part of the TCP/IP protocol suite, relies on a smaller set of transport protocols: the Transmission Control Protocol (TCP) for reliable, connection-oriented data delivery; the User Datagram Protocol (UDP) for lightweight, connectionless transmission; and the Stream Control Transmission Protocol (SCTP) for reliable messaging with support for multi-streaming and multi-homing. Unlike the OSI's rigid class structure, these Internet protocols emphasize flexibility, allowing adaptation through extensions and implementations without strict categorization by network assumptions. For instance, TCP incorporates elements akin to TP4's reliability features but integrates them into a single protocol, while UDP provides minimal overhead similar to TP0 but operates connectionlessly, and SCTP introduces multi-homing not natively present in OSI classes.
A core philosophical difference lies in the OSI model's emphasis on layered purity and standardized classes to ensure interoperability across diverse network types, including support for expedited data in TP4, whereas the TCP/IP suite prioritizes pragmatic simplicity and incremental evolution to support rapid deployment in heterogeneous environments. The OSI approach aimed for comprehensive specification to accommodate both connection-oriented and connectionless services uniformly, but this led to complexity in implementation. Internet protocols, by design, favor minimalism—such as TCP's avoidance of OSI-style expedited data in favor of congestion-aware flow control—to enable widespread adoption without mandating specific network reliabilities. Historically, the OSI model emerged from international standardization efforts led by the International Organization for Standardization (ISO) in the early 1980s, with ISO/IEC 8073 published in 1988, drawing partial influence from earlier protocols but establishing a theoretical framework for open networking. In parallel, TCP/IP protocols originated from U.S. Department of Defense-funded ARPANET research in the 1970s, with TCP specified in 1981 and UDP in 1980, gaining momentum through academic and military networks. By the late 1980s and early 1990s, TCP/IP dominated due to its earlier deployment, lower complexity, and path dependency in existing infrastructure, effectively winning the "protocol wars" against OSI despite the latter's government-backed promotion in Europe. The OSI model influenced TCP/IP design—such as in reliability concepts—but the Internet protocols' practical advantages led to their global prevalence post-1990. Today, OSI transport protocols like TP4 persist only in niche legacy applications, such as certain industrial control systems or X.25-based public data networks from the 1980s, where they provide reliable transport over older packet-switched infrastructures.
However, these are increasingly obsolete, with minimal modern deployment due to the dominance of TCP/IP protocols, which underpin the vast majority of Internet traffic and have no need for OSI's class-based rigidity in contemporary IP-centric networks.

Protocol Feature Matrix

The Protocol Feature Matrix provides a concise comparison of key features across major transport layer protocols, highlighting their design choices for reliability, performance, and functionality. This matrix focuses on core attributes such as connection orientation, data delivery guarantees, and resource management mechanisms, drawing from their defining specifications. It includes established protocols like TCP and UDP, as well as more specialized ones like SCTP, DCCP, and QUIC, which integrates security and multiplexing over UDP.
| Protocol | Connection-Oriented | Reliable Delivery | Ordering | Flow Control | Congestion Control | Multi-Homing | Header Size (bytes) |
|---|---|---|---|---|---|---|---|
| TCP | Yes (three-way handshake) | Yes (acknowledgments and retransmissions) | Yes (sequence numbers) | Yes (sliding window) | Yes (e.g., Reno, CUBIC) | No | 20 (minimum; up to 60 with options) |
| UDP | No | No | No | No | No | No | 8 (fixed) |
| SCTP | Yes (four-way handshake) | Yes (acknowledgments and retransmissions) | Yes (per stream) | Yes (per association and stream) | Yes (similar to TCP) | Yes (multiple addresses per endpoint) | 12 (common header; variable with chunks) |
| DCCP | Yes (handshake) | No | No | No | Yes (pluggable CCIDs, e.g., CCID 2 for TCP-like) | No | 12 (minimum; variable with options) |
| QUIC | Yes (integrated with TLS) | Yes (acknowledgments and retransmissions) | Yes (per stream) | Yes (per stream and connection) | Yes (e.g., NewReno-based) | Yes (connection migration) | Variable (typically 20–50; includes crypto overhead) |
DCCP, defined in RFC 4340, offers unreliable delivery with mandatory congestion control, making it suitable for applications like streaming media that require timeliness over perfection. QUIC, standardized in RFC 9000 (2021), builds on UDP to provide reliable, secure transport with built-in encryption and stream multiplexing, addressing limitations in TCP for modern web applications.

Analysis

Performance and Congestion Issues

The transport layer faces significant challenges in maintaining performance due to congestion, where network resources become overwhelmed, leading to packet loss and increased delays. Congestion detection at the transport layer primarily relies on local signals observed by the sender, such as packet loss indicated by duplicate acknowledgments or timeouts, and global signals inferred from round-trip time (RTT) increases. These mechanisms allow protocols like TCP to infer network overload without explicit feedback from intermediate routers, though distinguishing between congestion-induced loss and other causes remains imprecise. Early TCP congestion control algorithms, such as TCP Tahoe, introduced foundational mechanisms to mitigate these issues. Tahoe employs slow start, where the congestion window (cwnd) begins at one maximum segment size (MSS) and doubles every RTT until cwnd exceeds the slow start threshold (ssthresh) or a loss is detected via timeout. Upon loss, ssthresh ← cwnd/2, cwnd ← 1 MSS, and slow start restarts. Congestion avoidance is entered when cwnd > ssthresh. In congestion avoidance, Tahoe uses additive increase/multiplicative decrease (AIMD): for each acknowledgment received, cwnd increases by MSS × MSS/cwnd (approximately one MSS per RTT), while upon loss, ssthresh is halved and slow start is reinvoked. These are formalized as: \text{Slow start: } cwnd \leftarrow cwnd + \text{MSS} \quad (\text{per ACK, effectively doubling per RTT}); \quad \text{Congestion avoidance: } cwnd \leftarrow cwnd + \frac{\text{MSS}^2}{cwnd} \quad (\text{per ACK}); \quad \text{On loss: } ssthresh \leftarrow \frac{cwnd}{2}, \; cwnd \leftarrow \text{MSS}. TCP Reno refined Tahoe by incorporating fast retransmit and fast recovery, detecting loss via three duplicate ACKs rather than timeouts, and reducing cwnd by half without resetting to slow start. This improves throughput under moderate congestion but still reacts only after loss occurs.
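The Tahoe dynamics above can be sketched as a simple per-RTT simulation. This is a simplified model in MSS units (one update per RTT, loss events supplied externally), not a faithful per-ACK implementation.

```python
# Sketch of TCP Tahoe congestion-window dynamics in MSS units, updated
# once per RTT; loss events are supplied externally (simplified model).
def tahoe_step(cwnd: float, ssthresh: float, loss: bool):
    if loss:                        # on loss: halve ssthresh, restart slow start
        return 1.0, max(cwnd / 2, 2.0)
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh   # slow start: double every RTT
    return cwnd + 1, ssthresh       # congestion avoidance: +1 MSS per RTT

cwnd, ssthresh = 1.0, 16.0
trace = []
for rtt in range(8):
    loss = (rtt == 5)               # inject a single loss at RTT 5
    cwnd, ssthresh = tahoe_step(cwnd, ssthresh, loss)
    trace.append(cwnd)
print(trace)   # [2.0, 4.0, 8.0, 16.0, 17.0, 1.0, 2.0, 4.0]
```

The trace shows the characteristic sawtooth: exponential growth to ssthresh, linear growth beyond it, and a collapse to one MSS on loss, after which slow start repeats toward the halved threshold.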
In contrast, TCP Vegas pioneered delay-based detection, monitoring RTT variations to proactively adjust cwnd before losses occur. It aims to keep the estimated number of packets buffered in the network (Diff) between α = 1 and β = 3, increasing cwnd if Diff < 1 (actual throughput below the expectation based on the base RTT) and decreasing it if Diff > 3. Vegas achieves 37-71% higher throughput than Reno in simulations while reducing losses by up to 80%. Modern algorithms like CUBIC, the default in Linux since kernel 2.6.19 (2006), extend loss-based control for high bandwidth-delay product (BDP) networks using a cubic function for cwnd growth that accelerates post-loss recovery. CUBIC's window increase follows W(t) = C(t - K)^3 + W_{\max}, where K is the time to regain the last maximum window W_{\max} and C is a constant tuned for fairness with Reno; on loss, it applies AIMD with a milder multiplicative decrease. Performance metrics such as throughput (total bits/sec transmitted) and goodput (useful application data rate, excluding overhead) highlight these algorithms' impacts: for instance, CUBIC sustains higher throughput over long-distance links than Reno, often exceeding 90% utilization in high-BDP scenarios. TCP BBR (Bottleneck Bandwidth and RTT) uses measurements of the network path's bottleneck bandwidth and minimum RTT to control the sending rate, pacing packets to match the estimated bottleneck bandwidth and keeping the data in flight close to the BDP. Unlike purely loss-based algorithms, BBR is model-based and performs well on underutilized, lossy, or variable links, achieving up to 2-6x better throughput in long fat networks compared to CUBIC in some deployments. It has been integrated into the Linux kernel since version 4.9 (2016) and is used by major services such as Google's YouTube. Bottlenecks exacerbate congestion issues, including bufferbloat, the excessive queuing in oversized router buffers that inflates latency during bursts, sometimes reaching hundreds of milliseconds even on gigabit links.
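CUBIC's growth curve W(t) = C(t - K)^3 + W_max is easy to evaluate directly. The sketch below uses the commonly cited defaults C = 0.4 and β = 0.7 (the multiplicative-decrease factor) and simplified units; K is derived from the requirement that the window, starting at β·W_max right after a loss, returns to W_max at t = K.

```python
# Sketch of CUBIC's window growth after a loss event.
# Assumptions: C = 0.4, beta = 0.7 (common defaults), time in seconds,
# window in segments; real CUBIC also enforces a Reno-friendly floor.

def cubic_window(t, w_max, c=0.4, beta=0.7):
    """Window size t seconds after the last loss that saw w_max."""
    k = ((w_max * (1 - beta)) / c) ** (1 / 3)  # time to climb back to w_max
    return c * (t - k) ** 3 + w_max

# Immediately after a loss the window starts at beta * w_max,
# plateaus near w_max around t = K, then probes aggressively beyond it.
w0 = cubic_window(0.0, 100.0)   # = 70.0, i.e. beta * w_max
```

The concave-then-convex shape is the point of the design: growth slows as the window approaches the previous maximum (cautious near the known congestion level) and speeds up again past it, which is what makes CUBIC scale on high-BDP paths where Reno's one-MSS-per-RTT increase is too slow.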
High-latency networks, such as satellite links with RTTs over 500 ms, amplify these problems by prolonging slow start and increasing the risk of timeouts, reducing throughput to below 50% of available bandwidth without optimizations. In datacenters, DCTCP addresses low-latency, high-throughput needs by using Explicit Congestion Notification (ECN) for fine-grained feedback, reducing buffer occupancy by 70-90% and halving tail latency compared to standard TCP. For cellular networks, post-2020 deployments reveal ongoing challenges such as variable latency from mobility and small buffers, where traditional loss-based variants like CUBIC suffer from underutilization, prompting adaptations toward RTT-oriented control to maintain utilization above 80% under fluctuating conditions.

Security Considerations

The transport layer is susceptible to several vulnerabilities that exploit its mechanisms for connection establishment and data delivery. Port scanning involves sending packets to various ports to identify open services, allowing attackers to map potential entry points for further exploitation. SYN flooding, a denial-of-service (DoS) attack, overwhelms a target by initiating numerous half-open TCP connections through spoofed SYN packets, exhausting server resources without completing the handshake. Sequence prediction attacks target TCP's initial sequence numbers (ISNs), enabling session hijacking by injecting malicious packets that mimic legitimate traffic when the numbers are predictable. Countermeasures at the transport layer include techniques to mitigate these threats without compromising functionality. SYN cookies address SYN flooding by encoding connection state into the SYN-ACK response using a cryptographic hash, allowing stateless handling of legitimate connections while dropping invalid ones. Transport-layer firewalls filter traffic based on ports, protocols, and connection states, blocking unauthorized scans or floods by enforcing rules such as rate limits on SYN packets. Security protocols operating over the transport layer provide confidentiality, integrity, and authentication. The Transport Layer Security (TLS) protocol, version 1.3, runs atop a reliable transport such as TCP to encrypt data streams, preventing eavesdropping and tampering through its handshake and record protection mechanisms. For datagram-based protocols like UDP, Datagram TLS (DTLS) version 1.3 (RFC 9147) adapts TLS to handle packet loss and reordering, ensuring secure communication in unreliable environments such as real-time applications. Authentication at the transport layer contrasts with lower-layer approaches like IPsec, which operates at the network layer to secure IP packets end-to-end. The TCP MD5 option, once used for authenticating BGP sessions via a keyed hash, has been deprecated due to its vulnerability to length-extension attacks and its lack of support for modern cryptographic algorithms.
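The SYN-cookie idea can be sketched as follows: the server derives the ISN of its SYN-ACK from a keyed hash over the connection four-tuple and a coarse timestamp, so no per-connection state is kept until the client's final ACK echoes a valid cookie. The field layout and 64-second time slot below are illustrative assumptions, not the exact layout used by any particular kernel.

```python
# Illustrative sketch of SYN-cookie generation and validation.
# Assumptions: a per-server secret, a 64-second time slot, and a
# 24-bit MAC + 8-bit slot packed into a 32-bit ISN; real
# implementations also encode the negotiated MSS.

import hashlib
import hmac
import time

SECRET = b"server-secret-key"  # assumed per-server secret

def make_cookie(src_ip, src_port, dst_ip, dst_port, t=None):
    t = int(time.time() // 64) if t is None else t  # coarse time slot
    msg = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{t}".encode()
    mac = hmac.new(SECRET, msg, hashlib.sha256).digest()
    # 24 bits of MAC plus 8 bits of time slot fit in a 32-bit ISN
    return (int.from_bytes(mac[:3], "big") << 8) | (t & 0xFF)

def check_cookie(cookie, src_ip, src_port, dst_ip, dst_port, t=None):
    t = int(time.time() // 64) if t is None else t
    return cookie == make_cookie(src_ip, src_port, dst_ip, dst_port, t)

c = make_cookie("198.51.100.7", 51515, "203.0.113.1", 443, t=1000)
assert check_cookie(c, "198.51.100.7", 51515, "203.0.113.1", 443, t=1000)
```

Because the cookie is recomputable from the ACK's headers alone, a flood of spoofed SYNs consumes no connection table entries; only a client that actually received the SYN-ACK can return a valid cookie.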
Emerging protocols like QUIC integrate TLS 1.3 natively within their UDP-based transport, embedding encryption from the outset to reduce attack surfaces during handshakes. Recent advancements address post-quantum threats to transport-layer security. In 2024, NIST finalized standards for post-quantum algorithms, such as ML-KEM for key encapsulation, enabling TLS 1.3 implementations to resist attacks such as those exploiting Shor's algorithm on quantum computers. These updates enhance protection against side-channel and harvest-now, decrypt-later threats in protocols like TLS and DTLS.
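In application code, requiring TLS 1.3 is a one-line policy on the connection context. The sketch below uses Python's standard-library ssl module; the hostname in the commented usage is illustrative, and no network I/O is performed by the block itself.

```python
# Sketch: pinning a client-side TLS context to version 1.3 using the
# standard-library ssl module (Python 3.7+). Certificate verification
# and hostname checking are left at their secure defaults.

import ssl

ctx = ssl.create_default_context()            # verification enabled
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse anything below 1.3
ctx.maximum_version = ssl.TLSVersion.TLSv1_3

# A connection would then be wrapped as (illustrative host):
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.version())
```

Pinning both the minimum and maximum version rules out downgrade to TLS 1.2 and earlier, which is where many legacy handshake weaknesses live.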
