
Packet segmentation

Packet segmentation is the process in computer networking whereby larger data streams from applications are divided into smaller, manageable units known as segments for transmission over packet-switched networks, ensuring compatibility with the maximum transmission unit (MTU) limits of underlying links and promoting efficient, reliable delivery. This division occurs primarily at the transport layer of the protocol stack, with protocols like the Transmission Control Protocol (TCP) responsible for creating these segments, each containing a portion of application data along with headers for sequencing, error detection, and flow control. Unlike network-layer fragmentation, which breaks packets reactively at routers when they exceed link MTUs, segmentation is proactive and sender-initiated to avoid such interruptions.

In TCP specifically, segmentation transforms a continuous byte stream into discrete segments, where each segment's size is determined by the effective maximum segment size (MSS), typically calculated as the path MTU minus the combined length of TCP and IP headers, often around 1460 bytes for the standard Ethernet MTU of 1500 bytes. The sender negotiates the MSS during connection establishment via the MSS option in SYN segments, defaulting to 536 bytes for IPv4 or 1220 bytes for IPv6 if unspecified, to prevent IP-layer fragmentation and optimize throughput. Sequence numbers are assigned based on the byte offset in the stream, allowing the receiver to reassemble data in order and request retransmissions for lost segments, thus enabling TCP's end-to-end reliability.

This mechanism is crucial for handling variable network conditions, as it balances payload efficiency against constraints such as latency and per-packet processing overhead; for instance, modern implementations support TCP Segmentation Offload (TSO), where network interface cards perform the final division of large buffers into segments, reducing CPU overhead on high-speed links. By keeping segments within path limits, segmentation minimizes the losses and delays associated with fragmentation, enhances congestion control, and supports diverse applications from web browsing to file transfers. In contrast to connectionless protocols like the User Datagram Protocol (UDP), which rely on the application or on IP-layer fragmentation and offer no built-in reassembly guarantees, TCP's segmentation underpins its role as the dominant transport protocol for reliable internet communications.

Fundamentals

Definition and Purpose

Packet segmentation is the process of dividing a large message or data stream from an application into smaller, manageable units known as segments at the transport layer of the protocol stack. This segmentation facilitates the transmission of data across networks that impose size constraints on individual units, ensuring compatibility with underlying protocols such as IP and Ethernet. In protocols such as TCP, each segment includes a header with sequence numbers to track the order and integrity of the data bytes, allowing for ordered reassembly at the destination.

The primary purpose of packet segmentation is to enable efficient and reliable data transmission over heterogeneous networks with varying link capacities and potential for errors. By breaking data into smaller segments, it accommodates maximum transmission unit (MTU) limitations on network links, preventing the need for lower-layer fragmentation that could degrade performance. Segmentation also enhances error recovery, as only the affected segments need be retransmitted rather than the entire message, which is particularly valuable in unreliable or noisy channels. Additionally, it supports flow control mechanisms, such as windowing in TCP, where the receiver advertises available buffer space to regulate the sender's transmission rate and optimize bandwidth usage.

Packet segmentation emerged in the late 1960s alongside early packet-switched networks, notably the ARPANET, which demonstrated the feasibility of dividing messages into smaller packets of variable length up to a maximum size so that network resources could be shared efficiently without dedicating a full path to each message. This approach addressed the challenges of interconnecting diverse computers over long distances, laying the foundation for modern protocols. Key benefits include improved reliability through selective retransmissions, reduced overall latency for error-prone transmissions by minimizing redundant resends, and pipelined transmission of segments across network paths, which enhances throughput in high-bandwidth environments.

Role in the OSI Model

Packet segmentation primarily occurs at Layer 4, the Transport Layer, of the OSI model, where it facilitates end-to-end data delivery between host applications across a network. This layer receives data streams or messages from the upper layers (the Session, Presentation, and Application layers) and divides them into manageable segments to ensure efficient transmission while maintaining the integrity of the original data. Unlike lower-layer processes, segmentation at this level is proactive and host-initiated, preparing data for delivery without relying on intermediate network adjustments.

The Transport Layer interacts closely with adjacent layers to integrate segmentation into the overall communication stack. It accepts data from higher layers for segmentation and then passes the resulting segments, encapsulated with transport headers, to the Network Layer (Layer 3) for further packetization and routing. This handover contrasts sharply with IP fragmentation at Layer 3, which occurs reactively on already-formed packets when they exceed network path constraints, such as the maximum transmission unit (MTU), potentially leading to reassembly burdens on endpoints. By segmenting data upstream, the Transport Layer avoids such mid-network disruptions and aligns transmission with end-to-end reliability needs.

Key responsibilities of the Transport Layer in segmentation include ensuring reliable delivery through mechanisms like sequencing and acknowledgments (as in connection-oriented protocols), implementing port addressing for multiplexing multiple applications over a single network connection, and adapting segment sizes to lower-layer constraints without modifying the semantic content of the upper-layer data. Ports, numbered from 0 to 65,535, enable demultiplexing at the receiver, directing segments to the appropriate processes. These functions collectively provide a logical boundary between application-specific data handling and the unreliable, best-effort delivery of the underlying layers.

In the practical TCP/IP model, which underpins much of modern networking, the transport layer's role in segmentation remains aligned with the OSI Transport Layer but integrates more fluidly with the internet layer (equivalent to OSI's Network Layer) for implementations such as TCP over IP. This adaptation reflects the TCP/IP suite's origins as a streamlined alternative to the full seven-layer OSI framework, prioritizing interoperability over strict layering while preserving core segmentation principles for end-to-end control.

Segmentation Process

Steps in Segmenting Data

In the transport layer, packet segmentation begins with the intake of data from the application layer, where stream-oriented protocols like TCP receive a continuous byte stream without message boundaries, preparing it for transmission over the network. This step ensures that upper-layer data, which may exceed network-imposed size limits, is divided into manageable units called segments to facilitate reliable delivery. Segmentation is characteristic of stream-oriented protocols like TCP; message-oriented protocols like UDP transmit data as single datagrams without transport-layer segmentation.

The next step involves determining the appropriate segment size by evaluating the total data length against protocol-specific constraints, such as the maximum segment size (MSS), which defines the largest allowable payload per segment excluding headers. In TCP, for instance, the MSS is negotiated during connection establishment and typically set to 1460 bytes over Ethernet networks to align with the 1500-byte MTU after accounting for standard IP and TCP headers. If the incoming data exceeds this limit, segmentation proceeds to avoid exceeding network capabilities.

The division step then breaks the data into fixed or variable-sized chunks, each not exceeding the MSS, using techniques such as the sender's silly window syndrome avoidance to optimize transmission efficiency by preferring full-sized segments. TCP segments the byte stream without regard to application-level boundaries. Each resulting segment is assigned a sequence number, a 32-bit value starting from an initial send sequence number (ISS), to enable ordered reassembly at the receiver, with the number advancing by the byte count of the segment's data. Checksum padding, if needed to align the segment to 16-bit words, consists of non-transmitted zeros and is minimized to reduce overhead.
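To make the division step concrete, the following sketch (illustrative Python, not drawn from any particular TCP implementation; the `Segment` type and `segment_stream` helper are invented for this example) splits an application byte stream into MSS-sized chunks and assigns each chunk a sequence number relative to an initial send sequence number.

```python
# Illustrative sketch (not a real TCP stack): split an application byte
# stream into MSS-sized segments, each tagged with its sequence number.

from dataclasses import dataclass
from typing import List

@dataclass
class Segment:
    seq: int        # sequence number of the first payload byte
    payload: bytes

def segment_stream(data: bytes, iss: int, mss: int = 1460) -> List[Segment]:
    """Divide a byte stream into segments of at most `mss` payload bytes.

    `iss` is the initial send sequence number; each segment's sequence
    number advances by the byte count of the preceding segments' data.
    """
    segments = []
    offset = 0
    while offset < len(data):
        chunk = data[offset:offset + mss]
        segments.append(Segment(seq=iss + offset, payload=chunk))
        offset += len(chunk)
    return segments

# Example: a 4000-byte stream with an Ethernet-sized MSS yields three
# segments of 1460, 1460, and 1080 bytes, with relative sequence numbers
# 0, 1460, and 2920.
segs = segment_stream(b"\x00" * 4000, iss=0)
print([(s.seq, len(s.payload)) for s in segs])
```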

Handling Maximum Transmission Unit (MTU)

The maximum transmission unit (MTU) defines the largest packet size, in bytes, that can traverse a specific network link without fragmentation, encompassing both the header and payload. In standard Ethernet implementations, this value is 1500 bytes, accommodating typical traffic while balancing overhead and efficiency. Path MTU Discovery (PMTUD) enables endpoints to identify the minimum MTU across an entire network path by leveraging ICMP "Destination Unreachable" messages with a "Fragmentation Needed" code, which report the constraining link's MTU. Upon receiving such feedback, transport protocols like TCP dynamically adjust the maximum segment size (MSS), the largest payload per segment, to fit within the path MTU, using the formula

\text{MSS} = \text{MTU} - 20\ \text{(IP header)} - 20\ \text{(TCP header)}

This ensures segments remain intact without lower-layer fragmentation. When application data exceeds the effective MTU, protocols segment it into multiple units at the transport layer, preemptively dividing payloads to bypass IP fragmentation and reduce reassembly overhead at the receiver.

MTU black holes, arising from firewalls or filters blocking ICMP feedback, can disrupt this process; robust implementations counter them by progressively falling back to conservative sizes, such as halving the MSS or defaulting to 536 bytes, until connectivity resumes. In IPv6 environments, jumbograms extend MTU capabilities via a hop-by-hop option, supporting payloads up to 4,294,967,295 bytes (approximately 4 GB) on links with sufficiently large MTUs, though this requires end-to-end agreement and is rare outside specialized high-speed networks. Heterogeneous networks introduce challenges from varying MTUs, as encapsulations like VPN tunnels add overhead (e.g., 40-60 bytes for IPsec), often reducing the effective MTU to 1400 bytes or less, necessitating proactive discovery and adjustment to prevent persistent fragmentation or packet drops.
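As a rough illustration of the MSS arithmetic and the black-hole fallback described above, the following Python sketch (the function names and the halving policy are assumptions for this example, not a standardized algorithm) derives the MSS from a path MTU and steps it down conservatively when PMTUD feedback is missing.

```python
# Illustrative helpers (assumed names): derive the TCP MSS from a path MTU
# and fall back conservatively when PMTUD feedback (ICMP) is blocked.

IPV4_HEADER = 20    # bytes, no options
TCP_HEADER = 20     # bytes, no options
DEFAULT_MSS = 536   # conservative IPv4 fallback

def mss_from_mtu(path_mtu: int) -> int:
    """MSS = MTU - IP header - TCP header (base headers, no options)."""
    return path_mtu - IPV4_HEADER - TCP_HEADER

def next_mss_on_black_hole(current_mss: int) -> int:
    """One possible fallback policy: halve the MSS, never going below
    the 536-byte default, until connectivity resumes."""
    return max(current_mss // 2, DEFAULT_MSS)

print(mss_from_mtu(1500))             # 1460 for standard Ethernet
print(next_mss_on_black_hole(1460))   # 730
print(next_mss_on_black_hole(600))    # 536
```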

Reassembly and Error Management

Reassembly Procedures

Reassembly at the receiving end involves reconstructing the original data stream from segments that may arrive out of order, possibly duplicated, or incomplete due to network path variability. The process ensures reliable delivery by managing buffering, ordering, and integration of segment payloads while handling potential losses through integration with retransmission mechanisms.

Receivers allocate dedicated buffers to store incoming segments until sufficient data arrives for reassembly. These buffers, with capacity typically much larger than the maximum segment size (MSS) so that multiple segments can be held for out-of-order arrivals, are managed according to the receive buffer size (RCV.BUFF) set by the implementation, often in the range of tens or hundreds of kilobytes, allowing efficient queuing without immediate delivery to the application. The MSS, negotiated during connection setup and defaulting to 536 bytes for IPv4 or 1220 bytes for IPv6 if unspecified, determines the size of individual incoming segments. In TCP, the receive buffer (RCV.BUFF) is partitioned into areas for unconsumed data, the advertised receive window, and available space.

To manage ordering, receivers rely on sequence numbers embedded in each segment header to arrange payloads correctly and detect anomalies. Out-of-order segments are buffered and sorted based on these numbers relative to the next expected sequence number (e.g., RCV.NXT in TCP), ensuring they fall within the receive window (RCV.NXT to RCV.NXT + RCV.WND - 1). Duplicates are identified and discarded by comparing incoming sequence numbers against acknowledged ranges, avoiding redundant processing. Sequence numbers, initially assigned during the segmentation process at the sender, thus play a critical role in enabling accurate reordering at the receiver.

Once segments fill gaps in the sequence space without overlaps or missing parts, payloads are concatenated in numerical order to form the continuous data stream, with all headers stripped to deliver pure application data. Overlapping segments, if any, are trimmed to include only novel bytes, advancing the expected-sequence pointer (e.g., RCV.NXT) upon successful acceptance. Delivery to the upper layer occurs when buffers reach capacity, a push flag is set, or the data is contiguous up to the current expectation.

To avoid resource exhaustion from stalled reassembly, timeout mechanisms discard incomplete segment collections after a defined interval. In TCP, this aligns with the user timeout option, which defaults to 5 minutes and aborts the connection if data remains undelivered, freeing the buffers held by incomplete assemblies. Retransmission timeouts further support this by prompting sender recovery for gaps, but persistent incompleteness triggers buffer release.

For instance, consider original 4000-byte data segmented into four 1000-byte units with sequence numbers 0, 1000, 2000, and 3000. If the arrival order is 1000, 0, 3000, 2000, the receiver buffers all four, sorts them by sequence number, removes the headers from each segment, and concatenates the payloads sequentially to yield the intact 4000 bytes for application use.
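The buffering and ordering logic can be illustrated with a simplified Python sketch; the `Reassembler` class below is an invented, RFC-incomplete model (no receive-window enforcement or timeouts) that buffers out-of-order segments by sequence number, trims duplicates and overlaps, and releases bytes only when they become contiguous with the next expected sequence number.

```python
# Simplified reassembly sketch (not RFC-complete): buffer out-of-order
# segments by sequence number, drop duplicates, and deliver bytes only
# when they are contiguous with the next expected sequence number.

class Reassembler:
    def __init__(self, initial_seq: int):
        self.rcv_nxt = initial_seq   # next byte expected in order
        self.out_of_order = {}       # seq -> payload awaiting earlier gaps

    def receive(self, seq: int, payload: bytes) -> bytes:
        """Accept one segment and return any newly contiguous data."""
        if seq + len(payload) <= self.rcv_nxt:
            return b""                            # pure duplicate: discard
        if seq > self.rcv_nxt:
            self.out_of_order[seq] = payload      # buffer until the gap fills
            return b""
        # Trim any overlap so only novel bytes are kept.
        delivered = payload[self.rcv_nxt - seq:]
        self.rcv_nxt += len(delivered)
        # Pull in any buffered segments that are now contiguous.
        while self.rcv_nxt in self.out_of_order:
            chunk = self.out_of_order.pop(self.rcv_nxt)
            delivered += chunk
            self.rcv_nxt += len(chunk)
        return delivered

# The example from the text: 1000-byte segments arriving as 1000, 0, 3000, 2000.
r = Reassembler(initial_seq=0)
data = b""
for seq in (1000, 0, 3000, 2000):
    data += r.receive(seq, bytes(1000))
print(len(data))  # 4000 bytes, delivered in order
```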

Error Detection and Recovery

In packet segmentation, error detection primarily relies on checksums embedded in segment headers to identify corruption during transmission. For instance, in the Transmission Control Protocol (TCP), a 16-bit one's complement checksum is computed over a pseudo-header (containing the source and destination IP addresses, the protocol number, and the TCP segment length), the TCP header, all options, and the data payload, padded if necessary to an even number of 16-bit words. This detects a significant portion of bit errors, though it is not foolproof against all multi-bit errors or certain burst patterns.

Recovery from detected errors employs automatic repeat request (ARQ) protocols, where the receiver discards corrupted segments and notifies the sender via acknowledgments, prompting retransmission of only the affected segments. TCP implements a variant akin to Go-Back-N using cumulative acknowledgments (ACKs), retransmitting from the first unacknowledged segment upon timeout or duplicate ACKs, while extensions like Selective Acknowledgment (SACK) enable selective-repeat behavior by allowing explicit indication of received out-of-order segments, thus retransmitting only the lost or corrupted ones. Some protocols incorporate negative acknowledgments (NAKs) for more direct recovery, where the receiver explicitly signals missing or erroneous segments, reducing unnecessary retransmissions compared to positive ACK-only schemes.

Segmentation enhances error recovery by isolating issues to individual segments, preventing a single corruption from invalidating the entire message and enabling granular retransmissions without impacting unaffected parts. In real-time applications, such as voice or video over RTP, forward error correction (FEC) serves as an alternative to ARQ, where redundant parity segments are transmitted alongside data to allow receiver-side reconstruction of lost or corrupted packets without retransmission delays. These mechanisms introduce recovery latency due to detection, signaling, and retransmission overhead, but substantially improve effective throughput on error-prone links by ensuring reliable delivery with minimal redundant data. For example, selective ARQ variants such as Selective Repeat can significantly improve recovery in high-loss scenarios compared to basic Go-Back-N, balancing latency and bandwidth efficiency.
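The checksum computation itself can be sketched as follows; this is a generic implementation of the Internet checksum algorithm (16-bit one's complement sum with end-around carry) and omits the pseudo-header construction, which a real TCP or UDP implementation would prepend to the summed bytes.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum over `data` (a real sender would
    concatenate pseudo-header, header with checksum zeroed, and payload)."""
    if len(data) % 2:
        data += b"\x00"                           # pad to an even octet count
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # big-endian 16-bit words
        total = (total & 0xFFFF) + (total >> 16)  # fold in the end-around carry
    return (~total) & 0xFFFF                      # one's complement of the sum

# Classic IPv4 header example (checksum field zeroed): the same algorithm
# applies, and the expected result here is 0xb1e6. A receiver recomputing
# the sum over the bytes including the transmitted checksum obtains 0.
example = bytes.fromhex("4500003c1c46400040060000ac100a63ac100a0c")
print(hex(internet_checksum(example)))
```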

Protocols and Applications

Implementation in TCP

In the Transmission Control Protocol (TCP), segmentation involves dividing the application-layer byte stream into discrete segments to facilitate reliable transmission over the Internet Protocol (IP). Each TCP segment consists of a header of 20 to 60 bytes, depending on the inclusion of options, followed by the data payload. The header includes key fields such as 16-bit source and destination ports to identify the communicating applications, 32-bit sequence and acknowledgment numbers to track byte order and confirm receipt, control flags including SYN for connection synchronization, ACK for acknowledgment, and FIN for graceful closure, a 16-bit window size indicating the receiver's buffer capacity, and a 16-bit checksum for integrity verification.

To optimize segment size and avoid IP-layer fragmentation, TCP endpoints negotiate the maximum segment size (MSS) during the three-way handshake that establishes the connection. The MSS option (TCP option kind 2, length 4 bytes) is included in the SYN segments, where each side announces the maximum data octets it can receive in a segment, calculated as the path MTU minus the IP and TCP header sizes (typically 40 bytes minimum). If no MSS option is exchanged, a default of 536 bytes applies for IPv4 or 1220 bytes for IPv6; the effective MSS is the minimum of the two negotiated values, adjusted for any additional options. This negotiation ensures segments fit within the underlying IP packet limits, promoting efficiency.

TCP supports several options and extensions that enhance segmentation and recovery. The Timestamps option (defined in RFC 7323) adds 10-byte timestamps to segments for accurate round-trip time measurement and protection against wrapped sequence numbers (PAWS), improving performance on high-latency paths. Selective Acknowledgments (SACK), specified in RFC 2018, allow receivers to acknowledge non-contiguous blocks of data via additional options (up to four 8-byte blocks), enabling senders to retransmit only lost segments rather than all unacknowledged data, thus reducing recovery time. Additionally, TCP enforces a Maximum Segment Lifetime (MSL) of 2 minutes, beyond which segments are considered expired to prevent delayed duplicates from corrupting connections.

TCP segments are encapsulated as payloads within IP packets, with specifics varying between IPv4 and IPv6. In IPv4, the IP header includes a Don't Fragment (DF) bit, which TCP implementations typically set to 1 during Path MTU Discovery to probe for the maximum transmittable unit and avoid intermediate router fragmentation, triggering ICMP "Fragmentation Needed" messages if exceeded. IPv6 lacks a DF bit; fragmentation is prohibited at routers and handled only by the source host using a Fragment Header, so TCP over IPv6 similarly relies on MSS negotiation to align segments with the path MTU. This integration ensures TCP segmentation remains compatible across IP versions while minimizing fragmentation risks.

For illustration, consider a segment carrying 1000 bytes of data: the sequence number in the header starts at an initial value (e.g., the Initial Sequence Number plus one once the SYN has been consumed) and increments by 1000 for the next segment, accounting only for data octets (SYN and FIN flags each consume one sequence number unit if present). The checksum is computed as the one's complement of the one's complement sum of all 16-bit words from the pseudo-header (including the source and destination addresses and the protocol number), the TCP header (with the checksum field zeroed), and the data (padded to an even length if necessary), ensuring end-to-end error detection:

\text{Checksum} = \overline{\,w_1 +' w_2 +' \cdots +' w_n\,}

where the w_i are the 16-bit words, +' denotes one's complement addition (16-bit addition with end-around carry), and \overline{\cdot} denotes the bitwise one's complement.
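For a concrete view of the header fields and the MSS option discussed above, the sketch below packs a minimal SYN header in Python with `struct`; the field values (ports, ISS, window) are arbitrary examples, the helper name is invented, and the checksum is left at zero because computing it requires the IP pseudo-header.

```python
# Hypothetical illustration of how a SYN segment's header bytes could be
# laid out, including the 4-byte MSS option (kind 2, length 4). Values are
# examples only, not taken from any particular TCP implementation.

import struct

def build_syn_header(src_port: int, dst_port: int, iss: int, mss: int) -> bytes:
    data_offset = 6            # 5 words of base header + 1 word of options
    flags = 0x02               # SYN
    offset_flags = (data_offset << 12) | flags
    window = 65535
    checksum = 0               # filled in later, over the pseudo-header
    urgent = 0
    header = struct.pack("!HHIIHHHH",
                         src_port, dst_port,
                         iss,                  # sequence number (ISS)
                         0,                    # acknowledgment number (unused in SYN)
                         offset_flags, window, checksum, urgent)
    mss_option = struct.pack("!BBH", 2, 4, mss)   # kind=2, length=4, MSS value
    return header + mss_option

syn = build_syn_header(src_port=49152, dst_port=80, iss=1_000_000, mss=1460)
print(len(syn))   # 24 bytes: 20-byte base header plus the MSS option
```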

Use in Other Standards and Protocols

Packet segmentation in the User Datagram Protocol (UDP) operates in a connectionless manner, where data is transmitted as individual datagrams without any breakdown into ordered segments at the transport layer. UDP employs a minimal 8-byte header consisting of source and destination ports (16 bits each), a length field (16 bits), and a checksum (16 bits), which provides no mechanisms for segmentation, reassembly, or ordering; instead, any necessary handling of larger messages is delegated to the application layer. This approach contrasts with more robust protocols by prioritizing simplicity and low overhead, though it relies on underlying IP fragmentation for datagrams exceeding the path MTU.

The ITU-T G.hn standard, designed for high-speed home networking over powerline, coaxial, and phoneline media, incorporates a segmentation and reassembly (SAR) sublayer to manage data transmission across noisy environments. It segments data into forward error correction (FEC) blocks typically sized at 120 bytes or 540 bytes, enabling efficient adaptation to varying channel conditions while supporting data rates up to 2 Gbit/s. Reliability is enhanced through automatic repeat request (ARQ) mechanisms at the MAC layer, which retransmit corrupted segments to ensure error-free delivery over unreliable wired links.

The Stream Control Transmission Protocol (SCTP) extends segmentation capabilities for multi-streaming applications, such as telephony signaling, by dividing user messages into chunks that can be transmitted independently across multiple streams within a single association. SCTP supports partial reliability options (PR-SCTP), allowing selective retransmission of non-critical segments to balance reliability and timeliness. Similarly, the QUIC protocol, underlying HTTP/3, performs segmentation by breaking data into frames encapsulated within UDP packets, integrating congestion control and loss recovery directly into the transport protocol for faster web performance. QUIC's design enables 0-RTT handshakes and multiplexing, reducing connection-setup latency compared to traditional TCP-based segmentation.

In wireless standards like IEEE 802.11 (Wi-Fi), frame aggregation techniques such as A-MPDU and A-MSDU combine multiple smaller units to counteract segmentation overhead and boost throughput, but the initial packet segmentation still occurs at the transport layer before MAC-layer processing. Aggregation works in the opposite direction, packing up to 64 MPDUs into a single physical-layer protocol data unit (PPDU), improving efficiency in high-density environments while presupposing transport-level division of larger payloads. Emerging applications in 5G and 6G networks leverage packet segmentation for low-latency IoT scenarios, where the Radio Link Control (RLC) layer in New Radio (NR) performs segmentation and reassembly to support ultra-reliable low-latency communication (URLLC) with end-to-end latencies under 1 ms.
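Because UDP leaves any chunking of large messages to the application, an application-level sender has to split and frame its own data; the following Python sketch (the chunk size, the 4-byte index/count framing, and the port number are all invented for illustration) shows one minimal way to do that over a UDP socket.

```python
# Sketch of application-level chunking over UDP: UDP itself performs no
# segmentation, so the sender splits the message and adds a tiny header
# (chunk index and total count) so the receiver can reorder or detect loss.

import socket

CHUNK = 1400   # kept below a typical 1500-byte MTU minus IP/UDP headers

def send_in_chunks(sock: socket.socket, addr, message: bytes) -> None:
    total = (len(message) + CHUNK - 1) // CHUNK
    for i in range(total):
        piece = message[i * CHUNK:(i + 1) * CHUNK]
        framing = i.to_bytes(2, "big") + total.to_bytes(2, "big")
        sock.sendto(framing + piece, addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_in_chunks(sock, ("127.0.0.1", 9999), b"x" * 5000)   # sends 4 datagrams
```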

Comparisons and Distinctions

Versus IP Fragmentation

Packet segmentation, typically performed at the transport layer by protocols like TCP, differs fundamentally from IP fragmentation, which operates at the network layer. IP fragmentation occurs when an IP datagram exceeds the maximum transmission unit (MTU) of a link, prompting routers or the source host to divide it into smaller fragments. Each fragment receives its own IP header, including a 16-bit identification field to group related fragments, a 13-bit fragment offset indicating position in the original datagram (in units of 8 octets), and a more fragments (MF) flag set to 1 for non-final fragments and 0 for the last one. Reassembly of these fragments happens at the destination host, using the identification, source and destination addresses, and protocol fields to match and reconstruct the datagram.

In contrast, TCP segmentation is an end-to-end process controlled by the sending and receiving hosts, proactively dividing application data into segments sized to fit the path MTU, often using the maximum segment size (MSS) option to avoid fragmentation altogether. This host-managed approach ensures reliability through sequence numbers and acknowledgments, whereas IP fragmentation is reactive and hop-by-hop, with each router potentially fragmenting further if needed, leading to inefficiencies such as an additional 20-byte IP header per fragment. Segmentation thus maintains better control and performance, as TCP can adjust segment sizes dynamically based on network feedback, while fragmentation lacks such end-to-end coordination.

A major drawback of IP fragmentation is its increased vulnerability to errors and losses; if any single fragment is dropped due to congestion or corruption, the entire original datagram must be discarded and retransmitted, amplifying the probability of failure compared to independently delivered segments. This fragility is exacerbated by security risks, such as overlapping fragment attacks, and operational issues like black-holing when ICMP messages are filtered, prompting modern guidance to avoid fragmentation through techniques like Path MTU Discovery (PMTUD). RFC 8900 explicitly deems IP fragmentation fragile and recommends transport-layer mechanisms, such as TCP's MSS negotiation, to prevent it.

Historically, early designs in the 1970s and 1980s relied heavily on fragmentation to handle diverse network MTUs, as outlined in RFC 791, but performance analyses and evolving standards shifted preference toward segmentation by the mid-1980s. RFC 879 clarified the use of the TCP MSS option in 1983 to limit segment sizes proactively, reducing reliance on fragmentation, while subsequent developments like PMTUD in RFC 1191 (1990) further encouraged end-to-end sizing to optimize throughput and reliability. This transition reflected growing recognition of fragmentation's overhead and error proneness in internet-scale deployments.

For instance, if a transport implementation sends a 4000-byte datagram over a path with a 1500-byte MTU (common on Ethernet), the IP layer would fragment it into three fragments, each carrying a duplicated header, and the whole datagram is lost if any one of them fails to arrive; a properly configured TCP connection instead uses MSS advertisement during connection setup to cap segments at around 1460 bytes (the MTU minus 40 bytes for headers), ensuring no fragmentation occurs.
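The closing example can be worked through numerically: the sketch below (an arithmetic illustration only, with an invented `ipv4_fragments` helper) computes how a 4000-byte IPv4 datagram, i.e. a 20-byte header plus 3980 bytes of transport payload, would be fragmented over a 1500-byte-MTU link, yielding the three fragments that TCP's MSS advertisement is designed to avoid.

```python
# Worked illustration of the 4000-byte example above: how IPv4 fragmentation
# would split the datagram over a 1500-byte-MTU link. Arithmetic only, not a
# packet generator.

def ipv4_fragments(payload_len: int, mtu: int, ip_header: int = 20):
    # Non-final fragment data must be a multiple of 8 octets.
    max_data = ((mtu - ip_header) // 8) * 8
    fragments = []
    offset = 0
    while offset < payload_len:
        data = min(max_data, payload_len - offset)
        more_fragments = int(offset + data < payload_len)
        fragments.append({"offset_units": offset // 8,   # 13-bit offset, 8-octet units
                          "data_bytes": data,
                          "MF": more_fragments})
        offset += data
    return fragments

# A 4000-byte datagram = 20-byte header + 3980 bytes of transport payload.
for frag in ipv4_fragments(payload_len=3980, mtu=1500):
    print(frag)
# Three fragments of 1480, 1480, and 1020 payload bytes, each with its own
# 20-byte IP header; TCP with MSS 1460 would instead send intact segments.
```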

Versus Network Segmentation

Network segmentation is an architectural practice that divides a physical or virtual computer network into smaller, isolated subnetworks or segments to enhance security, optimize performance, and ensure compliance with regulatory requirements. This approach typically employs technologies such as Virtual Local Area Networks (VLANs), firewalls, access control lists (ACLs), or software-defined networking (SDN) to create boundaries that restrict traffic flow between segments. By isolating sensitive resources, such as payment systems or medical devices, network segmentation reduces the attack surface and limits the potential impact of unauthorized access or malware propagation.

In contrast, packet segmentation operates at the protocol level as a data-handling technique, where transport-layer protocols like TCP divide incoming data streams from upper layers into smaller, manageable segments to facilitate reliable transmission over heterogeneous networks. The primary goal of packet segmentation is to improve transmission efficiency by adapting segment sizes to network conditions, such as the maximum segment size (MSS) negotiated during connection establishment, thereby minimizing overhead and supporting flow control mechanisms. While network segmentation focuses on infrastructure design to control overall traffic flow and contain breaches, aligning with zero-trust architectures that enforce granular access policies, the two differ fundamentally in scope: packet segmentation addresses data unit handling for transit reliability, whereas network segmentation manages topology to mitigate lateral movement in security incidents.

The shared terminology of "segmentation" can lead to misconceptions, as packet segmentation pertains specifically to breaking down data payloads at the transport layer, independent of network topology, while network segmentation involves partitioning the broader infrastructure without altering data packet contents. These processes do not directly interact, though network segmentation may indirectly affect packet handling by imposing varying MTU limits across subnetworks, potentially influencing how segments are sized or fragmented. Regarding benefits, packet segmentation enhances data reliability during transit by enabling selective retransmission of lost segments, whereas network segmentation bolsters isolation and breach containment, a priority amplified after high-profile incidents like the 2017 Equifax breach, in which inadequate segmentation allowed attackers to pivot across databases and exfiltrate data on 145.5 million individuals. Standards for network segmentation include IEEE 802.1Q, which defines VLAN bridging and tagging for logical isolation, while packet segmentation is governed by IETF specifications such as RFC 793 (and its successor RFC 9293) for TCP's segment formation and delivery.

References

  2. "17 TCP Transport Basics", An Introduction to Computer Networks.
  8. "A Brief History of the Internet", Internet Society.
  11. "Transport Layer", Welcome to Adobe GoLive 6.
  12. "9 IP version 4", An Introduction to Computer Networks.
  14. "7 IP version 4", An Introduction to Computer Networks.
  19. RFC 1191, "Path MTU Discovery", IETF Datatracker.
  20. RFC 879, "The TCP Maximum Segment Size and Related Topics", IETF Datatracker.
  21. RFC 2923, "TCP Problems with Path MTU Discovery", IETF Datatracker.
  22. RFC 2675, "IPv6 Jumbograms", IETF Datatracker.
  23. RFC 4459, "MTU and Fragmentation Issues with In-the-Network ...", IETF Datatracker.
  25. RFC 2018, "TCP Selective Acknowledgment Options", IETF Datatracker.
  26. RFC 6363, "Forward Error Correction (FEC) Framework", IETF Datatracker.
  27. RFC 5109, "RTP Payload Format for Generic Forward Error Correction", IETF Datatracker.
  28. RFC 9293, "Transmission Control Protocol (TCP)", IETF Datatracker.
  29. RFC 7323, "TCP Extensions for High Performance", IETF Datatracker.
  30. RFC 768, "User Datagram Protocol", IETF Datatracker.
  31. RFC 9114, "HTTP/3", IETF Datatracker.
  32. "Ultra-Reliable Low-Latency Communication" (PDF), 5G Americas.
  33. RFC 791, "Internet Protocol", IETF.
  34. RFC 8900, "IP Fragmentation Considered Fragile", IETF.
  35. RFC 879, "The TCP Maximum Segment Size and Related Topics", IETF.
  36. "What Is Network Segmentation?", Cisco.
  37. "What Is Network Segmentation?", Palo Alto Networks.
  38. RFC 793, "Transmission Control Protocol", IETF Datatracker.
  40. "Actions Taken by Equifax and Federal Agencies in Response to the ..." (PDF), Aug 30, 2018.
  41. IEEE 802.1Q-2018, standard for local and metropolitan area networks (bridged networks), Jul 6, 2018.