
Datagram

A datagram is a self-contained unit of data in packet-switched networks, consisting of a header with source and destination addresses and a payload containing the actual message, transmitted independently without guaranteed delivery, order, or error correction. This connectionless approach allows each datagram to be routed separately, potentially via different paths, making it suitable for applications prioritizing speed over reliability, such as media streaming or VoIP. The term "datagram" was coined in the early 1970s by Norwegian engineer Halvor Bothner-By, combining "data" and "telegram". French computer scientist Louis Pouzin adopted the concept for his CYCLADES project starting in 1971, the first implementation of a pure datagram model. In datagram networks, switches forward these units using destination-based routing tables, enhancing resilience against failures but complicating congestion control and quality of service. Prominent examples include Internet Protocol (IP) datagrams, which form the foundation of IP networks by encapsulating higher-layer payloads, and the User Datagram Protocol (UDP), which transports datagrams for low-latency applications like DNS queries and VoIP. Unlike connection-oriented protocols such as TCP, datagrams do not establish sessions or ensure sequencing, trading reliability for efficiency in bandwidth-constrained environments. If a datagram exceeds the maximum transmission unit (MTU) of a network link, it may be fragmented into smaller pieces, which are reassembled at the destination using header fields like identification and fragment offset. Pouzin's innovation influenced the design of modern protocols, such as TCP/IP, which were later implemented on the ARPANET, emphasizing end-to-end responsibility for reliability over network-level guarantees.

Fundamentals

Definition

A datagram is a self-contained, independent unit of data that is transmitted over a network in a connectionless manner, carrying its own source and destination addressing as well as information sufficient to enable its delivery without prior setup of a communication path. This structure allows each datagram to be treated as a standalone entity, independent of any other data units, facilitating flexible routing across diverse topologies. At its core, datagram transmission operates on a best-effort principle, providing no guarantees of reliability, order of arrival, or flow control; individual datagrams may be lost, duplicated, or received out of sequence, with error detection and recovery left to higher-layer protocols if needed. The Internet Protocol (IP), for instance, exemplifies this by routing each datagram independently based on its header information, without establishing or maintaining connections between sender and receiver. In packet-switched networks, datagrams distinguish themselves from other data units by their emphasis on autonomy and independence in routing, enabling efficient handling in environments where paths may vary dynamically and resources are shared among multiple communications. This approach contrasts with more rigid data transmission models, prioritizing simplicity and scalability over assured delivery.
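The connectionless exchange described above can be sketched with UDP sockets, the standard operating-system interface to datagram service. This is a minimal illustration, not a production pattern; the loopback address, message bytes, and buffer size are chosen arbitrarily:

```python
import socket

# Bind a receiving datagram socket; the OS picks a free ephemeral port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(2)            # avoid blocking forever if loopback fails
addr = receiver.getsockname()

# Send a single datagram: no handshake, no session state on either side.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello, datagram", addr)

# Each recvfrom() call yields exactly one datagram with its source address.
data, source = receiver.recvfrom(2048)
print(data)                       # b'hello, datagram'
sender.close()
receiver.close()
```

Note that nothing in this exchange acknowledges receipt; if the datagram were dropped, the sender would never learn of it.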

Key Characteristics

Datagrams operate on a best-effort delivery model, providing no guarantees of successful transmission, ordering, or duplication avoidance at the network layer. This unreliability means that datagrams may be lost, corrupted, or delivered out of sequence without acknowledgments, retransmissions, or error correction mechanisms inherent to the datagram service itself, placing the burden of reliability on higher-layer protocols if needed. A core trait of datagrams is their independence, where each one is routed autonomously through the network without maintaining session state or a connection between sender and receiver. This connectionless approach allows datagrams from the same source to destination to follow different paths, enhancing resilience and scalability in large, dynamic networks by avoiding the need for end-to-end setup or teardown. The simplicity of datagrams stems from their minimal requirements, imposing little to no state on intermediate routers or endpoints, which results in low processing overhead and reduced latency compared to connection-oriented alternatives. However, this design shifts responsibilities such as error recovery and flow control to the endpoints, making datagrams suitable for scenarios prioritizing speed over guaranteed delivery. Datagrams are subject to size constraints defined by the underlying network's maximum transmission unit (MTU): IPv4 requires hosts to accept datagrams of at least 576 bytes and permits a maximum of 65,535 bytes. If a datagram exceeds the MTU along its path, it may be fragmented into smaller pieces for transmission, with reassembly performed only at the final destination, potentially introducing additional delays or loss if fragments fail to arrive.
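The fragmentation arithmetic sketched above can be made concrete. The function below is illustrative (the name and fixed 20-byte header are our assumptions, and a real stack must also copy certain header fields into every fragment), but it follows the RFC 791 rule that every fragment except the last carries a payload that is a multiple of 8 bytes:

```python
def ipv4_fragments(total_len: int, mtu: int, ihl_bytes: int = 20):
    """Return (payload_bytes, more_fragments, offset_in_8_byte_units) per
    fragment of an IPv4 datagram of total_len bytes crossing a link of the
    given MTU.  Sketch only; assumes a fixed 20-byte header per fragment."""
    payload = total_len - ihl_bytes              # bytes of data to move
    per_frag = (mtu - ihl_bytes) // 8 * 8        # largest 8-byte multiple
    frags, sent = [], 0
    while sent < payload:
        chunk = min(per_frag, payload - sent)
        more = sent + chunk < payload            # MF bit: more to come?
        frags.append((chunk, more, sent // 8))   # offset counted in 8B units
        sent += chunk
    return frags

# A 4,000-byte datagram over a 1,500-byte-MTU link becomes three fragments:
print(ipv4_fragments(4000, 1500))
# [(1480, True, 0), (1480, True, 185), (1020, False, 370)]
```

The offsets (0, 185, 370) are in 8-byte units, matching the 13-bit Fragment Offset field's encoding.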

Historical Development

Origins in Packet Switching

The concept of the datagram emerged as a foundational element within the broader development of packet-switched networks during the mid-1960s, driven by the need for robust, distributed communication systems capable of surviving disruptions such as nuclear attacks. Paul Baran, working at the RAND Corporation, laid early groundwork in his 1964 series of memoranda titled On Distributed Communications. In these documents, Baran advocated for a decentralized architecture where large messages would be broken into smaller "message blocks" that could be transmitted independently across multiple redundant paths, reassembled only at the destination. This approach prioritized survivability and efficiency over traditional centralized or circuit-switched systems, marking message blocks as direct precursors to datagrams by emphasizing self-contained units of data without reliance on fixed connections. Independently, in 1965, Donald Davies at the UK's National Physical Laboratory (NPL) conceived the core principles of packet switching while exploring high-speed data communication for computer networks. Davies envisioned dividing messages into fixed-size "packets" for store-and-forward transmission, allowing flexible routing through intermediate nodes without dedicating end-to-end paths. By 1967, in his seminal paper presented at the ACM Symposium on Operating System Principles, Davies formalized the idea of these independent data units as "packets," defining them as autonomous messages containing source and destination addresses, routed on a best-effort basis without prior setup or guaranteed delivery. This terminology and model highlighted the role of such units in enabling simple, scalable networks where each packet could take varying paths, contrasting with more rigid communication paradigms. The specific term "datagram" was coined in the early 1970s by Halvor Bothner-By, a Norwegian engineer serving as a rapporteur for the CCITT on public data networks, combining "data" and "telegram." 
These theoretical foundations influenced the practical design of early networks, particularly the ARPANET project initiated by the U.S. Advanced Research Projects Agency (ARPA) in the late 1960s. During 1969-1970 planning sessions, ARPA engineers, including Lawrence Roberts, debated network architectures after encountering Davies' and Baran's ideas through conferences and reports. The datagram approach—independent routing without virtual connections—surfaced as a compelling alternative to circuit-like models; proponents argued it offered greater flexibility, reduced complexity in node management, and better adaptability to heterogeneous computers. This perspective shaped ARPANET's adoption of packet switching with message decomposition into 1,000-bit blocks, though the final Interface Message Processor (IMP) design leaned toward virtual circuits for reliability, underscoring the ongoing tension between datagram simplicity and connection-oriented control. In parallel, European efforts advanced the pure datagram model. In 1971, French computer scientist Louis Pouzin began developing the CYCLADES network, the first implementation of a fully connectionless datagram system, where each packet was routed independently without network-level session management. Operational by 1973, CYCLADES demonstrated end-to-end host responsibility for reliability, influencing global network designs and emphasizing datagrams' efficiency for distributed systems.

Evolution and Standardization

The development of datagram protocols gained momentum in the 1970s through efforts to interconnect disparate packet-switched networks, culminating in the ARPANET's transition from the connection-oriented Network Control Program (NCP) to the TCP/IP suite. In 1974, Vinton Cerf and Robert Kahn proposed a foundational internetworking protocol that relied on datagrams as self-contained units for end-to-end delivery, allowing hosts to manage reliability while the network layer focused on best-effort forwarding. This design philosophy was implemented in the ARPANET starting with experimental tests in 1977 and fully operationalized on January 1, 1983, known as Flag Day, when TCP/IP replaced NCP across the network, enabling scalable inter-network communication without per-connection state in the core routers. Standardization of datagram formats, later stewarded by the Internet Engineering Task Force (IETF), began with RFC 791 in September 1981, which defined the Internet Protocol version 4 (IPv4) as a connectionless datagram service providing addressing, fragmentation, and routing without guarantees of delivery or order. This specification formalized the datagram's header structure, including source and destination addresses, type of service, and checksum, establishing it as the core mechanism for the early Internet. Evolution continued with RFC 2460 in December 1998, introducing IPv6 to address IPv4's address exhaustion; IPv6 retained the datagram paradigm but enhanced it with expanded 128-bit addressing, simplified header processing, flow labeling for quality of service, and built-in security through IPsec support, facilitating global scalability. Key milestones in datagram adoption extended beyond IP networks, influencing the Open Systems Interconnection (OSI) model where the datagram emerged as the primary primitive for layer 3 (network layer) operations. 
The International Organization for Standardization (ISO) incorporated this in ISO/IEC 8473 (1984), defining the Connectionless-mode Network Protocol (CLNP) as a datagram-based service for routing without prior connection setup, mirroring IP's approach and enabling interoperability in diverse environments. In parallel, the 1990s saw datagram integration into Asynchronous Transfer Mode (ATM) networks, primarily through the ATM Adaptation Layer Type 5 (AAL5), which supported connectionless datagram delivery for IP traffic over ATM's fixed-size cells; standardized by the ATM Forum in the early 1990s, this mode was adopted in telecommunications backbones for efficient multiplexing of bursty data, though it remained secondary to ATM's predominant virtual circuit orientation.

Technical Aspects

Structure and Components

A datagram is composed of two primary components: a header and a payload. The header provides the control information required for routing and processing, incorporating essential fields such as source and destination addresses to identify the sender and receiver, a protocol type to specify the upper-layer protocol encapsulated in the payload, a total length field to indicate the overall size of the datagram, a checksum for verifying header integrity, and fragmentation controls including an identification number to group related fragments, flags to manage fragmentation permissions, and an offset to position fragments within the original datagram. Datagram structures vary across protocols, but in IPv4, for example, the header includes all of these fields. The payload constitutes the variable-length data portion that transports the actual message or application content, lacking a fixed internal format as it varies based on the sending application or protocol. In terms of overall format, datagrams employ a header with a fixed minimum size that can vary through the inclusion of optional fields for extensibility, such as source-routing or timestamp parameters, while the total length field encompasses both header and payload to define the complete unit. This design facilitates adaptation to diverse network conditions. For IPv4 datagrams, at the byte level, the layout sequences fields starting with header delineation indicators (Version and IHL), followed by service priorities (TOS), length specification (Total Length), fragmentation elements (Identification, Flags, Fragment Offset), survival time (TTL), protocol identifier, header checksum, source and destination addresses, and any options, concluding with the payload to enable straightforward parsing by network devices. Error detection in datagrams relies on the header checksum, which in IPv4 is the 16-bit one's complement of the one's complement sum of the header's 16-bit words (with the checksum field zeroed during computation), allowing intermediate routers to validate and discard corrupted headers without affecting the payload directly.
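The IPv4 header checksum described above can be computed in a few lines. The field values in the sample header below are fabricated for illustration, not taken from a captured packet:

```python
import struct

def ipv4_header_checksum(header: bytes) -> int:
    """16-bit one's complement of the one's complement sum of the header's
    16-bit words (RFC 791); the checksum field must be zeroed beforehand."""
    total = 0
    for (word,) in struct.iter_unpack("!H", header):
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return (~total) & 0xFFFF

# Hand-built 20-byte header with the checksum field (bytes 10-11) zeroed.
hdr = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 40,           # version/IHL, TOS, total length
    0x1C46, 0x4000,        # identification, flags+offset (DF set)
    64, 6, 0,              # TTL, protocol (6 = TCP), checksum placeholder
    bytes([10, 0, 0, 1]),  # source 10.0.0.1
    bytes([10, 0, 0, 2]),  # destination 10.0.0.2
)
print(hex(ipv4_header_checksum(hdr)))   # 0xa88
```

A useful property of this scheme: recomputing the checksum over a header that already contains the correct checksum yields zero, which is how routers validate headers in one pass.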

Processing and Delivery

In datagram networks, the delivery process involves independent forwarding of each datagram at every intermediate node, where the destination address in the header determines the next hop without maintaining per-flow state. In IP networks, routers consult their routing tables, populated by protocols such as the Routing Information Protocol (RIP) or Open Shortest Path First (OSPF), to select the appropriate outgoing interface and next-hop router based on the longest matching prefix for the destination. This connectionless approach allows datagrams from the same source to take varying paths, potentially arriving out of order, as no sequence or ordering guarantees are enforced during transit. Delivery semantics in datagram systems provide best-effort transmission, meaning datagrams are forwarded without reliability assurances, acknowledgments, or retransmissions by the network layer. If a datagram exceeds the maximum transmission unit (MTU) of an intervening link, fragmentation may occur at the source or intermediate nodes, dividing it into smaller pieces that are individually routed and reassembled only at the final destination using header fields like identification and fragment offset values. Out-of-order arrivals are handled by endpoint reassembly processes, which buffer and reorder fragments based on these fields, though timeouts (typically 60-120 seconds) discard incomplete datagrams to prevent indefinite waits. At the network layer, the datagram serves as the fundamental unit for internetworking, enabling communication across heterogeneous subnetworks by encapsulating data in self-contained packets. Routers and switches examine the datagram header—particularly the destination address and options—to make autonomous next-hop decisions; in IP, this includes decrementing the time-to-live (TTL) field to prevent indefinite looping and discarding expired datagrams. This header-driven processing ensures scalability in large networks, as each router operates without knowledge of prior or subsequent datagrams in a flow. 
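The destination-based table lookup described above amounts to a longest-prefix match. This sketch uses Python's ipaddress module; the routes and interface names are invented for illustration:

```python
import ipaddress

# A toy routing table: (prefix, outgoing interface).  Real tables are built
# by protocols like RIP or OSPF; these entries are invented.
routes = [
    (ipaddress.ip_network("10.0.0.0/8"), "eth0"),
    (ipaddress.ip_network("10.1.0.0/16"), "eth1"),
    (ipaddress.ip_network("0.0.0.0/0"), "eth2"),   # default route
]

def next_hop(dst: str) -> str:
    """Pick the outgoing interface by longest matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, ifc) for net, ifc in routes if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.3"))   # eth1 (the /16 beats the /8)
print(next_hop("10.9.9.9"))   # eth0
print(next_hop("8.8.8.8"))    # eth2, via the default route
```

Each lookup is independent, which is exactly why two datagrams to the same destination can diverge if the table changes between them.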
Datagram networks lack built-in backpressure mechanisms, so congestion can lead to silent loss of datagrams when buffers overflow, with no automatic throttling of the sender. In IP networks, error conditions, such as unreachable destinations or TTL exhaustion, are reported asynchronously via the Internet Control Message Protocol (ICMP), which generates messages like Destination Unreachable or Time Exceeded to inform the source or intermediates without disrupting the forwarding plane. Higher-layer protocols must then interpret these reports to implement recovery, as the network layer provides no inherent error correction.
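The ICMP error reports mentioned above begin with a common type/code/checksum prefix (RFC 792), which a receiver can classify in a few lines. The sample bytes below are fabricated, and only two of the many ICMP types are mapped here:

```python
import struct

# Partial map of ICMP message types; the two shown are the error reports
# discussed in the text.
ICMP_TYPES = {3: "Destination Unreachable", 11: "Time Exceeded"}

def classify_icmp(message: bytes) -> str:
    """Decode the leading type, code, and checksum fields of an ICMP message."""
    icmp_type, code, _checksum = struct.unpack("!BBH", message[:4])
    name = ICMP_TYPES.get(icmp_type, f"type {icmp_type}")
    return f"{name} (code {code})"

print(classify_icmp(bytes([11, 0, 0, 0])))  # Time Exceeded (code 0): TTL hit 0
print(classify_icmp(bytes([3, 3, 0, 0])))   # Destination Unreachable (code 3)
```

Tools like traceroute exploit exactly this: they send datagrams with deliberately small TTLs and read the Time Exceeded reports that come back.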

Comparisons

Datagram vs. Virtual Circuits

Virtual circuit networks operate on a connection-oriented model, where a logical path is established between the source and destination before data transfer begins. This involves an initial setup phase to reserve resources along the path and maintain per-session state at each intermediate switch or router, including labels such as virtual circuit identifiers (VCIs) for packets. For instance, the X.25 protocol suite implements this by creating on-demand virtual circuits that share physical links among multiple users while appearing dedicated during the session, with switches performing error checking, acknowledgments, and retransmissions to ensure reliability. Datagram networks, by contrast, are connectionless, lacking any setup or teardown phases; each packet, or datagram, carries its full destination address and is routed independently based on current network conditions, enabling stateless forwarding at switches. This fundamental difference means datagrams do not guarantee ordered delivery or inherent reliability, as packets may arrive out of sequence or be lost, whereas virtual circuits use sequence numbers and state information to enforce ordering and retransmission along a fixed path once established. Additionally, datagram headers are larger due to complete addressing, while virtual circuit packets use compact VCIs that are link-local and swapped at each hop, reducing per-packet overhead after setup. Datagrams excel in flexibility and fault tolerance, as independent routing allows packets to dynamically avoid failed links or congested paths without disrupting the entire communication, promoting robustness in failure-prone wide-area networks. However, this comes at the cost of higher overhead for end-to-end reliability, which must be handled by upper-layer protocols to manage losses and reordering. 
Virtual circuits provide superior quality of service (QoS) through advance resource reservation, such as buffer reservations in X.25, enabling predictable delay and throughput guarantees, but they introduce setup latency (typically one round-trip time) and scalability challenges from storing state for each active circuit across the network. Datagram switching suits bursty, unpredictable traffic where adaptability and efficient resource sharing are paramount, such as in environments with variable loads or intermittent connectivity. Virtual circuits, however, are better suited for long-lived, constant-bit-rate applications demanding low jitter and assured delivery, like circuit-emulating services in early packet-switched networks.
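The contrast between per-circuit state and stateless forwarding can be made concrete with a toy model of a single switch. All tables, ports, and VCI values below are invented for illustration:

```python
# Virtual circuit: setup installs a mapping (in_port, in_vci) -> (out_port,
# out_vci) at every switch on the path; data packets then carry only a VCI.
vc_table = {(1, 42): (3, 7)}

def forward_vc(in_port: int, in_vci: int):
    """Forward using per-circuit state; raises KeyError if no setup ran."""
    out_port, out_vci = vc_table[(in_port, in_vci)]
    return out_port, out_vci          # VCI is swapped at every hop

# Datagram: no setup; each packet carries its full destination address and
# is matched against the routing table independently, so a route change
# affects the very next packet.
routes = {"10.1": 3, "10.2": 1}

def forward_datagram(dst: str) -> int:
    return routes[dst[:4]]            # toy prefix match on the address string

print(forward_vc(1, 42))              # (3, 7)
print(forward_datagram("10.2.0.9"))   # 1
```

The KeyError in the virtual-circuit path is the point: without prior setup state, the switch simply cannot forward, whereas the datagram switch needs nothing beyond its routing table.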

Datagram vs. Connection-Oriented Packets

Connection-oriented packets, as exemplified by TCP segments, establish and maintain an end-to-end virtual connection between sender and receiver before data transmission, ensuring reliable delivery through mechanisms such as acknowledgments, sequencing, and flow control. Acknowledgments confirm receipt of data segments, while sequence numbers order the delivery of bytes and detect duplicates or losses, triggering retransmissions if necessary. Flow control uses a sliding window to prevent overwhelming the receiver, dynamically adjusting the amount of unacknowledged data in transit. This stateful approach maintains connection variables in a Transmission Control Block (TCB) at each endpoint, tracking parameters like the next sequence number to send or receive. In contrast, datagrams operate on a connectionless basis with no session state or connection setup, treating each packet as an independent entity without guarantees of delivery, order, or error recovery. Protocols like IP and UDP provide this connectionless service, where the sender dispatches datagrams without expecting acknowledgments or maintaining per-connection state, relying instead on the underlying network's best-effort forwarding. Unlike connection-oriented packets, datagrams do not inherently detect losses or reorder out-of-sequence arrivals, making them unsuitable for applications requiring strict reliability without additional upper-layer handling. Layer implications further distinguish these approaches: datagrams are typically implemented at the network layer, such as in the Internet Protocol (IP), where they enable routing across diverse networks without per-flow state. Connection-oriented services, however, reside at the transport layer, like TCP, which builds reliability atop datagram-based network services by encapsulating segments within IP datagrams. This layering allows TCP to provide end-to-end guarantees over an unreliable datagram substrate. 
The trade-offs between datagrams and connection-oriented packets center on performance and functionality: datagrams offer lower latency due to the absence of connection setup and teardown overhead, while also supporting multicast and broadcast efficiently without per-receiver state. However, this comes at the cost of requiring application-layer protocols to implement any needed reliability, potentially increasing complexity. Conversely, connection-oriented packets reduce errors and ensure ordered delivery but introduce overhead from handshakes, state maintenance, and retransmissions, which can degrade performance in high-latency or lossy environments.
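The behavioral difference between the two delivery models can be simulated in a few lines. This is a deliberately simplified model (the loss pattern is fixed, and the "reliable" sender retransmits until everything arrives), not an implementation of TCP:

```python
def datagram_send(packets, lost):
    """Best-effort: losses are silent, and the gap is left to the application."""
    return [p for p in packets if p not in lost]

def reliable_send(packets, lost):
    """Connection-oriented sketch: retransmit unacknowledged sequence numbers
    until all arrive, then let the receiver reorder by sequence number."""
    delivered, pending = [], list(packets)
    while pending:
        p = pending.pop(0)
        if p in lost:
            lost = lost - {p}        # assume the retransmission succeeds
            pending.append(p)        # resend after a (notional) timeout
        else:
            delivered.append(p)
    return sorted(delivered)         # receiver reorders by sequence number

pkts, drops = [1, 2, 3, 4], {2}
print(datagram_send(pkts, drops))    # [1, 3, 4]
print(reliable_send(pkts, drops))    # [1, 2, 3, 4]
```

The retransmission loop and the final sort are exactly the state and work that the datagram model declines to perform.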

Examples and Applications

Internet Protocol Datagram

The Internet Protocol (IP) datagram forms the core unit of data transmission at the network layer of the TCP/IP protocol suite, enabling the routing of packets across diverse interconnected networks without establishing end-to-end connections. As specified in the foundational standards, IP datagrams encapsulate higher-layer protocols such as TCP or UDP, providing best-effort delivery where the protocol handles addressing, routing, and fragmentation but does not guarantee reliability or order. This connectionless approach underpins the scalability of the Internet, allowing datagrams to be forwarded independently based on destination addresses. In IPv4, the datagram begins with a minimum 20-byte header, which can extend to 60 bytes with options, structured to support efficient processing by routers. Key fields include the 4-bit Version field set to 4, indicating the protocol version; the 4-bit Internet Header Length (IHL) field specifying the header size in 32-bit words; the 8-bit Type of Service (TOS) field for quality-of-service priorities like precedence and delay; the 16-bit Total Length field denoting the entire datagram size in octets (up to 65,535); the 16-bit Identification field for fragment matching; 3-bit Flags including the Don't Fragment (DF) and More Fragments (MF) bits; the 13-bit Fragment Offset field for reassembly positioning in 8-byte units; the 8-bit Time to Live (TTL) field to prevent infinite loops by decrementing per hop; the 8-bit Protocol field identifying the encapsulated protocol (e.g., 6 for TCP); the 16-bit Header Checksum for error detection; and 32-bit Source and Destination Address fields. The following table summarizes these core fields for clarity:
Field               | Size (bits) | Purpose
Version             | 4           | Protocol version (4)
IHL                 | 4           | Header length in 32-bit words (min 5)
Type of Service     | 8           | QoS parameters
Total Length        | 16          | Datagram length in octets
Identification      | 16          | Fragment reassembly ID
Flags               | 3           | Fragmentation controls (DF, MF)
Fragment Offset     | 13          | Position in original datagram
Time to Live        | 8           | Hop limit
Protocol            | 8           | Next-layer protocol
Header Checksum     | 16          | Header integrity check
Source Address      | 32          | Sender address
Destination Address | 32          | Receiver address
IPv6 introduces enhancements to address scalability and simplify processing, featuring a fixed 40-byte base header without variable-length options in the main structure, which reduces parsing overhead. Its fields comprise the 4-bit Version set to 6; an 8-bit Traffic Class for QoS prioritization; a 20-bit Flow Label to tag packets for special handling as a flow; a 16-bit Payload Length covering extension headers and data; an 8-bit Next Header indicating the subsequent header type; an 8-bit Hop Limit analogous to TTL; and expanded 128-bit Source and Destination Addresses to support vastly more unique identifiers. Optional functionality is handled via extension headers, such as Hop-by-Hop Options (processed en route), Routing, Fragment, and Destination Options, chained after the base header and referenced by the Next Header field. Fragmentation in IP occurs at the network layer when a datagram exceeds the path's maximum transmission unit (MTU), with reassembly solely at the destination to avoid intermediate router state. In IPv4, this is managed using the Identification, Flags (including MF), and Fragment Offset fields, where fragments are aligned on 8-byte boundaries and the DF flag can prevent fragmentation if set. IPv6 shifts fragmentation exclusively to the source host via a dedicated Fragment extension header containing a 13-bit fragment offset (in 8-byte units), a 32-bit identification, and an M (more fragments) flag, promoting path MTU discovery to minimize on-path fragmentation. As the network layer carrier, the IP datagram integrates seamlessly with the TCP/IP suite by demultiplexing payloads to transport-layer protocols via the Protocol or Next Header field, facilitating layered communication across heterogeneous networks.
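The IPv4 field layout summarized above can be decoded with a few lines of struct unpacking. The sample header bytes here are fabricated, not taken from a captured packet:

```python
import struct
import ipaddress

def parse_ipv4_header(raw: bytes) -> dict:
    """Decode the fixed 20-byte IPv4 header fields (RFC 791 layout)."""
    v_ihl, tos, total, ident, fl_off, ttl, proto, cksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": v_ihl >> 4,
        "ihl_words": v_ihl & 0x0F,
        "total_length": total,
        "identification": ident,
        "dont_fragment": bool(fl_off & 0x4000),
        "more_fragments": bool(fl_off & 0x2000),
        "fragment_offset": fl_off & 0x1FFF,     # in 8-byte units
        "ttl": ttl,
        "protocol": proto,                      # 6 = TCP, 17 = UDP
        "src": str(ipaddress.ip_address(src)),
        "dst": str(ipaddress.ip_address(dst)),
    }

sample = bytes.fromhex("450000541c464000400684b70a0000010a000002")
info = parse_ipv4_header(sample)
print(info["version"], info["ttl"], info["protocol"], info["dst"])
# 4 64 6 10.0.0.2
```

Note how the Version, IHL, Flags, and Fragment Offset fields share bytes and must be separated with shifts and masks, reflecting the bit-packed layout in the table above.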

User Datagram Protocol Datagram

The User Datagram Protocol (UDP) is a minimalistic transport-layer protocol that provides a datagram-oriented service atop the Internet Protocol (IP), enabling applications to send discrete messages without establishing connections. UDP datagrams are lightweight, consisting of an 8-byte header followed by optional data, which facilitates low-overhead communication suitable for time-sensitive or high-volume data transfer. Unlike connection-oriented protocols, UDP does not guarantee delivery, ordering, or error correction beyond a basic checksum, prioritizing speed and simplicity. The UDP header is fixed at 8 bytes and includes four fields: a 16-bit source port (identifying the sending application, optional and set to 0 if unused), a 16-bit destination port (specifying the receiving application), a 16-bit length field (indicating the total size in octets of the UDP header and data, with a minimum of 8), and a 16-bit checksum (for error detection). To integrate with IP, UDP employs a pseudo-header during checksum calculation, which incorporates the source and destination IP addresses, a zero byte, the protocol number (17 for UDP), and the UDP length; this ensures integrity checks against the underlying IP layer. The checksum is optional and, if computed, covers the pseudo-header, the UDP header, and the data (padded with a zero byte if odd-length); a transmitted checksum of all zeros signifies no protection was applied, while all ones indicates a computed value of zero. In operation, UDP encapsulates its header and data into datagrams for transmission, supporting both unicast (to a single destination) and multicast or broadcast (to multiple recipients via group or broadcast addressing) without requiring handshakes or state maintenance at endpoints. This connectionless model allows UDP to handle bursty or real-time traffic efficiently, as it avoids the overhead of acknowledgments or retransmissions. UDP datagrams are carried within IP packets, leveraging IP for routing while providing port-based demultiplexing at the transport layer. 
UDP finds widespread use in applications demanding low latency over reliability, such as the Domain Name System (DNS) for query-response exchanges, the Dynamic Host Configuration Protocol (DHCP) for address allocation, and real-time streaming via the Real-time Transport Protocol (RTP) over UDP for multimedia delivery. These scenarios exploit UDP's speed—often achieving higher throughput than reliable protocols—but accept potential loss or duplication, shifting reliability burdens to the application where needed. For instance, RTP applications may implement selective retransmission for critical data while discarding non-essential packets to maintain playback continuity.
