Presentation layer

The presentation layer, the sixth layer of the Open Systems Interconnection (OSI) model, serves as the syntax layer responsible for translating data between the formats used by the application layer and those used by the underlying network layers, ensuring interoperability across diverse systems. It acts as a translator that formats data for transmission and reception, handling the semantics and syntax of information exchanged between application entities. Key functions of the presentation layer include data translation, such as converting between character encodings like EBCDIC and ASCII; data compression to optimize data size for efficient transmission; and encryption and decryption to secure data and preserve confidentiality during exchange. These responsibilities ensure that the data reaching the receiving application is in a readable and usable format, regardless of the originating system's conventions. For instance, its functions are embodied in protocols like Secure Sockets Layer (SSL) and Transport Layer Security (TLS) for encryption, as well as standards such as JPEG for image compression and MPEG for video formatting. In practice, the presentation layer facilitates seamless communication in heterogeneous networks by negotiating transfer syntaxes and maintaining data integrity, though it is often implemented in conjunction with the application layer in modern protocol stacks like TCP/IP. This layer's role is crucial for applications involving multimedia, secure transactions, and cross-platform data exchange, underscoring its importance in the OSI model's layered separation of network functions.

Definition and Role

Position in the OSI Model

The Open Systems Interconnection (OSI) model, defined in ISO/IEC 7498-1:1994, organizes network communication functions into seven distinct layers to promote standardization and interoperability among diverse systems. These layers are: the physical layer (layer 1), which handles bit transmission over physical media; the data link layer (layer 2), responsible for node-to-node delivery and error detection; the network layer (layer 3), managing routing and logical addressing; the transport layer (layer 4), ensuring end-to-end delivery and reliability; the session layer (layer 5), coordinating communication sessions; the presentation layer (layer 6), focusing on data formatting and translation; and the application layer (layer 7), providing user-facing network services.

The presentation layer occupies the sixth position in this hierarchy, situated directly between the session layer (layer 5) below it and the application layer (layer 7) above it. In the downward data flow on the sending side, application-layer data—such as files or application-specific messages—is passed to the presentation layer for syntax processing and formatting before being forwarded to the session layer for dialog management. Conversely, in the upward data flow on the receiving side, data ascends from the session layer to the presentation layer, where it is translated and prepared in a usable format for the application layer. This bidirectional processing ensures that the presentation layer acts as a mediator, encapsulating data with appropriate syntax information during transmission and decapsulating it upon reception.

A core role of the presentation layer is to enable interoperability across heterogeneous systems by defining and negotiating data representations independent of the underlying hardware or software differences. It achieves this through mechanisms like abstract syntax notation, particularly Abstract Syntax Notation One (ASN.1) as specified in ISO/IEC 8824-1:2021, which provides a formal way to describe data structures in an implementation-independent manner, allowing transfer syntax rules to encode them for transmission. This abstraction ensures that data semantics remain preserved while syntax variations are resolved, facilitating communication between systems with differing data formats, such as varying character sets or encoding methods. The layer is also known as the "syntax layer" because it emphasizes the syntactic aspects of data exchange—such as formatting and encoding—rather than the semantic meaning, which is handled by the application layer.

Core Responsibilities

The presentation layer's primary responsibility is to provide independence from differences in data representation encountered by the application layer, achieving this by managing the syntax and semantics of the data exchanged between systems. This layer acts as an intermediary that shields the application layer from the complexities of varying data formats across heterogeneous systems, ensuring seamless interoperability without requiring applications to handle low-level representation details. It ensures that data originating from an application's specific format is converted into a standardized form suitable for transmission over the network, thereby abstracting application-specific data structures and enabling reliable communication regardless of underlying hardware or software differences. This abstraction process allows diverse systems to exchange information effectively, as the presentation layer handles the necessary transformations to make the data network-compatible while preserving its intended meaning at the destination.

The presentation layer plays a crucial role in maintaining data integrity during transfer by converting representations in a way that does not alter the semantic meaning of the data, thus guaranteeing that the information remains accurate and interpretable upon receipt. Through this conversion, it prevents distortions that could arise from incompatible encoding schemes, supporting the overall reliability of end-to-end data exchange in open systems interconnection environments.

A fundamental concept underlying these responsibilities is the distinction between abstract syntax, which describes the logical structure and semantics of the data independent of any specific encoding, and concrete (transfer) syntax, which defines the actual bit patterns and encoding rules used for transmission; the presentation layer manages the mapping and negotiation between these to facilitate compatible exchange. This separation allows for flexible transfer syntaxes that can be agreed upon dynamically between communicating entities, enhancing adaptability without compromising the data's intrinsic meaning.
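To make the abstract/concrete distinction tangible, the following Python sketch serializes one logical record (the abstract syntax) under two interchangeable concrete transfer syntaxes: JSON text and a fixed big-endian binary layout. This is an illustration, not an OSI-specified mechanism, and the record fields are invented for the example.

```python
# One abstract value, two concrete transfer syntaxes (illustrative only).
import json
import struct

record = {"id": 7, "temp": 21.5}  # abstract syntax: the logical content

# Concrete syntax 1: JSON text encoded as UTF-8 bytes.
as_json = json.dumps(record).encode("utf-8")

# Concrete syntax 2: fixed binary layout in big-endian (network) byte order:
# a 4-byte unsigned int followed by an 8-byte IEEE 754 double.
as_binary = struct.pack("!Id", record["id"], record["temp"])

# Either byte stream decodes back to the same abstract value.
decoded_json = json.loads(as_json.decode("utf-8"))
dec_id, dec_temp = struct.unpack("!Id", as_binary)

assert decoded_json == record
assert (dec_id, dec_temp) == (record["id"], record["temp"])
print(len(as_json), "bytes as JSON;", len(as_binary), "bytes as binary")
```

The abstract value is unchanged in either case; only the wire representation differs, which is exactly the degree of freedom a negotiated transfer syntax exploits.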

Key Functions

Data Representation and Translation

The presentation layer ensures interoperability between systems with differing data representations by translating data formats at the sender and receiver ends. This translation process involves converting data from one system's native format to a standardized network format and back, preventing compatibility issues arising from hardware or software differences. For instance, it handles the conversion of text data encoded in EBCDIC, commonly used in IBM mainframe systems, to ASCII, the standard for most other computing environments, thereby enabling seamless communication across heterogeneous networks.

A core aspect of this functionality is serialization, where complex data structures—such as hierarchical records or objects—are transformed into linear byte streams suitable for transmission over the network. One prevalent technique is Type-Length-Value (TLV) encoding, which structures data by prefixing each element with a tag identifying its type, followed by its length and the value itself, allowing for flexible and extensible representation without fixed field positions. This method facilitates efficient packing and unpacking of data, ensuring that the receiving system can reconstruct the original structure accurately regardless of the originating platform's conventions.

In handling multimedia data, the presentation layer formats images, audio, and video into transmittable streams that preserve essential fidelity while adhering to network constraints. It negotiates and applies appropriate representations, such as converting proprietary graphics formats to standardized ones like JPEG for images or ensuring audio streams are in PCM or similar linear formats, thus supporting applications like video conferencing without requiring end-system modifications. This process maintains consistency across diverse devices, from desktops to mobile endpoints.

To achieve architecture independence, the presentation layer employs Abstract Syntax Notation One (ASN.1), a formal notation defined by ITU-T Recommendation X.680, which describes data structures in an abstract manner detached from specific machine implementations. ASN.1 allows definitions of types like integers or sequences without specifying low-level details, such as byte order—addressing differences between big-endian (most significant byte first) and little-endian (least significant byte first) systems—through associated encoding rules like the Basic Encoding Rules (BER). This abstraction enables consistent data exchange, as the transfer syntax handles the concrete serialization independently of the abstract syntax.
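The following Python sketch illustrates both ideas described above: an EBCDIC-to-ASCII character conversion, using Python's built-in cp500 EBCDIC codec as a stand-in for a mainframe code page, and a toy TLV codec. The type codes and field layout are hypothetical, not any real protocol's wire format.

```python
# Illustrative EBCDIC translation and TLV encoding (hypothetical format).
import struct

# EBCDIC -> ASCII translation via Python's built-in cp500 EBCDIC codec.
ebcdic = "HELLO".encode("cp500")                # EBCDIC bytes (H is 0xC8)
assert ebcdic.decode("cp500").encode("ascii") == b"HELLO"

TYPE_UTF8 = 0x01    # assumed type code for UTF-8 text
TYPE_UINT32 = 0x02  # assumed type code for a 32-bit unsigned integer

def tlv_encode(t: int, value: bytes) -> bytes:
    # 1-byte type, 2-byte big-endian length, then the value octets.
    return struct.pack("!BH", t, len(value)) + value

def tlv_decode(buf: bytes):
    """Yield (type, value) pairs from a concatenated TLV byte stream."""
    i = 0
    while i < len(buf):
        t, length = struct.unpack_from("!BH", buf, i)
        i += 3
        yield t, buf[i:i + length]
        i += length

stream = (
    tlv_encode(TYPE_UTF8, "héllo".encode("utf-8"))
    + tlv_encode(TYPE_UINT32, struct.pack("!I", 42))
)

for t, v in tlv_decode(stream):
    if t == TYPE_UTF8:
        print("text:", v.decode("utf-8"))
    elif t == TYPE_UINT32:
        print("uint32:", struct.unpack("!I", v)[0])
```

Because each element carries its own type and length, a receiver can skip unknown types and new fields can be appended without breaking older decoders, which is why TLV framing appears so often in extensible protocols.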

Character Encoding and Formatting

The presentation layer facilitates the conversion between different character encoding schemes to enable interoperability in network communications, abstracting differences in how systems represent text data. This involves mapping characters from one coded character set (CCS) to another, such as transforming ISO-8859-1 (also known as Latin-1), which encodes 256 characters primarily for Western European languages using 8-bit values, to UTF-8, a variable-length encoding compatible with ISO/IEC 10646 (Unicode) that supports over 140,000 characters across global scripts. Such conversions can be exact when the source repertoire is a subset of the target's, or approximate otherwise, often using predefined mapping tables to preserve semantic meaning while adjusting for differences in code point assignments.

In addition to encoding translation, the presentation layer manages text formatting to ensure consistent display and processing across diverse platforms, including the handling of control characters that influence layout and interpretation. For instance, it standardizes line endings, converting between the CRLF pair (carriage return followed by line feed, ASCII 13 then 10) common in Windows environments and the single LF (line feed, ASCII 10) used in Unix-like systems, thereby avoiding rendering artifacts like extra blank lines or merged paragraphs in transmitted documents. This formatting role extends to other control characters, such as those for tabs or escapes, ensuring that the structural integrity of text is maintained during transmission without platform-specific assumptions disrupting interpretation.

The presentation layer supports internationalization by processing multi-byte character sequences and bidirectional scripts, enabling global network applications to handle diverse linguistic requirements seamlessly. Encodings like UTF-8 allow efficient representation of multi-byte characters—for example, Han ideographs used in Chinese or Japanese may require up to four bytes—while preserving the order and context needed for correct reassembly at the receiver. For right-to-left (RTL) scripts, such as those used for Arabic or Hebrew, the layer ensures transmission of Unicode code points that include embedding controls (e.g., U+202A for left-to-right embedding), facilitating proper algorithmic reordering via the Unicode Bidirectional Algorithm without altering the underlying byte stream. This capability is essential for multilingual environments, where mixed LTR/RTL text must render accurately to avoid visual confusion or misordering.

A critical aspect of this handling involves addressing endianness in multi-byte representations, such as those in UTF-16 for Unicode, where byte order can vary between big-endian (most significant byte first) and little-endian systems. The presentation layer mitigates misinterpretation by enforcing a canonical network byte order—typically big-endian, as in standards like External Data Representation (XDR)—during serialization, often prepending a byte order mark (BOM, U+FEFF) to signal the intended order and allowing decoding systems to swap bytes if necessary for local processing. This standardization prevents character garbling, as in scenarios where a little-endian sender's UTF-16 "A" (0x0041) might otherwise be read as the unrelated code point U+4100 on a big-endian receiver.
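A brief Python sketch of these conversions, using only standard-library codecs; the sample strings are arbitrary:

```python
# Minimal sketches of the conversions described above.

# 1. Re-encode ISO-8859-1 (Latin-1) text as UTF-8.
latin1_bytes = "café".encode("iso-8859-1")                      # é: 0xE9
utf8_bytes = latin1_bytes.decode("iso-8859-1").encode("utf-8")  # é: 0xC3 0xA9

# 2. Normalize Windows CRLF line endings to Unix LF.
windows_text = "line one\r\nline two\r\n"
unix_text = windows_text.replace("\r\n", "\n")

# 3. Endianness in UTF-16: the same "A" (U+0041) byte-swaps between orders.
le = "A".encode("utf-16-le")   # b'\x41\x00'
be = "A".encode("utf-16-be")   # b'\x00\x41'
# The plain "utf-16" codec prepends a byte order mark (BOM, U+FEFF)
# so the receiver can detect the order instead of guessing.
with_bom = "A".encode("utf-16")
assert with_bom[:2] in (b"\xff\xfe", b"\xfe\xff")

print(latin1_bytes.hex(), utf8_bytes.hex(), le.hex(), be.hex(), with_bom.hex())
```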

Compression and Encryption

The presentation layer in the OSI model optimizes data transmission by applying compression techniques to reduce data size, thereby minimizing bandwidth usage and transmission delays. Compression at this layer involves transforming data into a more compact form before it is passed down to the session layer, with the receiving presentation layer responsible for decompression to restore the original format. Common methods include run-length encoding (RLE), which efficiently handles repetitive sequences by replacing them with a single value and a count (e.g., a string of 50 identical characters can be encoded as a pair indicating the character and its repetition count, a dramatic reduction for highly redundant data), and Huffman coding, a variable-length code that assigns shorter bit sequences to more frequent symbols based on their frequency of occurrence.

Compression techniques are categorized as lossless or lossy depending on the data type and application requirements. Lossless methods, such as RLE and Huffman coding, preserve all original information, making them suitable for text or numerical data where integrity is paramount; for instance, these ensure exact reconstruction in file transfers such as ZIP archives. In contrast, lossy compression discards less critical details to achieve higher ratios, making it ideal for multimedia such as images or video; examples include JPEG for still images, which approximates pixel values, and MPEG for video streams, reducing file sizes by factors of 10-50 while maintaining perceptual quality. The choice of method is negotiated between peers to balance efficiency and fidelity.

Encryption and decryption at the presentation layer secure data confidentiality by scrambling the payload in a standardized format, ensuring that only authorized recipients can access the content during transfer. Symmetric ciphers, such as the Advanced Encryption Standard (AES), use a single shared key for both encryption and decryption, offering high-speed performance for bulk data; AES operates on 128-bit blocks with key sizes of 128, 192, or 256 bits, providing robust protection against brute-force attacks. Asymmetric ciphers, like Rivest-Shamir-Adleman (RSA), employ public-private key pairs to enable secure key exchange without prior shared secrets; RSA, based on the difficulty of factoring large primes, is often used to encrypt session keys for subsequent symmetric operations. These mechanisms focus on transforming data syntax to prevent eavesdropping and tampering at the format level.

The presentation layer facilitates negotiation of security parameters, including encryption algorithms, key lengths, and compression levels, through protocol exchanges during session establishment. Using standards like Abstract Syntax Notation One (ASN.1), peers exchange capabilities to agree on compatible transformations, such as selecting AES-256 for encryption or Huffman coding for compression, ensuring interoperability without exposing sensitive details prematurely. This negotiation occurs via control messages that define the abstract syntax and transfer syntax for the connection. In modern practice, presentation-layer functionality can include comprehensive security features, such as those in protocols like Transport Layer Security (TLS), providing end-to-end integrity, replay protection, and endpoint authentication via certificates.
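As an illustration of the lossless case, the following toy run-length codec in Python replaces byte runs with (count, symbol) pairs. Real presentation-layer compressors are far more sophisticated, but the round-trip principle is identical.

```python
# A toy lossless run-length encoder/decoder (illustrative only).

def rle_encode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        # Count repeats, capping at 255 so the count fits in one byte.
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])  # (count, symbol) pair
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(data), 2):
        count, symbol = data[i], data[i + 1]
        out += bytes([symbol]) * count
    return bytes(out)

payload = b"A" * 50 + b"B" * 3          # highly redundant input
packed = rle_encode(payload)
assert rle_decode(packed) == payload    # lossless round trip
print(f"{len(payload)} bytes -> {len(packed)} bytes")  # 53 -> 4
```

Note that RLE only wins on redundant input; on random data the (count, symbol) pairs would double the size, which is one reason peers negotiate whether and how to compress.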

Associated Protocols and Standards

Traditional Protocols

The traditional protocols associated with the presentation layer primarily emerged in the 1980s to address the need for platform-independent data exchange in early distributed computing environments, such as the Network File System (NFS), by emphasizing canonical data formats for serialization and encoding.

External Data Representation (XDR) was developed by Sun Microsystems in 1987 as a standard for serializing data in a platform-independent manner, particularly for use with remote procedure call (RPC) mechanisms in distributed systems. XDR defines a set of basic data types, such as integers and strings, and specifies their byte order (big-endian) and alignment to ensure consistent representation across heterogeneous architectures, facilitating interoperability without requiring runtime translation at the application level. It played a key role in protocols like NFS, where it enabled the transfer of file data between diverse hosts.

Abstract Syntax Notation One (ASN.1) is an ISO and ITU-T standard, initially specified in 1988 under CCITT Recommendations X.208 and X.209, for formally defining the abstract syntax of data structures used in telecommunications and networking protocols. ASN.1 provides a notation to describe complex data types, including sequences, sets, and choices, independent of any specific encoding or implementation language, allowing protocols to specify message formats that can be unambiguously interpreted across systems. It has been foundational in standards for directory services, security, and signaling, such as those in the OSI reference model.

The Basic Encoding Rules (BER) serve as the original encoding scheme for ASN.1, defined in Recommendation X.209 (1988) and later in ITU-T X.690, employing a tag-length-value (TLV) format to represent values as octet strings. In this structure, the tag identifies the data type, the length specifies the size of the value, and the value contains the encoded content, which may be primitive or constructed (nested TLV elements). BER's flexibility supports varied representations but can produce multiple valid encodings for the same value; it is exemplified in applications like X.509 digital certificates, where ASN.1 structures for public keys and signatures are encoded for secure transmission.
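The TLV structure of BER can be sketched by hand for two universal ASN.1 types. The following Python example encodes an INTEGER (universal tag 0x02) and a UTF8String (tag 0x0C) with short-form lengths only; it is a simplification for illustration, not a complete X.690 implementation, and production code would use an ASN.1 library.

```python
# Hand-rolled BER-style tag-length-value encoding (simplified sketch).

def ber_integer(n: int) -> bytes:
    # Two's-complement content octets, big-endian (minimal for n >= 0).
    length = max(1, (n.bit_length() + 8) // 8)
    content = n.to_bytes(length, "big", signed=True)
    return bytes([0x02, len(content)]) + content   # tag, length, value

def ber_utf8string(s: str) -> bytes:
    content = s.encode("utf-8")
    assert len(content) < 128, "short-form length only in this sketch"
    return bytes([0x0C, len(content)]) + content   # tag, length, value

print(ber_integer(5).hex())        # 020105
print(ber_integer(300).hex())      # 0202012c
print(ber_utf8string("hi").hex())  # 0c026869
```

The extra content octet for 300 versus 5 shows the "length" field doing its job: the decoder learns how many value octets follow without any out-of-band schema.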

Modern Standards and Formats

In contemporary networking, the presentation layer's functions are supported by several internet-era standards that facilitate data serialization, encoding, and interchange across diverse systems. These modern formats emphasize efficiency, interoperability, and adaptability to web and distributed environments, evolving from the OSI model's abstract principles to practical implementations in protocols like HTTP and email.

Multipurpose Internet Mail Extensions (MIME), defined in RFC 2045, enables the encoding of non-ASCII text, binary attachments, and multimedia content within text-based protocols such as email (SMTP) and web transfers (HTTP). This standard specifies headers like Content-Type and Content-Transfer-Encoding to describe data formats and handle 8-bit to 7-bit conversions, ensuring compatibility across heterogeneous networks. Originally published in 1996 and updated through subsequent RFCs, MIME remains foundational for representing diverse data types in internet communications.

JavaScript Object Notation (JSON), standardized as ECMA-404 in 2013, provides a lightweight, human-readable text format for serializing structured data, widely used in web APIs for request-response exchanges between clients and servers. Its syntax, based on key-value pairs and arrays, supports easy parsing in languages like JavaScript, Python, and Java, promoting seamless data interchange without platform-specific dependencies. JSON Schema, a complementary specification, allows for validation and documentation of JSON instances, enhancing reliability in API contracts and data pipelines.

Protocol Buffers (protobuf), introduced by Google in 2008, is a binary serialization format designed for high-performance data exchange in microservices and distributed systems. It uses a schema defined in .proto files to generate efficient code for encoding and decoding structured messages, resulting in smaller payloads and faster processing compared to text-based alternatives like XML. Protobuf's backward and forward compatibility features support evolving schemas in large-scale applications, such as gRPC-based services.

As of 2025, Apache Avro, released in 2009, has seen widespread adoption in big data ecosystems for its schema-inclusive design, which embeds schemas directly in files to enable robust serialization in distributed processing frameworks like Hadoop and Kafka. This row-oriented format facilitates compact storage and streaming of complex records, with self-describing data that reduces errors from schema mismatches across clusters. Avro's integration with such frameworks underscores its role in handling petabyte-scale datasets while maintaining interoperability.
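As a small illustration of MIME's presentation-style role, the following Python standard-library sketch wraps UTF-8 text and a binary attachment with Content-Type metadata so a text-based protocol (SMTP or HTTP) can carry them. The file name and attachment bytes are placeholders invented for the example.

```python
# Building a MIME message with the Python standard library (illustrative).
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Report"
msg.set_content("Résumé attached.")     # text/plain with charset="utf-8"
msg.add_attachment(
    b"\x89PNG\r\n\x1a\n...",            # placeholder (truncated) image bytes
    maintype="image", subtype="png",    # becomes Content-Type: image/png
    filename="chart.png",               # binary body is base64-encoded
)

raw = msg.as_bytes()  # serialized wire form with MIME headers and boundaries
print(raw.decode("ascii", errors="replace")[:200])
```

Printing the first few hundred bytes shows the Content-Type and Content-Transfer-Encoding headers that let any MIME-aware receiver reconstruct the original parts.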

Relation to Other Network Models

In the TCP/IP Model

The TCP/IP model, also known as the Internet protocol suite, organizes network communication into four layers: the link layer (or network access layer), the internet layer, the transport layer, and the application layer. Unlike the OSI model, it does not include a dedicated presentation layer; instead, the responsibilities of data representation, formatting, compression, and encryption are integrated directly into the application-layer protocols. This structure is outlined in RFC 1123, which defines the requirements for Internet hosts and emphasizes the application layer's role in handling user-facing protocols such as Telnet, FTP, and SMTP, where presentation functions are embedded to ensure end-to-end data handling without a separate abstraction.

A prominent example of this integration is the Hypertext Transfer Protocol (HTTP) and its secure variant (HTTPS), both operating at the application layer. HTTP manages data formatting through the Content-Type header, which specifies media types like text/html or application/json to ensure compatible representation between client and server. Content negotiation occurs via headers such as Accept, Accept-Encoding, and Accept-Language, allowing servers to select appropriate formats, compressions (e.g., gzip), or languages based on client preferences. For encryption, HTTPS layers Transport Layer Security (TLS) over HTTP, handling data confidentiality and integrity within the application layer rather than through a distinct presentation mechanism.

This absorption of presentation functions into the application layer results in a simpler protocol stack, facilitating faster implementation and deployment compared to the OSI model's stricter separations. However, it can lead to increased complexity—or "bloat"—in individual application protocols, as they must independently manage formatting, encoding, and encryption without relying on a unified presentation service. The TCP/IP model's design prioritizes practicality and scalability for real-world networks, a pragmatism that contributed to its dominance beginning in the early 1980s, when ARPANET transitioned to TCP/IP on January 1, 1983, and the U.S. Department of Defense mandated it as the standard for military networking in 1982.
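A client-side sketch of HTTP content negotiation using Python's standard library: the Accept* headers ask the server to choose representation, compression, and language. The URL is a placeholder, and a real server may honor only some of the requested preferences.

```python
# HTTP content negotiation from the client side (illustrative).
import urllib.request

req = urllib.request.Request(
    "https://example.com/resource",   # placeholder endpoint
    headers={
        "Accept": "application/json, text/html;q=0.8",  # preferred formats
        "Accept-Encoding": "gzip",                      # acceptable compression
        "Accept-Language": "en, fr;q=0.5",              # language preference
    },
)
with urllib.request.urlopen(req) as resp:
    print(resp.headers.get("Content-Type"))      # media type the server chose
    print(resp.headers.get("Content-Encoding"))  # e.g. gzip, if applied
```

The q-values express relative preference, so the server can fall back (here, to text/html or French) when the first choice is unavailable, which is presentation-style syntax negotiation carried out entirely inside an application protocol.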

Mapping and Overlaps

In the TCP/IP model, which underpins most modern internet communications, the functions of the OSI presentation layer—such as data translation, encoding, compression, and encryption—are predominantly absorbed into the application layer, with some aspects extending to the transport layer. This mapping reflects the TCP/IP model's more streamlined four-layer structure, where the OSI's upper layers (application, presentation, and session) are consolidated to facilitate practical implementation. For example, the Multipurpose Internet Mail Extensions (MIME) standard, integral to the Simple Mail Transfer Protocol (SMTP) in the TCP/IP application layer, handles the encoding and decoding of diverse data types like text, images, and attachments into a transportable format, directly fulfilling presentation layer roles. Similarly, Transport Layer Security (TLS), positioned atop TCP between the transport and application layers, provides encryption and decryption services that align with the OSI presentation layer's data confidentiality mechanisms, ensuring secure data representation across heterogeneous systems.

Overlaps between OSI presentation functions and TCP/IP layers become evident in protocol implementations, where data formatting and syntax handling span multiple levels without strict delineation. In Hypertext Transfer Protocol (HTTP) communications within the TCP/IP application layer, JPEG image compression is applied directly to content payloads, integrating what would be a dedicated presentation layer task in OSI—a negotiated transfer syntax for image data—into application-specific processing. Likewise, JSON parsing in web services, governed by standards like RFC 8259, occurs at the application layer during HTTP exchanges but involves translating structured data syntax and semantics, echoing presentation layer responsibilities for ensuring interoperability between differing application environments. These examples illustrate how TCP/IP protocols embed presentation logic to optimize end-to-end data flow, contrasting with OSI's theoretical separation.

Challenges arise in mapping OSI presentation concepts to the heterogeneous landscape of IP-based networking, where rigid layer boundaries often yield to integrated designs for efficiency and performance. Modern hybrid systems, blending legacy and cloud environments, leverage OSI principles for conceptual guidance but adapt them flexibly, as strict adherence could impede performance in dynamic infrastructures. A notable development as of 2025 is the role of cloud-native architectures, such as those orchestrated by Kubernetes, where presentation-related functions like encryption termination and protocol translation are increasingly delegated to service meshes (e.g., Istio or Linkerd). These meshes operate as a programmable overlay between the transport and application layers, blending session and presentation duties to enhance inter-service communication without altering core application code. This evolution further blurs OSI-TCP/IP distinctions, prioritizing resilience and observability in distributed systems.
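The TLS positioning described above can be seen directly in code: in the following Python sketch, a plain TCP socket is wrapped so that encryption happens transparently below the application protocol (HTTP here). The host name is a placeholder.

```python
# Wrapping a TCP socket with TLS: the application writes plaintext and the
# TLS layer encrypts it on the way down to TCP (illustrative sketch).
import socket
import ssl

ctx = ssl.create_default_context()  # default: certificate validation on
with socket.create_connection(("example.com", 443)) as tcp_sock:
    with ctx.wrap_socket(tcp_sock, server_hostname="example.com") as tls:
        # Application-layer protocol (HTTP) is unchanged by the wrapping.
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        print(tls.version())  # negotiated protocol, e.g. 'TLSv1.3'
        print(tls.recv(120).decode("ascii", "replace"))
```

The application code above is identical to what it would be over a plaintext socket apart from the wrap_socket call, which is precisely the "between transport and application" placement the OSI mapping describes.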

History and Evolution

Development of the OSI Model

The development of the Open Systems Interconnection (OSI) reference model, including its presentation layer, was initiated by the International Organization for Standardization (ISO) in 1977 through the establishment of Technical Committee 97, Subcommittee 16 (ISO/TC97/SC16), aimed at standardizing higher-layer protocols to enable interoperable networking. This effort addressed the growing fragmentation in computer networking during the mid-1970s, when proprietary systems dominated, such as IBM's Systems Network Architecture (SNA), which locked users into vendor-specific environments and hindered cross-vendor data exchange. The initiative sought to create vendor-neutral international standards for open systems interconnection, promoting global compatibility in data communications amid the rise of diverse computing platforms.

Influences on the OSI model's structure drew from practical experiences in early networks, including the ARPANET's transition from the Network Control Program (NCP) to a more modular layered approach, which informed the need for distinct protocol layers to manage complexity. Additionally, recommendations from the International Telegraph and Telephone Consultative Committee (CCITT, now ITU-T) on telecommunication standards and emerging data networks, such as the X.25 protocol, shaped the model's emphasis on reliable, structured data transfer across heterogeneous systems. By 1978, French engineer Hubert Zimmermann and collaborators had outlined the seven-layer architecture in internal ISO documents, with the presentation layer (layer 6) specifically defined to handle data syntax, including code conversion and reformatting for transmission.

The model was formalized and published in 1984 as ISO 7498, establishing the presentation layer's role in addressing data representation within international standards for open systems. This standard emphasized application processes that require a common data representation between communicating entities, allowing the presentation layer to manage syntax independently of application-specific formats and thereby defining its mandate for abstracting syntactic differences in global networks. The framework thus provided a foundational blueprint for data representation handling, relieving higher-layer applications from low-level syntax concerns while ensuring compatibility across diverse implementations.

Advancements Post-OSI

Following the formalization of the OSI model in the early 1980s, presentation layer functions evolved significantly through the adoption of internet standards led by the Internet Engineering Task Force (IETF) in the 1990s, which integrated data representation, encoding, and syntax negotiation directly into the TCP/IP protocol suite's application layer to address practical deployment needs in heterogeneous networks. This shift emphasized lightweight, interoperable mechanisms over the OSI's more rigid, layered abstractions, enabling broader adoption in emerging internet infrastructure. A key example is the update to the External Data Representation (XDR) standard in RFC 1832, published in 1995, which refined canonical data encoding for cross-platform compatibility in protocols like NFS and RPC, ensuring consistent integer, floating-point, and string representations without the overhead of full OSI session management.

In the late 1990s and 2000s, web technologies further advanced presentation concepts by prioritizing human- and machine-readable formats for distributed systems. The World Wide Web Consortium's (W3C) XML 1.0 Recommendation in 1998 introduced a flexible, extensible markup language for structured data interchange, serving as a de facto presentation syntax that abstracted platform-specific details and facilitated syntax translation in web services, much like OSI's abstract syntax notation but optimized for text-based transmission. Building on this, Roy Fielding's 2000 dissertation outlined the Representational State Transfer (REST) architectural style, which embedded presentation functions—such as resource representation in formats like XML or JSON—into HTTP-based APIs, enabling stateless, scalable data formatting and negotiation without dedicated layers. These developments influenced RESTful APIs, which by the mid-2000s became standard for web applications, handling data compression, character encoding (e.g., UTF-8), and media type negotiation implicitly at the application level.

The rise of Internet of Things (IoT) protocols in the 2010s highlighted the need for lightweight serialization to support resource-constrained devices, addressing OSI's limitations in efficiency for low-bandwidth, high-latency environments. Protocols like CoAP (RFC 7252, June 2014) incorporated binary formats such as the Concise Binary Object Representation (CBOR; RFC 8949, December 2020), a concise encoding scheme that provides compact, schema-optional data representation akin to JSON but with reduced overhead—up to 50% smaller payloads—for sensor data exchange in edge networks. This evolution filled gaps in OSI's verbose encoding rules, enabling real-time communication in IoT and embedded systems by 2025, where traditional protocols proved too cumbersome for battery-limited deployments.

OSI's structural rigidity also spurred de facto standards like Google's Protocol Buffers (Protobuf), released as open source in 2008, which offered efficient binary serialization with forward/backward compatibility, schema evolution, and compact wire formats, outperforming XML in speed (up to 10x faster parsing) and size for high-volume data in microservices. Protobuf addressed mobile and web-scale demands by integrating presentation functions—data typing, validation, and encoding—directly into application code, becoming widely adopted in gRPC and cloud-native architectures by the 2010s.

By 2023-2025, advancements incorporated quantum-safe encryption into presentation mechanisms, with IETF and NIST efforts extending ASN.1-based structures (e.g., in X.509 certificates and cryptographic message syntax) to support post-quantum algorithms like CRYSTALS-Kyber and CRYSTALS-Dilithium, ensuring resistance to harvest-now-decrypt-later attacks in protocols such as TLS 1.3. In August 2024, NIST published the first three finalized post-quantum standards: FIPS 203 (ML-KEM, based on CRYSTALS-Kyber), FIPS 204 (ML-DSA, based on CRYSTALS-Dilithium), and FIPS 205 (SLH-DSA, based on SPHINCS+). These integrations, detailed in ongoing drafts, reflect adaptive evolution to emerging cryptographic threats, maintaining backward compatibility while updating encoding rules for quantum-resistant keys and signatures.
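In the spirit of the CBOR comparison above, the following sketch contrasts JSON and CBOR encodings of a small sensor record. It assumes the third-party cbor2 package is installed (pip install cbor2), and the record's field names are invented for the example.

```python
# Comparing JSON and CBOR sizes for the same record (assumes cbor2).
import json
import cbor2

reading = {"sensor": "t1", "ts": 1700000000, "c": 21.5, "ok": True}

as_json = json.dumps(reading, separators=(",", ":")).encode("utf-8")
as_cbor = cbor2.dumps(reading)

print(len(as_json), "bytes as JSON")   # text keys, quoted values
print(len(as_cbor), "bytes as CBOR")   # binary encoding, typically smaller
assert cbor2.loads(as_cbor) == reading  # lossless round trip
```

The saving comes from CBOR's binary major types (small integers, booleans, and lengths fit in one byte), which is exactly the overhead reduction that matters on battery- and bandwidth-limited IoT links.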
