Ethernet frame
An Ethernet frame is the fundamental protocol data unit (PDU) employed at the data link layer in Ethernet networks, encapsulating upper-layer protocol data for transmission over physical media in local area networks (LANs). Defined primarily by the IEEE 802.3 standard, it structures data into a sequence of fields including synchronization bits, addressing information, payload, and error-checking mechanisms to ensure reliable delivery.[1] The frame format supports variable payload sizes from 46 to 1500 bytes (extendable in modern variants), enabling efficient multiplexing of protocols like IP over Ethernet while maintaining backward compatibility with early designs.[2]

Two primary variants exist: the Ethernet II (DIX) format, which uses a 2-byte Type field to identify the encapsulated protocol, and the IEEE 802.3 format, which employs a 2-byte Length field followed by a Logical Link Control (LLC) header for protocol demultiplexing.[2] In the IEEE 802.3 LLC structure, the frame begins with a 7-byte preamble for clock synchronization and a 1-byte Start Frame Delimiter (SFD), followed by 6-byte Destination MAC Address (DMAC) and Source MAC Address (SMAC) fields, the 2-byte Length, a 3-byte LLC header (DSAP, SSAP, and Control), a variable data field of 43 to 1497 bytes, and a 4-byte Frame Check Sequence (FCS) using a cyclic redundancy check (CRC) for integrity verification.[3] An interframe gap of at least 12 bytes separates consecutive frames to allow receiver processing time.[3]

Introduced in the 1970s by Xerox, Intel, and Digital Equipment Corporation (DIX) and standardized by the IEEE in 1983, the Ethernet frame has evolved to support full-duplex operation without CSMA/CD in switched networks and speeds from 10 Mbps to 800 Gbps as of 2024, underpinning the majority of wired LAN infrastructure worldwide.[1][4] Its design emphasizes simplicity and scalability, with 48-bit MAC addresses facilitating unicast, multicast, and broadcast addressing, while optional features like VLAN tagging (IEEE 802.1Q) insert a 4-byte header for network segmentation without altering the core frame. Modern extensions, such as jumbo frames up to 9000 bytes, address high-throughput applications like data centers, but the base format remains integral to interoperability across Ethernet-compliant devices.[2]

Overview
Definition and Purpose
An Ethernet frame is the protocol data unit (PDU) at the data link layer (Layer 2 of the OSI model) in Ethernet networks, encapsulating higher-layer packets, such as those from the network layer (e.g., IP), for transmission over the physical medium, as specified in the IEEE 802.3 standard.[5] This encapsulation adds the control information needed to address, synchronize, and verify the data during transit across shared or dedicated links in local area networks (LANs).[6]

The primary purpose of the Ethernet frame is to facilitate reliable communication between devices by incorporating mechanisms for MAC addressing to identify senders and receivers, error detection through a cyclic redundancy check (CRC) in the frame check sequence, and bit-level synchronization to align receiver clocks with the incoming signal.[5] It supports unicast (point-to-point), multicast, and broadcast delivery modes, enabling efficient data distribution in multi-device environments. In half-duplex operation over shared media, the frame structure underpins carrier sense multiple access with collision detection (CSMA/CD) to manage contention and retransmit corrupted frames, while full-duplex modes leverage it for direct, collision-free transmission with optional flow control via MAC control PAUSE frames (IEEE 802.3x).[6]

Key characteristics of the Ethernet frame include its variable length, ranging from a minimum of 64 bytes to a maximum of 1518 bytes in the standard configuration (including headers and the frame check sequence but excluding the 8 bytes of preamble and start frame delimiter).[7] Frames are transmitted as a continuous stream of binary bits, with an octet-aligned structure that ensures compatibility across diverse physical layer implementations, such as twisted-pair copper or fiber optic cabling.[5] In relation to the OSI model, the Ethernet frame primarily resides at the data link layer for logical framing and addressing, but it interfaces with the physical layer for actual bit transmission, including synchronization elements that span both layers.[6]

Historical Development
The Ethernet frame originated in 1973 at Xerox's Palo Alto Research Center (PARC), where engineer Robert Metcalfe drafted a memo on May 22 outlining a local area network concept using coaxial cable as a shared medium, operating at 2.94 Mbps. This prototype, developed with David Boggs and others, evolved into a functional 3 Mbps network by 1974, interconnecting Xerox Alto computers and demonstrating packet switching with collision detection.[8][9]

In 1980, the Digital Equipment Corporation (DEC), Intel, and Xerox (DIX) consortium published the Ethernet 1.0 specification, known as the "Blue Book," which defined the initial frame format for 10 Mbps operation over thick coaxial cable (10BASE5), including fields for addresses, data, and checksum. This DIX Ethernet I frame laid the groundwork for commercial adoption. By 1982, the DIX group released Version 2.0, introducing the Ethernet II frame with an EtherType field to better support protocols like IP; this became the dominant format for non-IEEE implementations.[10]

The IEEE formalized Ethernet in 1983 through the 802.3 working group, publishing the first draft standard that year and the full standard in 1985, which largely replaced the proprietary DIX specification while adopting a similar frame structure but interpreting the length/type field differently. Subsequent revisions maintained the core frame logic for backward compatibility: IEEE 802.3u in 1995 introduced Fast Ethernet at 100 Mbps over twisted-pair cabling (100BASE-TX), continuing the migration from thick coaxial "Thicknet" to more flexible unshielded twisted pair (UTP) media that had begun with 10BASE-T in 1990. IEEE 802.3ab in 1999 defined Gigabit Ethernet (1000BASE-T) over Category 5 UTP, further enabling scalable local area networks without altering the frame's essential components.
By 2017, IEEE 802.3bs specified 200 Gbps and 400 Gbps operation, with extensions reaching 800 Gbps by 2024 under IEEE 802.3df, supporting modern data centers while preserving compatibility with legacy frames to ensure seamless evolution.[11][12]

Layered Structure
Physical Layer Components
The physical layer (Layer 1 of the OSI model) in Ethernet, as defined by IEEE 802.3, handles the transmission of raw bits over physical media such as twisted-pair copper cables or optical fiber, specifying electrical signals, signaling rates, connector types, and topologies to ensure reliable bit-level delivery.[13] This layer adds essential signaling around the data link layer frame, converting logical bits into physical signals suitable for the medium without altering the frame's content.[14]

In Ethernet terminology, the "packet" encompasses the complete transmitted signal at the physical layer, which includes the core frame plus additions like the preamble for receiver synchronization, the start frame delimiter, and the interframe gap that maintains channel idle states between transmissions.[15] The preamble consists of alternating 1s and 0s to allow clock recovery, while the interframe gap enforces a minimum 96-bit-time pause (equivalent to 9.6 μs at 10 Mbps) to prevent overlap and give receiving hardware time to recover.[16]

The transmission process involves encoding the frame's bits according to the medium and speed. For instance, 10BASE-T Ethernet uses Manchester encoding, where each bit is represented by a transition (high-to-low for 0, low-to-high for 1) to embed clocking and ensure self-synchronizing signals over unshielded twisted pair.[17] Fast Ethernet variants, such as 100BASE-TX, apply 4B/5B block encoding, converting groups of 4 data bits into 5-bit symbols to guarantee sufficient transitions for clock recovery and error detection, raising the symbol rate to 125 MBd.[18] Gigabit Ethernet's 1000BASE-T employs four-dimensional PAM-5 encoding, using five signal levels (-2, -1, 0, +1, +2) across four twisted pairs to achieve full-duplex 1 Gbps transmission with reduced crosstalk.[19] Prior to and following each frame, the physical medium remains idle, with no signal or a continuous idle pattern transmitted to delineate frame boundaries.[20] IEEE 802.3
physical layer specifications maintain backward compatibility across Ethernet speeds from 10 Mbps to 800 Gbps and beyond by standardizing core signaling principles while adapting encoding for higher rates and media types, such as shifting from binary to multilevel modulation in faster variants.[21][4] For example, higher-speed implementations like 400 Gbps Ethernet incorporate advanced encodings such as 64B/66B with PAM4 to handle increased data density over fiber or copper.[22] This evolution ensures interoperability while optimizing for power, distance, and error rates specific to each physical medium.[23]

Data Link Layer Components
The data link layer, corresponding to Layer 2 of the OSI model, facilitates node-to-node data delivery across Ethernet networks by segmenting higher-layer data into frames, ensuring reliable transmission through error detection mechanisms such as the frame check sequence (FCS), and managing access to the shared medium via protocols like Carrier Sense Multiple Access with Collision Detection (CSMA/CD) in legacy half-duplex configurations.[24][5] This layer operates above the physical layer, abstracting the details of bit transmission to focus on logical framing and control functions that enable devices to communicate within a local area network.[25]

Ethernet frame boundaries are logically defined by the combination of a header (containing destination and source addresses along with control fields), a variable-length payload, and a trailer (primarily the FCS for integrity verification), excluding physical layer elements such as the preamble, start frame delimiter, and interframe gap that handle signal synchronization and timing.[26] This structure ensures that the frame represents a complete protocol data unit at the data link level, with minimum and maximum sizes enforced to support collision detection and efficient transmission in shared environments.[5]

Encapsulation at the data link layer involves wrapping network layer protocol data units, such as Internet Protocol (IP) packets, within the Ethernet frame by adding MAC sublayer headers that include 48-bit source and destination MAC addresses for local delivery, along with other control fields to delineate the payload boundaries.[27] This process allows higher-layer protocols to remain agnostic to the underlying medium while enabling direct hardware addressing for efficient LAN communication.[28]

The data link layer in Ethernet is subdivided into the Media Access Control (MAC) sublayer, which primarily handles frame formation, addressing, and medium access, and the Logical Link Control (LLC) sublayer, which, when utilized, provides additional multiplexing capabilities to support multiple upper-layer protocols over the same MAC connection.[24] The MAC sublayer, defined in IEEE 802.3, encapsulates the data and enforces access rules, whereas the LLC sublayer per IEEE 802.2 adds protocol identification and flow control if needed for complex environments.[29][25]

Field Breakdown
Preamble and Start Frame Delimiter
The preamble and start frame delimiter (SFD) constitute the leading physical layer signaling sequence in an Ethernet transmission, enabling receiver synchronization prior to the arrival of the frame's data link layer content. The preamble comprises 7 bytes (56 bits) consisting of the repeating binary pattern 10101010, which generates an alternating sequence of 1s and 0s. This fixed pattern allows the physical layer signaling (PLS) circuitry in the receiver to attain steady-state synchronization with the timing of the incoming signal.[30]

Following the preamble is the SFD, a 1-byte field fixed at the binary value 10101011. The SFD denotes the conclusion of the preamble and the commencement of the MAC frame proper, with its pattern differing from the preamble only in its final bit, providing a clear transition marker.[30]

These fields facilitate clock recovery and bit-level alignment in receivers operating asynchronously with respect to the transmitter, ensuring reliable detection of subsequent frame bits; they carry no data content. The preamble's regular oscillation aids phase-locked loop (PLL) circuits in locking onto the signal's frequency and phase, while the SFD alerts the receiver to shift to frame parsing mode.[15] The preamble and SFD are appended and transmitted by the physical layer entity at the medium's bit rate, independent of the MAC frame structure.
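Because Ethernet transmits each octet least significant bit first, the patterns 10101010 and 10101011 (written in transmission order) correspond to the bytes 0x55 and 0xD5 as they appear in raw captures that include physical-layer octets. A minimal sketch (the helper name is illustrative):

```python
# Sketch of the synchronization bytes the PHY prepends to each frame.
# Bit-order subtlety: the pattern 10101010 is written in transmission
# order (first bit sent on the left); since Ethernet sends the least
# significant bit of each octet first, that pattern is the byte 0x55,
# and the SFD pattern 10101011 is the byte 0xD5.
PREAMBLE = bytes([0x55]) * 7   # 7 octets of alternating 1s and 0s
SFD = bytes([0xD5])            # start frame delimiter

def sync_header() -> bytes:
    """Return the 8 synchronization octets (not part of the MAC frame)."""
    return PREAMBLE + SFD

assert len(sync_header()) == 8
assert sync_header()[0] & 0x01 == 1   # first transmitted bit is a 1
```

These 8 octets are stripped by the receiving PHY, which is why software above the driver level normally never sees them.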
Upon reception, the physical layer uses the preamble and SFD for synchronization and then discards them, passing only the ensuing MAC frame to the data link layer; consequently, they are excluded from the logical frame definition in IEEE 802.3.[30] In Gigabit Ethernet implementations (IEEE 802.3 Clauses 35–40), the standard 7-byte preamble and 1-byte SFD are retained, but physical layer variants incorporate specialized handling, such as preamble shrinkage in 8B/10B encoded PHYs or integration with carrier extension symbols for half-duplex short frames, to maintain collision domain integrity without altering the core synchronization role.[31]

Address Fields
The address fields in an Ethernet frame comprise the Destination Address (DA) and Source Address (SA), positioned immediately after the Start Frame Delimiter as the initial components of the MAC header. Each field is 6 octets (48 bits) in length, for a total of 12 octets.[32]

These MAC addresses operate within a 48-bit flat address space, structured to facilitate unique identification across networks. The least significant bit (bit 0) of the first octet serves as the Individual/Group (I/G) bit, set to 0 for individual (unicast) addresses targeting a single recipient or 1 for group (multicast) addresses targeting multiple recipients. The second least significant bit (bit 1) of the first octet is the Universal/Local (U/L) bit, set to 0 for universally administered addresses assigned by the IEEE or 1 for locally administered addresses configured by network administrators. Apart from these two control bits, the address comprises the 24-bit Organizationally Unique Identifier (OUI), allocated by the IEEE to manufacturers and occupying the first three octets (whose first octet also carries the I/G and U/L bits), followed by a 24-bit extension unique to the specific network interface card (NIC).[33]

In usage, the DA determines frame delivery: unicast addresses have an even first octet (I/G bit = 0, e.g., a 00 prefix), multicast addresses have an odd first octet (I/G bit = 1, e.g., a 01 prefix), and the broadcast address is the all-ones pattern FF:FF:FF:FF:FF:FF (a special multicast case). The SA always carries a unicast address (I/G bit = 0) identifying the transmitting NIC, ensuring traceability without group addressing for origins.[33][34] Receivers prioritize DA examination upon frame arrival; hardware-level filtering discards frames whose DA does not match the local unicast address, the broadcast address, or a subscribed multicast group, reducing CPU load and bus contention on the host system.[35]

Length/EtherType Field
The Length/EtherType field is a 2-byte (16-bit) component of the Ethernet frame header, positioned immediately after the destination and source address fields. The field serves a dual purpose depending on its numerical value: if the value is less than 0x0600 (1536 decimal), it functions as the Length field of an IEEE 802.3 frame, specifying the number of bytes in the payload (including any padding and upper-layer headers like LLC). Conversely, if the value is 0x0600 or greater, it acts as the EtherType field, identifying the protocol encapsulated in the payload, such as Internet Protocol version 4 (IPv4) or Address Resolution Protocol (ARP).[36]

Historically, the Ethernet II specification, developed by Xerox, Digital Equipment Corporation, and Intel in 1982, used this field exclusively as an EtherType to indicate the upper-layer protocol directly, without a separate length indicator. In contrast, the IEEE 802.3 standard, ratified in 1983, redefined the field as a mandatory Length indicator for the payload size, relegating protocol identification to the subsequent IEEE 802.2 Logical Link Control (LLC) header when necessary. This shift aimed to standardize frame sizing and collision detection in shared media environments, though Ethernet II frames remain widely compatible thanks to the value-based distinction.[37][36]

Receivers interpret the field by numerical comparison, treating it as a big-endian 16-bit integer; values below 0x0600 trigger length-based parsing, requiring the frame's data field to be exactly that many bytes long, while higher values invoke EtherType processing without regard to payload length. For IEEE 802.3 frames where the indicated length is less than 46 bytes, the sender must pad the payload with zeros to reach this minimum, keeping the overall frame size at least 64 bytes (excluding preamble and start frame delimiter) to support reliable collision detection on the medium.
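The value-based dispatch described above, together with the I/G-bit test on the destination address, can be sketched in a few lines of Python; the helper name and the small EtherType table are illustrative, not part of any standard API:

```python
import struct

# A few well-known EtherType assignments (see the IEEE registry).
ETHERTYPE_NAMES = {0x0800: "IPv4", 0x0806: "ARP", 0x86DD: "IPv6"}

def classify_frame(frame: bytes) -> dict:
    """Interpret the first 14 bytes of a MAC frame (no preamble/SFD).

    Illustrative only; a real stack would also handle 802.1Q tags,
    LLC/SNAP payloads, and malformed input.
    """
    dmac = frame[0:6]
    # Length/EtherType is a big-endian (network byte order) 16-bit integer.
    (type_len,) = struct.unpack("!H", frame[12:14])
    info = {
        # I/G bit: least significant bit of the first DA octet
        # (0 = unicast, 1 = multicast/group).
        "multicast": bool(dmac[0] & 0x01),
        "broadcast": dmac == b"\xff" * 6,
    }
    if type_len < 0x0600:            # IEEE 802.3: payload length in bytes
        info["kind"] = "802.3 length"
        info["payload_len"] = type_len
    else:                            # Ethernet II: protocol identifier
        info["kind"] = "EtherType"
        info["protocol"] = ETHERTYPE_NAMES.get(type_len, hex(type_len))
    return info

# A broadcast ARP header: DA = ff:ff:ff:ff:ff:ff, EtherType 0x0806.
hdr = bytes.fromhex("ffffffffffff0011223344550806")
info = classify_frame(hdr + b"\x00" * 46)
assert info["broadcast"] and info["kind"] == "EtherType"
```

The single `< 0x0600` comparison is the entire mechanism that lets Ethernet II and IEEE 802.3 frames coexist on the same segment.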
This padding is transparent to upper layers, as the actual data length is conveyed within the protocol headers.[36]

EtherType values are assigned and managed by the IEEE Registration Authority to prevent conflicts and ensure global interoperability, with assignments ranging from 0x0600 to 0xFFFF and requiring justification for new protocols. Common examples include 0x0800 for IPv4, 0x86DD for IPv6, 0x0806 for ARP, and 0x8864 for Point-to-Point Protocol over Ethernet (PPPoE) session frames. These values enable efficient protocol demultiplexing at the receiver, directing the payload to the appropriate network stack.[38][36]

Payload
The payload, also known as the MAC client data field in IEEE 802.3, constitutes the variable-length portion of the Ethernet frame that encapsulates data from higher-layer protocols.[32] This field carries the actual user information or protocol packets without undergoing any Ethernet-specific processing at the MAC layer, serving primarily as a transparent container for network-layer datagrams such as Internet Protocol (IP) packets.[39] For instance, in standard IP encapsulation over Ethernet, the entire IP datagram fits within this field, enabling seamless transmission across the local area network.[39]

The standard size of the payload ranges from 46 to 1500 octets, ensuring compatibility with the overall frame constraints defined in IEEE 802.3.[32] The exact length is indicated by the preceding Length field in IEEE 802.3 formats; in Ethernet II frames (where the field holds an EtherType exceeding 1500), it is instead inferred from the total received frame size.[40] This octet-aligned structure maintains byte-level boundaries, facilitating efficient processing by network interfaces.
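The size rules above reduce to simple arithmetic; the following sketch (constant and function names are illustrative) computes the padded payload and total frame size:

```python
# Standard Ethernet frame size arithmetic (sizes in octets).
MIN_PAYLOAD, MAX_PAYLOAD = 46, 1500
HEADER, FCS = 14, 4   # DA (6) + SA (6) + Length/EtherType (2); frame check sequence

def padded_payload_len(data_len: int) -> int:
    """Payload length after the mandatory zero-padding to 46 octets."""
    if data_len > MAX_PAYLOAD:
        raise ValueError("exceeds the standard 1500-byte MTU")
    return max(data_len, MIN_PAYLOAD)

def frame_len(data_len: int) -> int:
    """Total frame size, excluding the preamble and SFD."""
    return HEADER + padded_payload_len(data_len) + FCS

assert frame_len(1) == 64        # shortest legal frame
assert frame_len(1500) == 1518   # longest standard frame
```

The two asserted values are exactly the 64-to-1518-byte envelope quoted throughout this article.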
The 1500-byte maximum payload size, corresponding to a maximum transmission unit (MTU) of 1500 bytes, was originally selected to prevent excessive fragmentation in early Ethernet implementations while accommodating typical packet sizes without compromising performance.[41]

If the upper-layer data is shorter than 46 octets, the payload must be padded with zero octets to reach this minimum length, resulting in a total frame size of at least 64 octets (including headers and frame check sequence).[42] This padding requirement stems from the need to ensure reliable collision detection in the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) mechanism used in half-duplex Ethernet networks, where the minimum frame duration must cover the round-trip propagation time across the maximum network diameter, equivalent to a slot time of 512 bit times (64 bytes) at 10 Mbps.[41] The receiving station uses the Length field to distinguish actual data from padding, discarding the extraneous zeros during protocol processing.[42]

Frame Check Sequence
The Frame Check Sequence (FCS) is a 4-byte (32-bit) cyclic redundancy check (CRC-32) field appended to the end of an Ethernet frame, serving as the primary error-detection mechanism in the IEEE 802.3 standard.[30] It protects the integrity of the frame's header and payload fields against transmission errors caused by noise or interference on the physical medium.[43] The FCS enables the receiving device to verify whether the frame has been corrupted, though it provides detection only, without error correction capabilities.[30]

The CRC-32 value is computed from the generator polynomial G(x) = x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1, conventionally written as 0x04C11DB7 (a representation that omits the leading x^32 term).[30][43] To compute the FCS, the sender complements the first 32 bits of the protected frame content, treats the resulting bit sequence as the coefficients of a polynomial M(x), multiplies it by x^32 (a left shift of 32 bits), and performs polynomial division modulo 2 by G(x) to obtain the 32-bit remainder R(x). The final CRC is the bitwise complement of R(x).[30]

This calculation covers only the destination address (DA), source address (SA), Length/EtherType field, and payload (including any padding); it excludes the preamble, Start Frame Delimiter (SFD), and the FCS itself.[30] The resulting 32-bit sequence is appended immediately after the payload and, unlike the other fields of the frame (whose octets are transmitted least significant bit first), is transmitted most significant bit first, beginning with the x^31 coefficient.[43][44]

At the receiver, the FCS is verified by recalculating the CRC over the same protected fields of the incoming frame (DA, SA, Length/EtherType, and payload), appending the received FCS, and performing the polynomial division by G(x).
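The computation just described (initial complement, division by G(x), final complement, with bit-reflected processing) uses the same CRC-32 parameters as zlib, so Python's `binascii.crc32` can illustrate it; the helper names are illustrative, and real NICs perform this in hardware:

```python
import binascii
import struct

def append_fcs(frame_wo_fcs: bytes) -> bytes:
    """Append the 32-bit FCS computed over DA..payload (no preamble/SFD)."""
    fcs = binascii.crc32(frame_wo_fcs) & 0xFFFFFFFF
    # Packed little-endian, these four octets match the FCS byte order
    # commonly seen at the end of packet captures.
    return frame_wo_fcs + struct.pack("<I", fcs)

def fcs_ok(frame_with_fcs: bytes) -> bool:
    """Receiver-side check: running the CRC over data plus appended FCS
    leaves a fixed residue (0x2144DF1C for this post-inverted CRC-32)
    if and only if no detectable error occurred."""
    return binascii.crc32(frame_with_fcs) & 0xFFFFFFFF == 0x2144DF1C

f = append_fcs(b"\x00" * 60)          # a minimal 60-byte frame body
assert fcs_ok(f)                      # intact frame passes
assert not fcs_ok(f[:-1] + bytes([f[-1] ^ 0x01]))  # single-bit error caught
```

The constant-residue check mirrors the "remainder is zero" test described in the text: appending a correct CRC always drives the division to the same known remainder.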
If the remainder is zero, the frame is deemed error-free; otherwise, a mismatch indicates corruption, and the frame is discarded without further processing.[30][43] This process reliably detects all single- and double-bit errors, all odd numbers of bit errors, and all burst errors of length 32 bits or less, while providing greater than 99.999% detection for longer burst errors up to the maximum frame size.[30][45] The high detection rate stems from the polynomial's properties, which ensure a Hamming distance of at least 4 for Ethernet-length frames.[45]

Implementation of the FCS is typically hardware-accelerated within network interface controllers (NICs), using linear feedback shift registers (LFSRs) for efficient serial or parallel computation during frame transmission and reception.[43] This offloads the CRC operations from the host processor, enabling real-time error checking at wire speed without impacting overall system performance.[43] The detection-only nature of the FCS means that higher-layer protocols, such as TCP, must handle any necessary retransmissions in response to dropped frames.[30]

Frame Formats
Ethernet II
The Ethernet II frame format, also known as DIX Ethernet after its developers Digital Equipment Corporation, Intel Corporation, and Xerox Corporation, was specified in November 1982 as version 2.0 of the Ethernet data link layer protocol.[46] Although it predates and differs from the IEEE 802.3 standard, Ethernet II remains prevalent for carrying Internet Protocol (IP) traffic in local area networks due to its straightforward design and broad adoption in TCP/IP environments.[37]

The frame structure begins with a 6-byte destination address (DA) field, followed by a 6-byte source address (SA) field, both containing MAC addresses for unicast, multicast, or broadcast delivery. A 2-byte EtherType field then specifies the encapsulated upper-layer protocol, such as IPv4 or ARP. The payload follows, ranging from 46 to 1500 bytes (with padding added if shorter, to meet the minimum size), and the frame concludes with a 4-byte frame check sequence (FCS) for error detection via cyclic redundancy check (CRC). Unlike length-based formats, Ethernet II carries no dedicated length field; the end of the frame is delineated by the physical layer, and the payload length is inferred from the total received frame size.[46]

This format's primary advantage lies in its direct protocol identification through the EtherType field, eliminating the need for an intervening Logical Link Control (LLC) header and reducing processing overhead at receivers. The EtherType registry, administered by the IEEE Registration Authority, supports extensibility by assigning unique 16-bit values to protocols, including 0x0800 for IPv4 and 0x0806 for ARP, fostering interoperability across diverse network implementations.[47][37] In contemporary TCP/IP networks, Ethernet II dominates due to its efficiency and native support for IP without additional encapsulation, making it the preferred choice for most Ethernet deployments.
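Putting the pieces together, a complete Ethernet II frame can be assembled in a few lines. This is an illustrative sketch (the function name is assumed), with padding and the FCS done in software even though real NICs typically handle both in hardware:

```python
import binascii
import struct

def ethernet_ii_frame(dst: bytes, src: bytes,
                      ethertype: int, payload: bytes) -> bytes:
    """Build a complete Ethernet II frame (preamble/SFD not included)."""
    assert len(dst) == 6 and len(src) == 6
    payload = payload.ljust(46, b"\x00")           # mandatory zero padding
    header = dst + src + struct.pack("!H", ethertype)  # big-endian EtherType
    body = header + payload
    fcs = struct.pack("<I", binascii.crc32(body) & 0xFFFFFFFF)
    return body + fcs

# A broadcast ARP-style frame with a short payload:
frame = ethernet_ii_frame(b"\xff" * 6, bytes(6), 0x0806, b"who-has?")
assert len(frame) == 64            # padded up to the minimum frame size
assert frame[12:14] == b"\x08\x06" # EtherType survives in network byte order
```

Note that the EtherType (like all multi-byte Ethernet header fields) is serialized big-endian, while the FCS bytes land in the order captures show them.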
Its compatibility with IEEE 802.3 infrastructure is ensured by assigning EtherType values only from 0x0600 (1536) upward, above the maximum valid payload length of 1500 (0x05DC), which distinguishes them from length indicators in 802.3 frames.[48][37]

IEEE 802.3 Raw
The IEEE 802.3 raw frame format, as defined in the 1983 IEEE Std 802.3, provides the foundational structure for Ethernet data link layer framing without an IEEE 802.2 Logical Link Control (LLC) header.[49] This format includes a 6-byte destination address (DA), a 6-byte source address (SA), a 2-byte length field specifying the number of bytes in the subsequent payload, a payload ranging from 46 to 1500 bytes, and a 4-byte frame check sequence (FCS) for error detection.[37] The total frame size, excluding the 8-byte preamble and start frame delimiter, spans 64 to 1518 bytes to ensure reliable transmission over early Ethernet media.[37]

In legacy applications, particularly Novell NetWare implementations from the 1980s and early 1990s, the raw format served as the default encapsulation for IPX/SPX protocol traffic.[49] Here, the payload directly encapsulates the IPX header and data without an intervening LLC header, relying on the IPX header's initial two bytes, set to 0xFFFF, as an implicit identifier for Novell packets.[49] This approach assumed a single-protocol environment, enabling proprietary handling by Novell drivers in Open Data-link Interface (ODI) configurations for NetWare 2.x through 4.x systems.[49]

The raw format's primary limitation stems from the absence of a standardized protocol identifier, restricting it to environments supporting only one protocol stack, such as IPX, without support for multiplexing diverse upper-layer protocols.[16][50] Without LLC integration, it also lacked features like IPX checksumming and compatibility with multivendor diagnostics or bridging at the MAC layer, leading to interoperability issues in mixed environments.[49] These constraints prompted a shift away from the raw format in the early 1990s, with Novell defaulting to IEEE 802.2 encapsulation starting April 15, 1993, to accommodate growing network diversity.[49] Compatibility with Ethernet II frames is maintained through the length field's semantic distinction: values of 1500
(0x05DC) or less denote payload length in IEEE 802.3, while values of 0x0600 (1536) or greater signal an EtherType in Ethernet II, enabling both formats to coexist on shared segments without conflict (values from 1501 to 1535 are left undefined).[37]

IEEE 802.2 LLC and SNAP
The IEEE 802.2 Logical Link Control (LLC) sublayer provides a standardized interface between the MAC sublayer and higher-layer protocols in IEEE 802 networks, including Ethernet (IEEE 802.3), enabling multiplexing of multiple network protocols over a single physical link.[25]

The LLC header, placed at the beginning of the payload field of an 802.3 frame, consists of three fields: the Destination Service Access Point (DSAP), Source Service Access Point (SSAP), and Control, totaling 3 octets in its basic form (4 when the extended 2-octet Control field is used).[51] The DSAP (1 octet) identifies the destination protocol or service, with its least significant bit indicating individual (0) or group (1) addressing; for example, a value of 0xAA (hexadecimal) signals the use of the Subnetwork Access Protocol (SNAP) extension.[25] The SSAP (1 octet) similarly identifies the source protocol or service, with its least significant bit denoting command (0) or response (1) frames.[51] The Control field (1 or 2 octets, depending on the LLC type) specifies the type of protocol data unit (PDU) and manages functions such as sequencing, acknowledgments, and flow control.[52]

LLC supports three modes of operation, each defined by the structure and interpretation of the Control field.
Type 1 operation is connectionless and unacknowledged, using an unnumbered Control format (e.g., 0x03 for Unnumbered Information or UI PDUs), suitable for simple, datagram-style transfers without flow control or error recovery.[51] Type 2 is connection-oriented, employing Information (I), Supervisory (S), and Unnumbered (U) formats in the Control field with sequence numbers and poll/final (P/F) bits for reliable, sequenced delivery and error handling.[25] Type 3 provides acknowledged connectionless service, using specialized Acknowledged Connectionless (AC) PDUs in the Control field for optional confirmations without establishing a persistent connection.[51] In IEEE 802.3 frames, LLC is essential for multi-protocol environments, such as encapsulating Internet Protocol (IP) datagrams, where the header enables higher-layer protocols to access the MAC service.[52]

The Subnetwork Access Protocol (SNAP) extends LLC to restore compatibility with the EtherType field from Ethernet II frames within the length-constrained IEEE 802.3 format.[53] SNAP is invoked when DSAP and SSAP are both set to 0xAA and the Control field is 0x03 (indicating Type 1 unacknowledged operation), adding a 5-octet extension immediately following the LLC header.[51] This extension comprises a 3-octet Organizationally Unique Identifier (OUI), typically 00-00-00 for standard protocols, followed by a 2-octet EtherType value (e.g., 0x0800 for IP or 0x0806 for ARP).[53] By embedding the EtherType in this manner, SNAP allows IEEE 802.3 networks to carry protocols originally defined for Ethernet II, ensuring interoperability in mixed environments.[37] The combined LLC and SNAP headers total 8 octets, and their use is mandatory for transmitting IP and ARP over IEEE 802 networks in multi-protocol setups to maintain protocol identification without relying on proprietary DSAP/SSAP assignments.[53]

| Field | Size (octets) | Description | Example for SNAP (IP) |
|---|---|---|---|
| DSAP | 1 | Destination service access point; indicates protocol | 0xAA |
| SSAP | 1 | Source service access point; indicates frame type | 0xAA |
| Control | 1 | PDU type and control (Type 1: unacknowledged) | 0x03 |
| OUI (SNAP) | 3 | Organizationally unique identifier | 00-00-00 |
| EtherType (SNAP) | 2 | Protocol identifier (e.g., IP, ARP) | 0x0800 |
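The 8-octet layout in the table can be exercised directly; the following sketch (helper names assumed) builds and parses the LLC/SNAP prefix that would occupy the start of an 802.3 payload:

```python
import struct

# DSAP = 0xAA, SSAP = 0xAA, Control = 0x03 (Type 1 UI) select SNAP.
LLC_SNAP = bytes([0xAA, 0xAA, 0x03])

def snap_header(ethertype: int, oui: bytes = b"\x00\x00\x00") -> bytes:
    """Build the 8-octet LLC/SNAP header carrying an EtherType."""
    return LLC_SNAP + oui + struct.pack("!H", ethertype)

def parse_snap(payload: bytes):
    """Return (oui, ethertype, data) if the payload is LLC/SNAP, else None."""
    if payload[:3] != LLC_SNAP:
        return None
    oui = payload[3:6]
    (ethertype,) = struct.unpack("!H", payload[6:8])
    return oui, ethertype, payload[8:]

hdr = snap_header(0x0800)                # IP over 802.3 / 802.2 / SNAP
assert hdr.hex() == "aaaa030000000800"   # AA AA 03 00-00-00 08-00
```

This is the same AA-AA-03 prefix RFC-style IP-over-802 encapsulation mandates; note the 8 octets of overhead it adds compared with a plain Ethernet II frame.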