
End system

An end system, also known as an end host or end device, is a computing device at the edge of a network that serves as the source or destination of data communications, running application programs that generate or consume network traffic. In the OSI reference model, it is defined as a system containing application processes capable of communicating through all seven layers, making it equivalent to an Internet host. Examples include personal computers, smartphones, workstations, servers, and Internet of Things (IoT) devices such as smart appliances or sensors. End systems connect to the wider network via access technologies such as wired Ethernet, wireless LANs, or cellular links, and they exchange messages with other end systems using layered protocols such as those in the TCP/IP suite to support distributed applications. These devices rely on intermediate systems, such as routers and switches, to forward packets across the network infrastructure, but end systems themselves do not perform forwarding for transit traffic. In client-server models, end systems function as clients, requesting services like web pages or email, or as servers, providing resources to multiple clients simultaneously. Today, with the proliferation of mobile and IoT devices, end systems play a central role in enabling scalable, user-centric services across diverse environments.

Fundamentals

Definition

In computer networking, an end system refers to any device attached to a network that originates or ultimately receives messages, serving as the endpoint of communication rather than an intermediary that forwards traffic. This distinguishes end systems from devices like routers that perform relaying functions. According to standards such as ISO/IEC 7498-1, the OSI Reference Model defines open systems interconnection, in which end systems participate fully across the seven-layer architecture to enable communication between applications. The term "end system" is frequently used synonymously with "host" in Internet contexts, but it particularly highlights the device's position at the ends of end-to-end data paths. In this role, end systems implement the complete protocol stack, with primary activity at the application layer (Layer 7), where user-facing processes generate or consume data. Unlike intermediate systems, which relay communications without originating or terminating them, end systems focus on direct application-level interactions. End systems consist of hardware components, such as processors for computation, memory for storage, and network interfaces for connectivity, combined with software that executes applications and manages the protocol layers. These elements enable the device to function as a full participant in networked communications, adhering to established standards for interoperability.

Key Characteristics

End systems are distinguished by their operational autonomy, enabling them to function independently as sources and destinations of data in a network. Unlike intermediate devices that merely forward packets, end systems execute application-level processes initiated by users or software, managing their own communication sessions without depending on centralized control or mediation from core infrastructure. This self-sufficiency aligns with the end-to-end principle, where intelligence and decision-making reside at the edges rather than within the network fabric, allowing end systems to adapt dynamically to local conditions and initiate interactions proactively. A core trait of end systems is their role in hosting the resources essential for generating and consuming data. They run diverse applications, such as web browsers and email clients, which produce or process the messages that form the basis of network traffic. To support these functions, end systems allocate computational resources, including CPU cycles for processing, memory for buffering data, and storage for maintaining state, ensuring efficient encapsulation of application data into segments and packets. This resource management occurs locally, tailored to the system's capabilities, and underpins their ability to serve as both clients and servers in distributed environments. Uniqueness in addressing is another fundamental characteristic, providing a means for precise identification amid vast interconnectivity. Every end system is assigned a distinct network-layer address, such as an IP address, which enables routing and delivery of packets to the correct destination. In IPv4, these are 32-bit numerical identifiers, while IPv6 employs 128-bit addresses to scale for continued growth in connected entities; additional identifiers such as transport-layer port numbers further distinguish specific processes on a single system. This addressing scheme ensures reliable data exchange by mapping logical endpoints to physical locations dynamically.
End systems also exhibit wide variability in scale, spanning a spectrum from minimally resourced units to high-capacity platforms without compromising their core networking role. Resource-constrained examples, such as IoT sensors, operate with limited processing and power, yet participate in data flows through lightweight protocol implementations. In contrast, servers handle intensive workloads, supporting thousands of concurrent connections via robust hardware. This range fosters adaptability across network topologies, with global estimates indicating approximately 33 billion connected devices in operation as of 2025, spanning personal gadgets to industrial infrastructure.
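The addressing scheme described above can be illustrated with Python's standard `ipaddress` module: IPv4 and IPv6 addresses differ in bit width, and a network-layer address combined with a transport-layer port number identifies a specific process on an end system. This is a minimal sketch; the addresses come from the reserved documentation ranges, not real hosts.

```python
import ipaddress

# Network-layer addresses: IPv4 is 32 bits wide, IPv6 is 128 bits.
v4 = ipaddress.ip_address("192.0.2.17")    # RFC 5737 documentation range
v6 = ipaddress.ip_address("2001:db8::17")  # RFC 3849 documentation range
print(v4.max_prefixlen)  # 32
print(v6.max_prefixlen)  # 128

# A transport-layer endpoint is the pair (network address, port):
# the address selects the end system, the port selects the process on it.
endpoint = (str(v4), 443)  # e.g. a TLS-capable web server process
print(endpoint)
```

The same pairing is what the socket APIs on real end systems use to demultiplex incoming traffic to the correct application.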

Role in Networking

Data Origination and Termination

End systems serve as the primary points of data origination in computer networks, where applications running on these devices generate data streams based on user inputs or automated processes. This origination begins with the creation of messages at the application level, such as a web page request or an email composition, which are then encapsulated into structured packets suitable for transmission across networks. The encapsulation process involves wrapping the application data with headers that include addressing, sequencing, and control information, enabling the data to traverse diverse infrastructures without modification by intermediate devices. At the receiving end, end systems perform data termination by reversing this process through decapsulation, extracting the original application data from incoming packets and delivering it to the appropriate local application for processing. This ensures end-to-end integrity, as end systems alone are responsible for verifying the correctness and completeness of the data, without forwarding it further to other destinations. Unlike intermediate systems, which only relay packets, end systems host the full application logic required to interpret and act upon the received data, closing the communication loop. Central to these origination and termination roles is the end-to-end principle, which posits that critical communication functions, such as reliability, error correction, and flow control, should be implemented fully at the endpoints rather than relying on the network core. Under this principle, end systems directly manage reliability by employing mechanisms like acknowledgments and retransmissions between peers, ensuring that data arrives correctly despite potential loss or corruption in transit; for instance, if a packet is dropped, the originating end system detects this via timeouts and resends it, independent of intermediate actions.
Error correction is similarly handled end-to-end, with end systems using checksums or redundancy checks to validate data integrity, since partial fixes in the network cannot account for all failure modes, such as application crashes or storage errors at the receiver. Flow control is enforced to prevent overwhelming the receiving end system, adjusting transmission rates based on buffer availability and processing capacity, thereby avoiding overload that could propagate through the network. This approach, articulated in the foundational work on the end-to-end argument, emphasizes that only end systems possess the contextual knowledge of applications needed to implement these functions completely and correctly. By concentrating application-layer logic and end-to-end controls at end systems, this design enhances efficiency, particularly in reducing latency compared to models where intermediate devices perform complex processing. Intermediate systems operate on a store-and-forward basis, incurring delays at each hop as they receive, buffer, and retransmit entire packets, which can accumulate over multi-hop paths; in contrast, end systems minimize these overheads by keeping reliability and flow management at the edges, allowing the core to focus solely on basic delivery with lower per-hop processing times. This separation scales better for diverse applications and has been key to the Internet's robustness, as evidenced by its influence on protocols like TCP, where end-to-end mechanisms achieve reliable delivery over unreliable links without encumbering routers.
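The timeout-and-retransmit loop described above can be sketched in a few lines of Python. This is an illustration of the end-to-end recovery pattern, not a real transport protocol: the lossy channel is simulated with a seeded random generator, and the function names are invented for the example.

```python
import random

rng = random.Random(42)  # fixed seed so the sketch is reproducible

def unreliable_send(packet, loss_rate=0.7):
    """Simulated network core: delivers the packet or silently drops it.

    The network offers no recovery of its own, mirroring best-effort IP.
    """
    return packet if rng.random() > loss_rate else None

def send_reliably(packet, max_retries=10):
    """End-system reliability: retransmit until the data gets through.

    A None result stands in for a timeout with no acknowledgment; only
    the endpoints participate in detecting and repairing the loss.
    """
    for attempt in range(1, max_retries + 1):
        if unreliable_send(packet) is not None:  # receiver would ACK here
            return attempt
    raise TimeoutError("no acknowledgment after retransmissions")

attempts = send_reliably(b"GET /index.html")
print(f"delivered after {attempts} attempt(s)")
```

Real TCP stacks add sequence numbers, adaptive timeout estimation, and congestion control on top of this basic loop, but the recovery decision still rests entirely with the two end systems.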

Protocol Implementation

End systems implement a complete protocol stack to facilitate communication across networks, encompassing layers from the application down to the physical interface, in contrast to intermediate systems that typically handle only the lower layers. This full-stack implementation enables end systems to manage end-to-end responsibilities such as session management, addressing, and reliability assurance. For instance, at the transport layer, end systems execute protocols like TCP and UDP, where TCP incorporates congestion control mechanisms including slow start, congestion avoidance, fast retransmit, and fast recovery to prevent network overload and ensure efficient data flow. Similarly, at the network layer, end systems perform IP-related functions, including source address selection and fragmentation handling, as required for proper transmission. At the application layer, end systems take primary responsibility for negotiating sessions and formatting data according to specific application protocols, enabling diverse services like web browsing and email. In HTTP, for example, end systems act as clients or servers to establish connections, parse requests and responses, and manage resource representations, with the protocol defining methods such as GET and POST for interaction. For SMTP, end systems handle message submission and relay, including envelope formatting and command-response sequences, to ensure reliable delivery between mail transfer agents. These implementations often integrate with higher-level software libraries, allowing applications to invoke protocol-specific behaviors without direct socket-level interaction. Security features are predominantly managed at the end systems to protect data in transit, aligning with the end-to-end principle that places such functions at the communication endpoints for completeness and flexibility. TLS, a core security protocol, is implemented by end systems to perform handshakes that negotiate cipher suites, exchange keys, and authenticate peers using digital certificates.
Certificate handling involves validation against trusted roots, chain verification, and revocation checks (e.g., via OCSP), all executed by the end system's TLS library to mitigate risks like man-in-the-middle attacks. This endpoint-centric approach ensures that security is tailored to application needs, avoiding reliance on potentially untrusted intermediate nodes. Error handling in end systems follows end-to-end protocols, where mechanisms like checksums and retransmissions are implemented to detect and recover from corruption or loss without depending on the network core. TCP end systems compute a 16-bit one's complement checksum over the header, pseudo-header, and payload to identify transmission errors, discarding invalid segments. Upon detection of loss, via duplicate acknowledgments or timeouts, end systems initiate retransmissions, adjusting the congestion window to balance reliability and throughput. These processes underscore the end-to-end argument, emphasizing that robust error correction requires application-level awareness for optimal performance.
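The 16-bit one's-complement checksum mentioned above can be sketched as follows. This follows the general Internet checksum algorithm (RFC 1071), applied here to an arbitrary byte string for illustration rather than to a real TCP segment with its pseudo-header.

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, as used by IPv4, TCP, and UDP.

    The sender stores the complement of the running sum in the header;
    the receiver recomputes the sum over the received bytes and discards
    the segment if the result does not verify.
    """
    if len(data) % 2:                 # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

checksum = internet_checksum(b"end-to-end")
print(hex(checksum))
```

A useful property to observe: appending the computed checksum to the data and summing again yields zero, which is exactly the verification a receiving end system performs before accepting a segment.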

Types and Examples

Traditional Computing Devices

Traditional computing devices serve as foundational end systems in networked environments, enabling user interaction, data processing, and service delivery through robust hardware and software architectures. These devices, including personal computers, servers, and workstations, are characterized by their general-purpose design, high computational power, and comprehensive support for networking protocols, allowing them to originate and terminate data across local and wide-area networks. Personal computers, encompassing desktops and laptops, function as primary end systems for individual user interaction and productivity tasks. Desktops provide stationary, high-performance platforms often equipped with dedicated graphics and storage for demanding applications, while laptops offer portability with integrated networking capabilities such as Wi-Fi and Ethernet adapters. Operating systems like Windows and Linux play crucial roles in facilitating networking on these devices; Windows implements a full TCP/IP protocol stack that supports IPv4, IPv6, and associated transport protocols like TCP and UDP, enabling seamless connectivity to enterprise and Internet resources. Similarly, the Linux kernel incorporates a modular networking stack that handles packet processing, routing, and socket interfaces, widely used in both consumer and professional settings for its efficiency and customizability. Servers represent dedicated end systems optimized for hosting network services, such as web hosting, database management, and email, often deployed in data centers to handle multiple client requests. Their performance is enhanced by multi-core processors, which allow parallel handling of concurrent connections; for instance, modern multi-core architectures can manage thousands of simultaneous flows by distributing workloads across cores, improving throughput and reducing latency in high-traffic scenarios. This design ensures reliable performance in enterprise environments, where servers dominate for business-critical operations.
Workstations are high-end end systems tailored for specialized, resource-intensive tasks, such as computer-aided design (CAD), media production, and scientific simulations, frequently involving network-intensive applications like collaborative tools or cloud-integrated workflows. Equipped with powerful CPUs, ample RAM, and professional GPUs, workstations support protocols for high-bandwidth data transfer, enabling seamless integration with remote servers and storage resources. As of 2025, over 2 billion computers are in use worldwide, supplemented by more than 70 million physical servers, underscoring the continued dominance of traditional devices in enterprise networks despite the rise of specialized systems.

Modern Embedded Devices

Modern embedded devices exemplify the shift toward resource-constrained end systems that prioritize efficiency, mobility, and integration into pervasive networks. Smartphones and tablets function as primary mobile end systems, maintaining persistent connectivity for data origination and termination in dynamic environments. These devices incorporate battery-optimized protocols, such as power-saving modes in TCP/IP implementations and lightweight alternatives like QUIC for reduced latency and overhead, which minimize energy drain during always-on operations. As of 2025, adaptations for 5G Advanced, commercialized by major operators, enhance these capabilities through features like improved idle-mode signaling and AI-assisted beam management, while early 6G research focuses on higher frequencies and integrated sensing for even greater efficiency in battery-constrained scenarios. IoT devices, such as environmental sensors, smart thermostats, and wearable health monitors, operate as specialized end systems that generate and process data at the network edge. These systems leverage low-power wide-area network (LPWAN) technologies, including NB-IoT for cellular coverage, LoRaWAN for unlicensed long-range links, and LTE-M for moderate mobility, to enable intermittent, low-data-rate transmissions while extending battery life to years in some deployments. Edge computing further optimizes their role by offloading computation from resource-limited devices to nearby gateways, reducing reliance on cloud infrastructure and enabling real-time analytics for applications like monitoring in smart homes. Automotive embedded systems, including in-vehicle infotainment (IVI) units and Advanced Driver-Assistance Systems (ADAS), serve as interconnected end systems that handle multimedia streaming, sensor processing, and cooperative decision-making. IVI systems integrate high-bandwidth connectivity for navigation and entertainment, while ADAS employs cameras, radar, and lidar as data sources for autonomous features.
Vehicle-to-everything (V2X) communication standards, particularly cellular V2X (C-V2X) based on 3GPP specifications, facilitate direct exchanges between vehicles (V2V), infrastructure (V2I), and pedestrians (V2P), supporting low-latency safety alerts and traffic optimization with latencies under 1 ms in critical scenarios. The expansion of these embedded end systems is forecast to drive connected IoT devices to approximately 39 billion globally by 2030, up from 21.1 billion in 2025, fueled by cost reductions in sensors and connectivity modules. This scale amplifies challenges inherent to embedded constraints, such as limited computational resources that hinder robust encryption and timely updates, rendering devices susceptible to exploits like Mirai-style botnets and supply-chain attacks. Addressing these requires tailored approaches, including hardware root-of-trust mechanisms and over-the-air (OTA) update provisioning adapted for low-power environments.

Architectural Integration

In the OSI Model

In the OSI reference model, end systems, such as computers and servers, serve as the primary hosts that originate and terminate data communications, implementing the full seven-layer stack to enable complete end-to-end interactions. Unlike intermediate systems like routers, which primarily operate at the lower layers, end systems traverse all layers bidirectionally: data from an application descends through layers 7 to 1 for transmission and ascends from 1 to 7 upon reception at the destination. This layered approach, standardized by the International Organization for Standardization (ISO) in 1984, promotes interoperability by defining abstract interfaces and services that allow diverse hardware and software to communicate seamlessly across networks. At the Physical (Layer 1) and Data Link (Layer 2) layers, end systems manage the interface between digital data and the physical medium. The Physical Layer handles signaling, such as converting bits into electrical or optical signals via network interface cards (NICs) and cables, ensuring synchronization and bit-rate control. The Data Link Layer then frames these bits into structured units, incorporating error detection (e.g., cyclic redundancy checks) and addressing via Media Access Control (MAC) identifiers, as seen in Ethernet implementations where end systems use 48-bit MAC addresses to identify themselves on local networks. In the Network (Layer 3) and Transport (Layer 4) layers, end systems perform core functions for reliable data delivery across interconnected networks. The Network Layer implements logical addressing and routing, with end systems originating packets using IP addresses and, if necessary, performing fragmentation to fit the maximum transmission unit (MTU) constraints of the path, as specified in IPv4 where the source host divides oversized datagrams into fragments.
The Transport Layer builds on this by providing end-to-end services, such as segmenting data streams, managing flow control, and ensuring ordered delivery through mechanisms like TCP sequence numbers, which assign a unique identifier to each byte of data transmitted between endpoints. The upper layers, Session (Layer 5), Presentation (Layer 6), and Application (Layer 7), are exclusively implemented in end systems, focusing on the user-facing and data-processing aspects of communication. The Session Layer establishes, maintains, and terminates dialog connections, enabling synchronization and recovery points for multi-turn interactions, such as in remote procedure calls (RPC). The Presentation Layer handles data translation, encryption, and compression, including conversions like ASCII to EBCDIC, to ensure compatibility between disparate systems. Finally, the Application Layer interfaces directly with end-user software, supporting protocols like HTTP for web browsing or SMTP for email, where end systems manage resource access and data exchange. Overall, end systems integrate the OSI layers holistically to encapsulate application data into layered protocol data units (PDUs), progressing from application-layer messages to segments, packets, frames, and physical bits, for transmission, with the reverse process at the receiver ensuring transparent communication. This full-stack traversal standardizes interoperability, allowing end systems from different vendors to exchange data without proprietary dependencies, as evidenced by the model's adoption in protocols that span global networks.
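The descending encapsulation and ascending decapsulation described above can be sketched as successive header wrapping. The header contents here are toy placeholders for illustration, not real protocol formats.

```python
def encapsulate(app_data: bytes) -> bytes:
    """Wrap application data in (toy) transport, network, and link
    headers, mirroring the OSI descent from Layer 7 toward Layer 1."""
    segment = b"TCP|" + app_data         # Layer 4: transport header
    packet = b"IP|" + segment            # Layer 3: network header
    frame = b"ETH|" + packet + b"|FCS"   # Layer 2: framing plus trailer
    return frame

def decapsulate(frame: bytes) -> bytes:
    """Reverse the process at the receiving end system, stripping one
    layer's header at a time to recover the original message."""
    packet = frame.removeprefix(b"ETH|").removesuffix(b"|FCS")
    segment = packet.removeprefix(b"IP|")
    return segment.removeprefix(b"TCP|")

message = b"GET /index.html"
frame = encapsulate(message)
assert decapsulate(frame) == message   # transparent end-to-end delivery
```

Each intermediate system on the path would inspect at most the outer headers, while only the receiving end system unwraps all the way back to the application message.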

In the TCP/IP Model

In the TCP/IP model, end systems serve as the primary hosts that originate, process, and terminate data communications across the Internet protocol suite, handling responsibilities across the link, internet, transport, and application layers to enable reliable end-to-end connectivity. Unlike intermediate routers, end systems focus on application-specific interactions while managing lower-layer protocols for packet encapsulation, addressing, and error handling. This architecture, defined in foundational IETF standards, emphasizes simplicity and modularity, allowing end systems to adapt to diverse network environments. At the link and internet layers, end systems perform Address Resolution Protocol (ARP) resolution to map IP addresses to physical MAC addresses on local networks, broadcasting ARP requests and caching responses to facilitate direct communication within the same subnet. For addressing and packet processing, end systems generate and parse IPv4 headers, which include 20-byte fixed fields for version, header length, type of service, total length, identification, flags, fragment offset, time to live, protocol, header checksum, and source/destination addresses, verifying checksums and handling fragmentation if necessary. In IPv6 environments, end systems process simplified 40-byte headers with fields for version, traffic class, flow label, payload length, next header, hop limit, and 128-bit addresses, supporting extension headers for options such as routing or fragmentation while performing Neighbor Discovery Protocol (NDP) procedures instead of ARP for address resolution, enabling stateless autoconfiguration and multicast-based neighbor interactions. These processes ensure end systems can correctly route datagrams to gateways or peers, reassemble fragments, and discard invalid packets to maintain network integrity. The transport layer in end systems implements TCP and UDP through socket programming interfaces, where applications bind to ports for data exchange, enabling multiplexing of multiple connections over a single network interface.
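The socket interface mentioned above can be exercised directly on a single end system. The sketch below sends a UDP datagram over the loopback interface; the port is assigned by the operating system rather than hard-coded, and the message text is invented for the example.

```python
import socket

# Two processes on one end system, distinguished purely by port number:
# the OS demultiplexes incoming datagrams to the matching bound socket.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello, end system", ("127.0.0.1", port))

data, addr = receiver.recvfrom(1024)
print(data.decode(), "from", addr)
receiver.close()
sender.close()
```

The same bind/send/receive pattern underlies both connectionless UDP exchanges and, with `SOCK_STREAM` and `connect`/`accept`, connection-oriented TCP sessions.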
TCP provides reliable, connection-oriented service with APIs for establishing three-way handshakes, managing sequence numbers, acknowledgments, and retransmissions, while incorporating congestion avoidance algorithms such as Reno, which uses slow start to exponentially increase the congestion window until a threshold is reached, followed by linear growth and multiplicative decrease upon loss detection via duplicate acknowledgments or timeouts. UDP offers lightweight, connectionless delivery with minimal overhead, relying on 8-byte headers for port-based demultiplexing and optional checksums, suitable for latency-sensitive applications. Modern implementations often employ advanced algorithms like CUBIC, which uses a cubic congestion-window growth function to probe more aggressively in high-speed networks, reducing retransmission delays. At the application layer, end systems directly execute protocols for user-level services, such as issuing DNS queries over UDP for resolution of domain names to IP addresses in a stateless manner, where each query-response pair operates independently without maintaining session state. In contrast, protocols like FTP involve stateful operations over TCP, establishing separate control and data connections for file transfers and tracking session parameters such as transfer modes across multiple commands. These implementations allow end systems to encapsulate application data into transport segments, which are then passed down for internet- and link-layer processing, ensuring seamless integration across the stack. The TCP/IP model underpins the vast majority of global networking, with the protocol suite serving as the de facto standard for end systems in Internet-connected environments due to its proven scalability and interoperability. As of 2025, it supports more than 5 billion online users worldwide, forming the backbone of communication across wired and wireless infrastructures. Its adaptability to mobile environments stems from mechanisms like adaptive congestion control, which mitigates throughput collapse in variable-bandwidth mobile networks, and Mobile IP, enabling seamless handoffs without disrupting end system operations.
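The IPv4 header fields listed above can be built and unpacked with Python's standard `struct` module. The 20-byte header below is hand-assembled for illustration (documentation-range addresses, zero checksum), not captured traffic.

```python
import struct

IPV4_FMT = "!BBHHHBBH4s4s"  # network byte order, 20 bytes total

# Build a minimal IPv4 header: version 4, IHL 5 (no options), TTL 64,
# protocol 6 (TCP), checksum left zero for simplicity.
header = struct.pack(
    IPV4_FMT,
    (4 << 4) | 5,           # version (high nibble) + header length in words
    0,                      # type of service
    20,                     # total length (header only, no payload)
    0x1234,                 # identification
    0,                      # flags + fragment offset
    64,                     # time to live
    6,                      # protocol: 6 = TCP
    0,                      # header checksum (not computed here)
    bytes([192, 0, 2, 1]),  # source address      (192.0.2.1)
    bytes([192, 0, 2, 2]),  # destination address (192.0.2.2)
)

# An end system parsing the header recovers the same fields.
fields = struct.unpack(IPV4_FMT, header)
version = fields[0] >> 4
ihl_bytes = (fields[0] & 0x0F) * 4
print(version, ihl_bytes, fields[6])  # 4 20 6
```

Real stacks additionally verify the header checksum (using the one's-complement sum shown earlier) and consult the fragment fields before passing the payload up to the transport layer.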

Historical Context

Origins in Early Computer Networks

The concept of end systems originated in the pioneering efforts of early computer networks during the late 1960s and early 1970s, where they were primarily referred to as "hosts": computing devices responsible for initiating and receiving communications, distinct from the underlying communication subnetwork. The ARPANET, funded by the U.S. Advanced Research Projects Agency (ARPA) and operational by the end of 1969, exemplified this distinction by connecting four host computers at UCLA, the Stanford Research Institute, the University of California, Santa Barbara, and the University of Utah. These hosts served as the endpoints for data exchange, while specialized Interface Message Processors (IMPs), developed by Bolt, Beranek and Newman (BBN), acted as intermediaries to manage packet routing and transmission between them, ensuring reliable host-to-host connectivity without hosts directly handling low-level network operations. A key milestone in defining end system roles came with the development of the Network Control Program (NCP) in 1970, the ARPANET's first host-to-host protocol, which enabled hosts to establish connections, manage data transfer, and handle basic error control for applications such as remote login and file transfer. Finalized by the Network Working Group in December 1970 under Steve Crocker, NCP positioned hosts as the primary runners of applications, abstracting network complexities and allowing end systems to focus on user-level tasks. This marked the initial formalization of end systems as active participants in networked communication, laying groundwork for subsequent protocols. Parallel advancements in Europe further shaped the conceptualization of end systems through host-centric designs. The National Physical Laboratory (NPL) in the UK, under Donald Davies, implemented one of the first operational packet-switched networks in 1970–1971, connecting hosts directly to a simple switch fabric where end systems managed packet assembly, error detection, and retransmission, emphasizing decentralized control over centralized network intelligence.
Similarly, France's CYCLADES project, initiated in 1971 by Louis Pouzin and operational with initial host connections by 1973, adopted a datagram-based architecture that placed full responsibility for reliable end-to-end communication on the hosts themselves, isolating them from underlying transport details and promoting transparency in network design. During the 1970s, the International Organization for Standardization (ISO) advanced these ideas through the development of the Open Systems Interconnection (OSI) reference model, which explicitly defined end systems as open systems capable of interoperable communication across diverse networks. This work, begun in 1977 by ISO Technical Committee 97, culminated in preliminary recommendations by 1978 that outlined end systems' roles in layers 4 through 7, with formal publication as ISO 7498-1 and ITU-T X.200 in 1984. These European and international efforts influenced foundational principles, notably articulated in the 1981 paper "End-to-End Arguments in System Design" by Jerome H. Saltzer, David P. Reed, and David D. Clark, which argued for placing communication functions like reliability and security at the end systems to enhance system robustness and adaptability in packet-switched environments.

Evolution and Modern Relevance

The transition of the ARPANET to TCP/IP on January 1, 1983, marked a pivotal moment in the evolution of end systems, standardizing internetworking protocols and elevating the role of host computers as primary network endpoints. This shift from the Network Control Program to TCP/IP enabled more robust and scalable connectivity, allowing end systems to function as autonomous participants in a distributed network rather than mere terminals. By June 1983, all ARPANET hosts had adopted TCP/IP, fostering the growth of interconnected end systems beyond military research applications. Subsequent standardization efforts, such as RFC 1122 in 1989, further refined end system capabilities by specifying requirements for Internet host communication layers, ensuring interoperability and reliability across diverse hosts. The 1990s and 2000s introduced wireless technologies that transformed end systems from stationary devices to mobile ones, adapting them for dynamic environments. The IEEE 802.11 standard, ratified in 1997, established the foundation for Wi-Fi, enabling wireless local area networks that allowed end systems like laptops and early smartphones to connect without physical cabling. This was complemented by the rollout of 3G networks under the IMT-2000 specifications, with the first commercial launch by NTT DoCoMo on October 1, 2001, providing mobile end systems with data speeds up to 2 Mbps for browsing and multimedia applications. These advancements necessitated adaptations in end systems to handle mobility, roaming, and power efficiency, broadening their deployment in consumer and enterprise settings. Entering the 2010s, the proliferation of the Internet of Things (IoT) and mobile computing drove massive scaling of end systems, with IPv6 adoption addressing the address exhaustion of IPv4 and enabling billions of connected devices. IPv6's expanded address space, finalized in RFC 8200 in 2017 but gaining traction throughout the decade, supported seamless integration of low-power sensors and smart devices into global networks.
By 2025, 5G networks combined with edge computing have enhanced endpoint intelligence, allowing end systems to process data locally for real-time decision-making in applications like autonomous vehicles and industrial automation, reducing latency to under 1 millisecond in ultra-reliable scenarios. Looking ahead, end systems are evolving toward quantum-resistant protocols and greater autonomy to address emerging threats and complexities. In 2025, NIST selected the HQC algorithm as a fifth post-quantum encryption standard, providing a backup defense for end systems against quantum attacks on public-key cryptography. Concurrently, AI agents are enabling greater autonomy in end systems, with 2025 projections indicating that intelligent orchestration will handle end-to-end processes independently, shifting human oversight to strategic roles in networked ecosystems. These trends, guided by NIST's post-quantum migration guidance, ensure end systems remain secure and adaptive in a hyperconnected future.

References

  1. [1]
    1.3.3 End devices - Internet of everything - The Open University
    End devices are either the source or destination of data transmitted over the network. In order to distinguish one end device from another, each end device on a ...
  2. [2]
    RFC 1208 - A Glossary of Networking Terms - IETF Datatracker
    end system: An OSI system which contains application processes capable of communicating through all seven layers of OSI protocols. Equivalent to Internet ...
  3. [3]
    None
    Summary of each segment:
  4. [4]
    [PDF] Computer Networking - A Top Down Approach (8th Edition)
    Chapter 1 Computer Networks and the Internet. 1. 1.1 What Is the Internet? 2. 1.1.1 A Nuts-and-Bolts Description. 2. 1.1.2 A Services Description.
  5. [5]
    RFC 1122 - Requirements for Internet Hosts - Communication Layers
    This is one RFC of a pair that defines and discusses the requirements for Internet host software. This RFC covers the communications protocol layers.
  6. [6]
    Number of connected IoT devices growing 14% to 21.1 billion globally
    Oct 28, 2025 · Number of connected IoT devices growing 14% to 21.1 billion globally in 2025. Estimated to reach 39 billion in 2030, a CAGR of 13.2% [...]Connected IoT device market... · Wi-Fi IoT · Bluetooth IoT · Cellular IoT
  7. [7]
    [PDF] END-TO-END ARGUMENTS IN SYSTEM DESIGN - MIT
    The principle, called the end-to-end argument, suggests that functions placed at low levels of a system may be redundant or of little value when compared with ...
  8. [8]
    RFC 5681: TCP Congestion Control
    This document defines TCP's four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery.
  9. [9]
    RFC 9110: HTTP Semantics
    This document describes the overall architecture of HTTP, establishes common terminology, and defines aspects of the protocol that are shared by all versions.Info page · RFC 9112 · RFC 9111 · RFC 3864: Registration...
  10. [10]
    RFC 8446 - The Transport Layer Security (TLS) Protocol Version 1.3
    This document specifies version 1.3 of the Transport Layer Security (TLS) protocol. TLS allows client/server applications to communicate over the Internet.
  11. [11]
    RFC 793 - Transmission Control Protocol (TCP) - IETF
    Because segments may be lost due to errors (checksum test failure), or network congestion, TCP uses retransmission (after a timeout) to ensure delivery of every ...
  12. [12]
    5G Technology and Milestones Timeline - Qualcomm
    In 2025, major operators are expected to begin the commercialization of 5G Advanced. The technology will see significant advancements through 3GPP Releases ...
  13. [13]
    [PDF] How network adaptations for 5G devices will lead to superior battery ...
    Longer battery life can be achieved by reducing the energy consumption of the 5G mobile device, which is equally critical to sustainability goals. 3GPP has ...
  14. [14]
    6G - Follow the journey to the next generation networks - Ericsson
    6G will build on 5G, evolving from today's networks towards the needs of 2030 and beyond. The first wave of 6G will advance technologies and use cases already ...
  15. [15]
  16. [16]
    LPWANs for IoT Connectivity: A Comprehensive Guide - Zipit Wireless
    Sep 19, 2025 · LPWANs deliver long-range, low-power IoT connectivity: LPWANs are purpose-built to connect IoT devices over long distances using minimal power, ...
  17. [17]
    Edge Computing for IoT - IBM
    Edge computing for IoT is the practice of processing and analyzing data closer to the devices that collect it rather than transporting it to a data center ...
  18. [18]
    Understanding Vehicle-to-everything (V2X) Communication
    An overview of vehicle-to-everything (V2X) communication, what it is, how it works, competing standards and the challenge for design engineers.
  19. [19]
    Everything You Need to Know About In-Vehicle Infotainment Systems
    Aug 17, 2018 · Infotainment systems are in-built car computers that combine a wide range of functions – from digital radios to in-built reversing cameras.
  20. [20]
    Vehicle-to-everything (V2X) in the autonomous vehicles domain
    Advanced communication systems, such as V2X, enable real-time information sharing among road users, maximizing safety for all, including VRUs. Development of ...
  21. [21]
    Market Guide for Embedded Security for IoT Connectivity - Gartner
    Jul 8, 2025 · With rising cybersecurity attacks, cyber-physical systems (CPS) protection is evolving, making embedded IoT security essential. Align embedded ...
  22. [22]
    IoT Security Risks: Stats and Trends to Know in 2025 - JumpCloud
    Jan 10, 2025 · Key IoT Security Risks Backed by Data · Device Vulnerabilities · Botnets & DDoS Attacks · Data Privacy Breaches · Industrial IoT (IIoT) Risks.
  23. [23]
    Understanding OSI - Chapter 2 - Packetizer
    Specifying the operation of nodes (end and intermediate systems) to provide the end-to-end network service over various topologies of links. Following this ...
  24. [24]
    What is the OSI Model? The 7 Layers Explained - BMC Software
    Jul 31, 2024 · The OSI Model is a 7-layer framework for network architecture that doesn't have to be complicated. We break it all down for you here.
  25. [25]
    What is the OSI Model? | Cloudflare
    The Open Systems Interconnection (OSI) model is a conceptual model that represents how network communications work. Learn more about the 7-layer OSI model.
  26. [26]
    RFC 791: Internet Protocol
    The internet protocol is designed for use in interconnected systems of packet-switched computer communication networks. Such a system has been called a catenet.
  27. [27]
  28. [28]
  29. [29]
    RFC 8200: Internet Protocol, Version 6 (IPv6) Specification
  30. [30]
    RFC 5681 - TCP Congestion Control - IETF Datatracker
    This document defines TCP's four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery.
  31. [31]
    What is TCP/IP Model? Computer Networking Guide
    May 27, 2025 · Global Internet Standard – As of 2024, over 5.5 billion people worldwide rely on TCP/IP-based networks for communication, making it the backbone ...
  32. [32]
    Exploring the TCP/IP Protocol Suite: Architecture, Dominance, and ...
    Aug 1, 2024 · This article delves into the architecture and functionalities of the TCP/IP protocol suite, emphasizing its dominance in data communication.Missing: global rate percentage
  33. [33]
    A Brief History of the Internet - Internet Society
    Thus, by the end of 1969, four host computers were connected together into the initial ARPANET, and the budding Internet was off the ground. Even at this early ...
  34. [34]
    Internet History of 1970s
    In December, the Network Working Group (NWG) led by Steve Crocker finishes the initial ARPANET Host-to-Host protocol, called the Network Control Protocol (NCP).
  35. [35]
    How the ARPANET Protocols Worked - Two-Bit History
    Mar 8, 2021 · The IMP-Host Protocol was specified by BBN in a lengthy document called BBN Report 1822. The document was revised many times as the ARPANET ...
  36. [36]
    NCP, Network Control Program | LivingInternet
    The Network Control Protocol (NCP) was the first standard networking protocol on the ARPANET. NCP was finalized and deployed in December 1970.
  37. [37]
    RFC 6529 - Host/Host Protocol for the ARPA Network
    ... protocol. The first official document defining the protocol was issued by Crocker on August 3, 1970 as "Host-Host Protocol Document No. 1" (see citation in ...
  38. [38]
    NPL Network and Donald Davies 1966 - 1971
    When the network first worked in 1971, the simple host-to-host protocol proved inadequate and had to be completely re-written in order to speed up packet ...
  39. [39]
    CYCLADES Network and Louis Pouzin 1971 - 1972
    CYCLADES was a pure datagram network where hosts sent datagrams directly, providing end-to-end error correction, and isolated from PTT complications.
  40. [40]
    History of the OSI Reference Model - The TCP/IP Guide!
    In the late 1970s, two projects began independently, with the same goal: to define a unifying standard for the architecture of networking systems. One was ...
  41. [41]
    [PDF] ISO Reference Model for Open Systems Interconnection (OSI)
    Oct 11, 2018 · The ISO defines a system as a set of one or more computers and associated software, peripherals, terminals, human operators, physical processes, ...
  42. [42]
    How TCP/IP Changed Everything: A History of IP Addresses Part 2
    Jan 17, 2018 · On New Year's Day, 1983, ARPANET switched from their NCP protocol to TCP/IP, which was considered more flexible and more powerful.
  43. [43]
    ARPANET Adopts TCP/IP - IEEE Communications Society
    ARPANET architects decide to replace the existing Network Control Program (NCP) with TCP/IP on all ARPANET hosts. By June 1983, every host was running TCP/IP.
  44. [44]
    IEEE 802.11-1997 - IEEE SA
    Nov 18, 1997 · This standard contains three physical layer units: two radio units, both operating in the 2400-2500 MHz band, and one baseband infrared unit.
  45. [45]
    A Concise History of The 3G Technology - Intraway
    Jun 23, 2020 · The first commercial launch of the technology happened on October 1, 2001 – also by NTT Docomo in Japan. However, the technology saw a slow pace ...
  46. [46]
    Are Small-and Medium-sized Businesses Ready for IPv6?
    Jul 27, 2021 · It means that working from anywhere will likely be an IPv6-first experience and SMBs will therefore enable it. IoT adoption is the fourth and ...
  47. [47]
    15 Edge Computing Trends to Watch in 2025 and Beyond
    Jan 8, 2025 · Edge devices would ingest and process endpoint-generated data to first determine what action is needed and then they would direct the autonomous ...
  48. [48]
    NIST Selects HQC as Fifth Algorithm for Post-Quantum Encryption
    Mar 11, 2025 · The new algorithm, called HQC, will serve as a backup defense in case quantum computers are someday able to crack ML-KEM.
  49. [49]
    [PDF] AI: A Declaration of Autonomy - Accenture
    When AI expands exponentially, systems are upended. Technology Vision 2025 | AI: A Declaration of Autonomy. Organizations are entering a generation-defining ...
  50. [50]
    New Draft White Paper | PQC Migration: Mappings to Risk ...
    Sep 18, 2025 · Organizations should start planning now to migrate to PQC, also known as quantum-resistant cryptography, to protect their high value, long-lived ...Missing: protocols | Show results with:protocols<|separator|>