
Control plane

The control plane is a fundamental component of computer networking that encompasses the processes and protocols responsible for determining how data packets are routed and forwarded across a network, including the establishment of routing tables and network topology. It operates by exchanging control messages between network devices, such as routers and switches, to make decisions on traffic paths, policy enforcement, and resource allocation, ensuring efficient and secure data transmission. Distinct from the data plane, which executes the high-speed forwarding of actual packets based on the control plane's instructions, the control plane functions as the "brain" of the network, enabling dynamic adaptation to changes in topology or traffic demands.

In traditional network architectures, the control plane is distributed across individual devices, where protocols like BGP (Border Gateway Protocol) for inter-domain routing, OSPF (Open Shortest Path First) for intra-domain path calculation, and IS-IS (Intermediate System to Intermediate System) for link-state information exchange populate forwarding tables to guide data flow. These mechanisms not only compute optimal routes but also handle tasks such as traffic prioritization, load balancing, and topology maintenance to preserve network resiliency and performance. The control plane's role extends to security, where it processes signaling for features like routing protocol authentication and control plane policing, making it a critical target for protection against attacks such as distributed denial-of-service (DDoS).

A key evolution in control plane design is seen in software-defined networking (SDN), which decouples the control plane from the data plane to centralize management through software controllers, allowing programmable configuration via APIs for greater scalability and agility in large-scale environments like data centers and cloud infrastructures. This separation enhances flexibility, as the control plane can now oversee hybrid physical-virtual networks, enforcing policies uniformly and responding to events in real time without hardware dependencies. Additionally, the management plane complements the control plane by providing oversight for administrative tasks, such as configuration, monitoring, and fault detection, ensuring holistic network governance.

The control plane's importance is underscored by its impact on network efficiency, with modern implementations supporting low-latency operations and high availability; in cloud platforms, for instance, it performs the provisioning work behind operations such as instance launches, handling massive request volumes daily. As networks grow more complex with 5G and IoT integration, advancements in control plane technologies continue to prioritize programmability and automation, with the global SDN market, driven by control plane innovations, valued at approximately $35 billion as of 2024 and projected to exceed $50 billion by 2028, reflecting its pivotal role in future-proofing connectivity.

Core Concepts

Definition and Functions

The control plane refers to the collection of processes within a network device, such as a router, that make decisions on how data packets should be routed and processed across the network. These processes operate at a higher level to manage overall behavior, including the determination of paths based on network topology and policies. Unlike the data plane, which executes the actual forwarding of packets, the control plane provides the intelligence that guides these operations by maintaining state information and updating forwarding rules. Key functions of the control plane encompass topology discovery, where it identifies network structure through exchange of information between devices; policy enforcement, such as applying quality of service (QoS) rules to prioritize traffic; and resource allocation to optimize bandwidth use and device capabilities. It populates routing tables with entries derived from learned network paths and handles protocol signaling, for example, by sending periodic hello messages in protocols like OSPF to detect neighbors and maintain adjacency. Additionally, the control plane manages error handling for issues like protocol mismatches or unreachable destinations.

Historically, early implementations of control plane functions appeared in Unix systems during the late 1970s and early 1980s, where routing decisions were managed by software daemons like the routed process introduced in 4.2BSD, which used variants of the Xerox NS Routing Information Protocol to update kernel routing tables dynamically. By the 1980s, as networks scaled with the growth of the ARPANET and the early Internet, these functions evolved into dedicated processes on specialized router hardware, separating decision-making logic from basic packet handling to improve efficiency and reliability. This foundational separation laid the groundwork for modern network architectures, where control plane processes influence packet paths without directly participating in high-speed forwarding.
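To make the hello mechanism concrete, the following minimal Python sketch models hello-based neighbor maintenance. It is illustrative only: the NeighborTable class and its layout are invented for this example, and real OSPF adjacency formation involves a fuller state machine (Down, Init, 2-Way, and so on).

```python
import time

HELLO_INTERVAL = 10   # OSPF default hello timer, seconds
DEAD_INTERVAL = 40    # OSPF default router dead interval, seconds

class NeighborTable:
    """Tracks adjacency state refreshed by periodic hello messages."""

    def __init__(self):
        self.last_seen = {}  # neighbor router ID -> timestamp of last hello

    def receive_hello(self, router_id: str) -> None:
        # Any hello creates or refreshes the adjacency.
        self.last_seen[router_id] = time.monotonic()

    def expire_dead_neighbors(self) -> list:
        # Neighbors silent for longer than the dead interval are torn down;
        # the control plane would then withdraw routes learned through them.
        now = time.monotonic()
        dead = [rid for rid, t in self.last_seen.items()
                if now - t > DEAD_INTERVAL]
        for rid in dead:
            del self.last_seen[rid]
        return dead
```

In a running daemon, receive_hello would be driven by incoming protocol packets and expire_dead_neighbors by a periodic timer, with route withdrawal triggered for every neighbor the expiry pass returns.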

Control Plane vs. Data Plane

In networking architecture, the control plane and data plane represent a fundamental separation of responsibilities designed to enhance efficiency and performance. The control plane manages deliberative, slow-path processes, such as computing routes, maintaining routing tables, and configuring policies using protocols like BGP and OSPF. In contrast, the data plane executes high-speed, fast-path forwarding operations, including packet lookup, encapsulation, and transmission based on pre-established rules. This architectural division allows the control plane to focus on complex decision-making without impeding the data plane's real-time handling of traffic volumes that can reach terabits per second in modern routers.

The separation yields significant benefits, including improved modularity by enabling independent optimization of each plane: control logic can evolve without altering forwarding hardware, while data plane components leverage specialized ASICs for low-latency processing. Security is bolstered through isolation, as the control plane can be shielded from direct exposure to data traffic, mitigating risks like DDoS attacks that target control protocols. Additionally, this separation supports seamless upgradability; control plane software updates or failures can occur without disrupting ongoing data flows, ensuring high availability in carrier-grade networks.

Interaction between the planes typically involves the control plane programming the data plane via standardized APIs or table installations, where changes like route computations trigger updates to forwarding rules. For instance, in software-defined networking (SDN) environments, a centralized controller pushes match-action policies to distributed switches, allowing dynamic reconfiguration. This model decouples control logic from hardware, facilitating automated orchestration.

Historically, early routers integrated control and data functions on shared processors, limiting scalability as traffic volumes grew. The evolution toward logical separation accelerated with the advent of SDN in the early 2010s, where protocols like OpenFlow enabled centralized control over commodity hardware, and modern hardware-accelerated forwarding engines in high-end routers further reinforced this divide for programmable, resilient networks.
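The match-action interaction can be sketched as follows. This is a simplified model, not an actual OpenFlow API: the FlowRule and Switch structures are invented here, and real tables support wildcard and prefix matches rather than the exact-field matching shown. The point is that the control plane installs rules, and the data plane then applies them without further consultation except on a table miss.

```python
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict        # exact-match fields, e.g. {"ip_dst": "10.0.0.5"}
    actions: list      # e.g. ["set_vlan:10", "output:2"]
    priority: int = 0

@dataclass
class Switch:
    """Data plane: applies pre-installed rules, makes no routing decisions."""
    table: list = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        # Called by the control plane (controller) to program forwarding.
        self.table.append(rule)
        self.table.sort(key=lambda r: -r.priority)

    def forward(self, packet: dict) -> list:
        for rule in self.table:  # highest priority first
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["punt_to_controller"]  # table miss: ask the control plane

# The control plane programs the data plane, then steps out of the fast path:
sw = Switch()
sw.install(FlowRule(match={"ip_dst": "10.0.0.5"}, actions=["output:2"], priority=10))
print(sw.forward({"ip_dst": "10.0.0.5"}))   # ['output:2']
print(sw.forward({"ip_dst": "10.0.0.99"}))  # ['punt_to_controller']
```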

Unicast Routing Operations

Sources of Routing Information

In unicast IP routing, the control plane populates the routing table with information derived from multiple sources, each contributing candidate routes that are evaluated based on trustworthiness and specificity. These sources include directly connected networks, manually configured static routes, and routes learned through dynamic protocols, ensuring comprehensive coverage of reachable destinations while allowing for prioritized selection.

Local interface information provides the highest-priority routes, representing networks directly attached to the router's interfaces. When an interface is configured with an IP address, such as assigning 192.0.2.1/24 to an Ethernet port, the router automatically installs a connected route for the corresponding subnet (e.g., 192.0.2.0/24) in the routing table, with an administrative distance of 0. These routes are considered the most reliable because they reflect direct connectivity and require no intermediary hops.

Static routes offer manually defined paths to specific destinations, configured by administrators to override or supplement dynamic learning. Each static route specifies a destination prefix, next-hop IP address, or outgoing interface, and carries a default administrative distance of 1 on Cisco devices, making it preferable to most dynamic routes unless explicitly adjusted. For instance, a static route might direct traffic for 203.0.113.0/24 via next-hop 192.0.2.254, providing deterministic control in scenarios like default gateways or backup paths.

Dynamic routing protocols enable automated discovery and exchange of information between routers, adapting to network changes without manual intervention. These protocols fall into categories such as distance-vector (e.g., RIP, defined in RFC 2453, which uses hop count as a metric), link-state (e.g., OSPF, per RFC 2328, which computes shortest paths based on link costs derived from bandwidth), and path-vector (e.g., BGP, per RFC 4271, which selects paths using attributes like AS-path length for inter-domain routing). Routes from these protocols arrive with associated metrics and administrative distances, such as 120 for RIP and 110 for OSPF, allowing the router to compare and select optimal paths within the same protocol domain.

To resolve conflicts among routes from different sources to the same destination, routers apply two key selection criteria: administrative distance for source trustworthiness and longest prefix match for specificity. Administrative distance determines the preferred source first, with lower values winning (e.g., a connected route at 0 overrides a static route at 1, which in turn overrides OSPF at 110); if distances are equal, the protocol's internal metric (e.g., OSPF's cumulative cost) breaks the tie. Subsequently, among installed routes, the longest prefix match selects the most specific entry, as mandated by forwarding standards, ensuring traffic for 192.0.2.64/26 uses a /26 route over a broader /24 covering the same range. This hierarchical process maintains accuracy and efficiency.
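A compact way to see the two criteria working together is the following Python sketch, which is illustrative only (the Route record and helper names are invented): administrative distance and metric pick one winner per prefix, and the longest prefix match then decides which installed prefix a packet follows.

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str      # e.g. "192.0.2.0/24"
    next_hop: str
    ad: int          # administrative distance: 0 connected, 1 static, 110 OSPF...
    metric: int      # protocol-internal metric, compared only at equal AD

def best_per_prefix(candidates):
    """Install one winner per destination prefix: lowest AD, then lowest metric."""
    table = {}
    for r in candidates:
        cur = table.get(r.prefix)
        if cur is None or (r.ad, r.metric) < (cur.ad, cur.metric):
            table[r.prefix] = r
    return table

def lookup(table, dst: str):
    """Forwarding lookup: longest prefix match among installed routes."""
    addr = ipaddress.ip_address(dst)
    matches = [r for r in table.values()
               if addr in ipaddress.ip_network(r.prefix)]
    return max(matches, default=None,
               key=lambda r: ipaddress.ip_network(r.prefix).prefixlen)

routes = [
    Route("192.0.2.0/24", "0.0.0.0", ad=0, metric=0),          # connected
    Route("192.0.2.64/26", "192.0.2.254", ad=110, metric=20),  # OSPF, more specific
]
rib = best_per_prefix(routes)
print(lookup(rib, "192.0.2.70").prefix)  # 192.0.2.64/26 wins by longest match
```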

Building the Unicast Routing Table

The unicast routing table is constructed by aggregating routing information from multiple sources, including directly connected interfaces, statically configured routes, and dynamically learned routes from protocols such as OSPF and BGP. This process involves selecting the best route for each destination prefix based on administrative distance, which prioritizes routes from more reliable sources (e.g., connected interfaces over dynamic protocols), followed by the lowest metric within the same preference level. For instance, in OSPF, routes are preferred based on the lowest cumulative cost, where cost is inversely proportional to interface bandwidth and configurable per interface.

The resulting table consists of entries for each destination, typically including the network prefix (with subnet mask or prefix length), next-hop IP address, associated metric or cost, and the originating protocol or source. Entries support route summarization to reduce table size, such as aggregating multiple /24 prefixes into a single /16 when contiguous addressing allows, enabling efficient CIDR-based aggregation without loss of specificity for longest-match forwarding.

Conflicts between overlapping routes are resolved through a hierarchical selection process: first by administrative distance (e.g., static routes often assigned lower values than dynamic ones), then by prefix length, and finally by metric comparison. In BGP, for example, when metrics are equal, the route with the shortest AS-path length is selected as a tie-breaker to favor more direct inter-domain paths. For equal-cost paths, equal-cost multipath (ECMP) allows load-sharing across multiple next-hops, distributing traffic to improve utilization, with implementations commonly supporting up to 8 such paths. Table updates occur either periodically, as in RIP's scheduled advertisements every 30 seconds, or event-driven, such as recomputation following a link failure detected by Bidirectional Forwarding Detection (BFD), which provides sub-second fault detection to trigger rapid route recalculation.
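Flow-based ECMP load-sharing can be sketched as follows: hashing the flow's 5-tuple keeps every packet of one flow on the same path, in the spirit of the hash-based methods analyzed in RFC 2992. The function and field names below are illustrative, not a vendor implementation.

```python
import hashlib

def ecmp_next_hop(next_hops, src_ip, dst_ip, src_port, dst_port, proto=6):
    """Pick one of several equal-cost next-hops by hashing the flow 5-tuple,
    so all packets of a flow take the same path (no reordering)."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return next_hops[digest % len(next_hops)]

paths = ["10.1.1.1", "10.1.2.1", "10.1.3.1"]  # equal-cost next-hops
print(ecmp_next_hop(paths, "192.0.2.10", "203.0.113.5", 40000, 443))
```

Hardware implementations use cheaper hash functions than SHA-256, but the property that matters is the same: a deterministic, roughly uniform mapping from flows to next-hops.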

Installing Unicast Routes

The installation process for routes involves selecting the optimal path from the routing information base (RIB) based on criteria such as administrative distance and longest prefix match (LPM), where the route with the most specific prefix length is chosen to ensure precise forwarding decisions. Once selected, the route is translated and installed into the forwarding information base (FIB) or equivalent hardware structures like ternary content-addressable memory (TCAM) for high-speed lookups in the data plane. This installation often requires recursion to resolve indirect next-hops; for instance, if a route specifies a next-hop IP address that is not directly connected, the control plane performs a recursive lookup in the RIB to find the outbound interface and resolved next-hop, repeating as needed until a directly connected route is reached.

Optimization techniques during installation aim to streamline the FIB for efficiency and reduced resource consumption. Redundant entries are pruned through route aggregation, where multiple more-specific routes are consolidated into a single summary route, suppressing detailed paths that are covered by the aggregate to minimize table size while maintaining reachability. Floating static routes serve as backups by configuring them with a higher administrative distance than primary dynamic routes, ensuring they are only installed and used if the preferred route becomes invalid, such as during link failures.

Error handling ensures stability by promptly invalidating affected routes upon detecting failures. For example, when an interface goes down, all static and dynamic routes dependent on that interface are removed from the routing table and FIB to prevent blackholing of traffic. In dynamic protocols like OSPF, graceful restart mitigates disruptions during control plane restarts by allowing the router to inform neighbors via grace LSAs, enabling them to retain forwarding entries for a configurable period (up to 1800 seconds) without purging routes, thus preserving data plane continuity until the restarting router reconverges.

Vendor implementations often incorporate policy mechanisms for customized installation. In Cisco devices, route maps enable policy-based routing (PBR) during the installation and application of unicast routes, allowing administrators to match traffic criteria (e.g., source address or packet size) and set specific next-hops or interfaces, overriding standard selections for tailored forwarding behavior.
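The recursion described above can be illustrated with a short Python sketch. The RIB layout and function names are invented for the example, but the loop mirrors the process: follow next-hops through longest-prefix lookups until a connected route supplies the egress interface, or give up and leave the route out of the FIB.

```python
import ipaddress

# Hypothetical RIB: prefix -> {"next_hop": IP or None, "interface": name or None}
RIB = {
    "203.0.113.0/24": {"next_hop": "192.0.2.254", "interface": None},  # BGP route
    "192.0.2.0/24":   {"next_hop": None, "interface": "eth0"},         # connected
}

def lpm(rib, addr):
    """Longest-prefix match over RIB keys; returns the covering prefix or None."""
    a, best = ipaddress.ip_address(addr), None
    for p in rib:
        net = ipaddress.ip_network(p)
        if a in net and (best is None
                         or net.prefixlen > ipaddress.ip_network(best).prefixlen):
            best = p
    return best

def resolve(rib, prefix, max_depth=8):
    """Walk next-hops until a connected route yields the outgoing interface."""
    entry = rib[prefix]
    hop = entry["next_hop"]
    for _ in range(max_depth):
        if entry["interface"] is not None:
            return hop, entry["interface"]   # programmed into the FIB
        cover = lpm(rib, hop)
        if cover is None:
            return None                      # unresolvable: kept out of the FIB
        entry = rib[cover]
        if entry["next_hop"] is not None:
            hop = entry["next_hop"]          # follow another level of indirection
    return None

print(resolve(RIB, "203.0.113.0/24"))  # ('192.0.2.254', 'eth0')
```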

Data Structures and Interaction

Routing Table vs. Forwarding Information Base

The routing table, formally known as the Routing Information Base (RIB), serves as a comprehensive logical database in the control plane of network routers. It aggregates and stores all routing information obtained from routing protocols, static configurations, and connected interfaces, including multiple paths to destinations with detailed attributes such as metrics, administrative distances, route tags, and preference indicators. Accessed primarily by the router's CPU for route selection and policy enforcement, the RIB enables flexible computation without hardware constraints, allowing it to accommodate large volumes of routes limited mainly by available software memory and processing resources.

In contrast, the Forwarding Information Base (FIB) is a streamlined, data-plane-oriented structure optimized for rapid lookups at line rates. Derived from the RIB, it includes only the best active routes, typically one primary path per destination prefix, along with essential forwarding details like next-hop addresses, outgoing interfaces, and encapsulation information, excluding extraneous attributes to minimize lookup overhead. Implemented in specialized hardware such as Ternary Content-Addressable Memory (TCAM) or algorithmic tables, the FIB supports parallel, high-speed prefix matching to forward packets without CPU intervention, ensuring low-latency performance in high-throughput environments.

The primary distinction between the RIB and FIB lies in their scope, accessibility, and optimization goals: the RIB prioritizes completeness and richness for control-plane decision-making, while the FIB emphasizes compactness and speed for data-plane operations, often resulting in a significantly smaller table focused solely on forwarding actions. This separation allows the control plane to handle complex route computations independently of the data plane's real-time requirements, with the FIB acting as a distilled, installable subset of RIB entries selected through best-path algorithms. As detailed in route installation processes, only FIB-eligible routes with resolved next-hops are programmed into the forwarding hardware.

Synchronization from the RIB to the FIB is orchestrated by the control plane's RIB manager, which pushes route updates to the data plane either incrementally, to apply changes efficiently without disrupting ongoing forwarding, or via full table dumps during system initialization, failover, or bulk reprogramming. This process ensures consistency, with mechanisms like bulk content downloaders facilitating scalable distribution across line cards in modular routers; any temporary discrepancies, such as those caused by route flapping, are mitigated through route dampening policies that penalize unstable paths in the RIB before propagation to the FIB, promoting network stability.

Performance implications arise from these architectural differences, particularly in scale: while the software-based RIB can theoretically support millions of routes constrained by CPU and memory, the hardware-bound FIB faces strict limits imposed by TCAM capacity or memory size, with modern routers typically accommodating 1 to 2 million IPv4 entries depending on the platform. For instance, Cisco Nexus 7000 series XL modules support up to 900,000 IPv4 FIB entries via 900K TCAM, beyond which overflow may require aggregation techniques or route filtering to prevent forwarding failures. These constraints underscore the need for careful route management to balance control-plane flexibility with data-plane throughput.
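The distillation from RIB to FIB can be sketched as follows. This is a simplified model with invented field names, not a vendor implementation: only the best resolved route per prefix survives, stripped down to the fields the data plane actually needs.

```python
def build_fib(rib_entries):
    """Distill a FIB from RIB candidates: keep the best resolved route per
    prefix (lowest AD, then lowest metric), reduced to forwarding actions."""
    best = {}
    for e in rib_entries:
        if e["interface"] is None:          # unresolved next-hop: not FIB-eligible
            continue
        cur = best.get(e["prefix"])
        if cur is None or (e["ad"], e["metric"]) < (cur["ad"], cur["metric"]):
            best[e["prefix"]] = e
    # The data plane needs forwarding actions, not protocol attributes.
    return {p: (e["next_hop"], e["interface"]) for p, e in best.items()}

rib = [
    {"prefix": "203.0.113.0/24", "next_hop": "192.0.2.254",
     "interface": "eth0", "ad": 110, "metric": 20},   # OSPF candidate
    {"prefix": "203.0.113.0/24", "next_hop": "198.51.100.1",
     "interface": "eth1", "ad": 1, "metric": 0},      # static: wins on lower AD
    {"prefix": "198.18.0.0/15", "next_hop": "10.0.0.1",
     "interface": None, "ad": 20, "metric": 0},       # unresolved: excluded
]
print(build_fib(rib))  # {'203.0.113.0/24': ('198.51.100.1', 'eth1')}
```

In a real router this reduced map is what gets programmed into TCAM or algorithmic lookup structures, while the richer candidate set stays in software.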

Multicast Routing

Multicast Routing Tables

Multicast routing tables, often referred to as the Tree Information Base (TIB) in protocols like PIM-SM, maintain forwarding state for multicast groups to enable efficient one-to-many or many-to-many data distribution in the control plane. These tables consist of entries keyed by source and group identifiers, such as (S,G) for source-specific forwarding trees, where S is the source address and G is the group address, or (*,G) for shared trees that aggregate traffic from multiple sources to group G via a rendezvous point (RP). Each entry includes an incoming interface determined by reverse path forwarding (RPF) and an outgoing interface list (OIF), which specifies the interfaces over which packets are replicated and forwarded to downstream receivers. The OIF is dynamically computed using macros like immediate_olist(S,G), which includes interfaces with active Join state minus those lost to asserts, ensuring precise control over traffic replication.

The building process for multicast routing tables relies on RPF checks to establish loop-free paths and dynamic membership signaling to populate the OIF; a minimal sketch of this check follows the table below. An RPF check verifies that an incoming packet from source S arrives on the interface indicated by the unicast routing table as the path to S; if not, the packet is discarded to prevent loops, with the RPF neighbor computed as the next hop toward S in the multicast routing information base (MRIB). For dynamic membership, pruning removes interfaces from the OIF when no downstream interest exists, triggered by Prune messages and maintained via Prune-Pending states with override timers (default 3 seconds) to allow grafting. Grafting, conversely, adds interfaces to the OIF through Join messages when receiver interest reemerges, propagating upstream to restore traffic flow along the tree. State machines, including downstream (e.g., Join, Prune-Pending) and upstream (e.g., Joined, NotJoined) ones, manage these transitions, with timers like the Join Timer (default 60 seconds) ensuring periodic refreshes.

Unlike unicast routing tables, which aggregate destination prefixes for point-to-point forwarding, multicast routing tables employ group-based addressing in the IPv4 range 224.0.0.0/4 (equivalent to 1110 in the high-order four bits) and require stateful entries for each active (S,G) or (*,G) pair to track per-group receiver memberships and tree branches. This results in a more distributed and tree-oriented structure, where the control plane must handle replication states rather than simple longest-prefix matches, often referencing unicast tables only for RPF computations.

Scalability challenges arise from potential state explosion in environments with numerous sources and large groups, as each active (S,G) entry consumes resources for OIF maintenance across the network. In inter-domain scenarios, this is exacerbated by the need to discover remote sources without flooding every (S,G) state globally; the Multicast Source Discovery Protocol (MSDP) mitigates this by enabling rendezvous points to exchange source-active (SA) messages via peer-RPF flooding, limiting cached states through filters and SA limits to prevent denial-of-service impacts.
Entry Type | Description | Key Components
(S,G) | Source-specific tree state for traffic from a single source S to group G. | Incoming interface via RPF to S; OIF with source-tree joins.
(*,G) | Shared tree state aggregating multiple sources to group G via the RP. | Incoming interface via RPF to the RP; OIF with group joins.
(S,G,rpt) | Prune state on the RP tree to suppress specific source traffic. | Derived from (*,G); OIF excludes pruned interfaces.
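The RPF check and OIF replication for an (S,G) entry can be illustrated with a brief Python sketch. The data layout and names are invented for the example, and real PIM also involves assert processing and the state machines described above.

```python
def rpf_check(mrib_lookup, in_iface, source):
    """Accept a multicast packet only if it arrived on the interface the
    unicast/MRIB table would use to reach the source (loop prevention)."""
    expected_iface, _rpf_neighbor = mrib_lookup(source)
    return in_iface == expected_iface

def forward_multicast(entry, packet, in_iface, mrib_lookup):
    """(S,G) forwarding: RPF check, then replicate to the outgoing
    interface list (OIF), never back out the incoming interface."""
    if not rpf_check(mrib_lookup, in_iface, packet["src"]):
        return []                          # fails RPF: drop to prevent loops
    return [i for i in entry["oif"] if i != in_iface]

# Toy MRIB: the route toward 198.51.100.7 points out eth0 via neighbor 10.0.0.1.
mrib = lambda src: ("eth0", "10.0.0.1")
sg_entry = {"oif": ["eth1", "eth2"]}
pkt = {"src": "198.51.100.7", "group": "239.1.1.1"}
print(forward_multicast(sg_entry, pkt, "eth0", mrib))  # ['eth1', 'eth2']
print(forward_multicast(sg_entry, pkt, "eth1", mrib))  # [] (RPF failure)
```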

Multicast Routing Protocols

Multicast routing protocols enable the construction and maintenance of distribution trees by exchanging signaling messages among routers and hosts, allowing efficient delivery of traffic from sources to multiple receivers. These protocols populate multicast tables through mechanisms like flooding, pruning, and explicit joins, distinct from unicast protocols that focus on point-to-point paths.

The primary intra-domain protocol family is Protocol Independent Multicast (PIM), which operates independently of underlying unicast routing protocols such as OSPF or BGP. PIM variants include Sparse Mode (PIM-SM) and Dense Mode (PIM-DM), each suited to different network densities. Additional variants include Bidirectional PIM (BiDir-PIM), which builds bidirectional shared trees for many-to-many applications like video conferencing, using a designated forwarder to avoid duplicate packets and reducing state overhead compared to unidirectional trees; and Source-Specific Multicast (SSM), a PIM mode that uses only source-specific (S,G) channels without an RP, enhancing security by requiring receivers to know sources in advance, typically over the IPv6 range FF3x::/96 or the IPv4 232/8 range.

PIM-SM builds efficient shared trees rooted at a rendezvous point (RP) for initial distribution, using Join messages from receivers to propagate toward the RP and Prune messages to remove unnecessary branches. In PIM-SM, sources register with the RP by encapsulating data packets, which the RP decapsulates and forwards down the shared tree; this is followed by a Register-Stop to halt encapsulation once a source-specific tree is established. The RP facilitates initial rendezvous without requiring sources and receivers to know each other a priori, optimizing for sparse receiver populations by minimizing state and overhead.

In contrast, PIM-DM assumes dense receiver distribution and initially floods datagrams to all interfaces using the underlying unicast routing information base, relying on reverse path forwarding (RPF) to prevent loops. Prune messages are sent upstream to halt forwarding to subnets without interested receivers, creating temporary prune states that expire unless refreshed; Graft messages re-enable forwarding when new receivers join. Unlike PIM-SM, PIM-DM avoids a central rendezvous point, reducing single points of failure but potentially wasting bandwidth in sparse scenarios through initial flooding.

Host-router signaling is handled by the Internet Group Management Protocol (IGMP) for IPv4 and Multicast Listener Discovery (MLD) for IPv6, which inform routers of local group memberships. IGMP version 3 (IGMPv3) supports source-specific filtering with INCLUDE (allow only listed sources) and EXCLUDE (block listed sources) modes, enabling reports for specific (S,G) states via Membership Reports. Similarly, MLD version 2 (MLDv2) provides analogous functionality for IPv6, using Queries from routers and Reports from hosts to maintain filter states on attached links.

For inter-domain multicast, Multiprotocol BGP (MBGP) extends BGP-4 to advertise multicast routes using the MP_REACH_NLRI attribute with Subsequent Address Family Identifier (SAFI) 2, allowing separate unicast and multicast routing information bases. MBGP enables border routers to exchange reachability for multicast prefixes across autonomous systems. Complementing this, the Multicast Source Discovery Protocol (MSDP) connects PIM-SM domains by having RPs flood Source-Active (SA) messages over TCP peering sessions, sharing active (S,G) information so remote RPs can initiate joins for interested groups.

Compared to unicast protocols like OSPF, which compute link-state shortest paths for individual destinations, PIM-SM employs shared trees to aggregate state for multiple receivers per group, reducing per-flow overhead in multicast environments. This tree-based approach contrasts with OSPF's flooding of link-state advertisements for global topology awareness, prioritizing multicast's one-to-many efficiency over unicast's point-to-point precision.
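IGMPv3's INCLUDE/EXCLUDE semantics are simple to state in code. The following sketch (the function name is invented) applies the RFC 3376 filter-mode rules to decide whether traffic from a given source should be accepted for a group.

```python
def source_allowed(filter_mode: str, sources: set, src: str) -> bool:
    """IGMPv3/MLDv2-style source filtering (RFC 3376 semantics):
    INCLUDE admits only listed sources; EXCLUDE blocks listed sources."""
    if filter_mode == "INCLUDE":
        return src in sources
    if filter_mode == "EXCLUDE":
        return src not in sources
    raise ValueError("filter_mode must be INCLUDE or EXCLUDE")

# Receiver state for group 232.1.1.1: accept only 198.51.100.7 (SSM-style).
print(source_allowed("INCLUDE", {"198.51.100.7"}, "198.51.100.7"))  # True
print(source_allowed("INCLUDE", {"198.51.100.7"}, "203.0.113.9"))   # False
```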

Modern Developments

Software-Defined Networking

Software-defined networking (SDN) represents a transformative approach to network architecture that decouples the control plane from the underlying data plane hardware, enabling centralized programming and orchestration of network resources for greater flexibility and automation. This separation allows network operators to manage and optimize traffic through software interfaces rather than relying on distributed, device-specific configurations, providing a global view of the network state to facilitate intelligent decision-making. Originating from efforts to enable experimental protocols in production environments, SDN addresses limitations in traditional networking by shifting control logic to programmable software platforms.

At the core of SDN architecture is the SDN controller, a centralized entity that computes and installs forwarding rules across the network using protocols like OpenFlow, which standardizes communication between the controller and switches. Examples of widely adopted open-source controllers include ONOS (Open Network Operating System), designed for carrier-grade scalability and high availability in large-scale deployments, and Ryu, a lightweight Python-based framework supporting OpenFlow and other southbound APIs for rapid prototyping and integration. These controllers employ algorithms such as constrained shortest path routing, often based on variants of Dijkstra's algorithm, to determine optimal paths considering factors like bandwidth, latency, or security policies, thereby enabling automated traffic engineering and resource allocation.

The advantages of SDN stem from its centralized model, which simplifies policy enforcement across the entire network by applying consistent rules from a single point, reducing configuration errors and operational complexity compared to distributed protocols. This approach also supports dynamic reconfiguration through northbound interfaces, such as RESTful APIs, allowing applications to request real-time adjustments like load balancing or fault recovery without manual intervention on individual devices. By providing a holistic view, SDN overcomes the silos and coordination issues of legacy distributed control planes, fostering automation and programmability that enhance responsiveness to changing demands.

SDN's evolution began with the introduction of OpenFlow in 2008, a protocol that exposed switch flow tables for external control, marking the shift toward programmable networks in campus and data center environments. Building on this foundation, advancements progressed to more expressive data plane programmability with the P4 language in 2014, which allows protocol-independent specification of packet processing behaviors directly on switches, extending SDN's scope beyond fixed match-action paradigms. By 2025, P4 has become a key technology for programming next-generation switches, integrating with SDN controllers to support custom forwarding logic in diverse deployment scenarios while maintaining compatibility with earlier deployments.
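Constrained shortest-path computation of the kind controllers perform can be sketched with a bandwidth-constrained Dijkstra variant. The topology encoding below is invented for the example and far simpler than a production controller's network model.

```python
import heapq

def constrained_shortest_path(graph, src, dst, min_bw):
    """Dijkstra over only those links meeting a bandwidth constraint, as an
    SDN controller might compute paths from its global topology view.
    graph: {node: [(neighbor, latency_cost, available_bw), ...]}"""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, latency, bw in graph.get(node, []):
            if bw >= min_bw and nbr not in seen:  # prune links below the constraint
                heapq.heappush(pq, (cost + latency, nbr, path + [nbr]))
    return None  # no path satisfies the constraint

topo = {
    "s1": [("s2", 1, 10), ("s3", 1, 1)],
    "s2": [("s4", 1, 10)],
    "s3": [("s4", 1, 1)],
    "s4": [],
}
# The low-bandwidth s1-s3-s4 path is pruned; traffic is steered via s2.
print(constrained_shortest_path(topo, "s1", "s4", min_bw=5))  # (2, ['s1', 's2', 's4'])
```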

Centralized Control Plane Architectures

Centralized control plane architectures in networking decouple the control logic from data forwarding devices, enabling a unified view of the network for coordinated decision-making. These architectures can be categorized into logical centralization, where control functions are distributed across multiple entities but operate as if centrally coordinated, and physical centralization, where a dedicated controller or cluster manages the entire network via standardized interfaces. Logical centralization is exemplified in the Internet core by BGP, which distributes routing decisions among autonomous systems while enforcing centralized policy through route selection and advertisement rules, simplifying interdomain coordination without a single physical entity. In contrast, physical centralization relies on SDN controllers that interact with switches through southbound APIs like OpenFlow, providing direct, programmable oversight of forwarding rules.

A prominent example of physical centralization is Google's B4 wide-area network, which employs a centralized traffic engineering controller to manage inter-data-center traffic across dozens of sites. The B4 architecture uses OpenFlow-based controllers at each site, augmented by a global traffic engineering server that computes multipath tunnels and allocates bandwidth via max-min fairness, achieving average link utilization of 90% and up to 100% during peaks. This setup abstracts the network into supernodes for scalability, handling thousands of daily topology changes while integrating with traditional protocols like BGP for hybrid operation, resulting in 2-3 times greater efficiency than conventional WANs.

To enhance scalability in physical centralization, controller clustering distributes the load across multiple instances using east-west interfaces for state synchronization. Frameworks like the Distributed SDN Control Plane Framework (DSF) employ publish-subscribe protocols, such as DDS-based RTPS, to enable state sharing among controllers in flat or hierarchical models, supporting heterogeneous environments and handling up to 30,000 flow requests per second without bottlenecks. These interfaces ensure consistent global network views, mitigating state inconsistencies that arise in distributed setups.

Despite these advantages, centralized architectures face challenges including single points of failure and communication latency between controllers and data plane devices. A controller outage can disrupt the entire network, while southbound interactions introduce delays, especially in large-scale deployments where flow installation requests overload the system. Mitigation strategies include redundancy through hot-standby replication and failover mechanisms, as in B4's Paxos-based leader election with sub-10-second recovery, alongside distributed controller designs that parallelize processing to reduce latency by up to 33 times via multi-threading.

In 2025, centralized control planes increasingly integrate artificial intelligence for predictive optimization within intent-based networking frameworks, where high-level intents (e.g., latency targets) are translated into configurations via machine learning-driven orchestration. This enables proactive adjustments, such as AI-forecasted path optimizations using real-time telemetry, supported by standardized interfaces for closed-loop automation and enhanced autonomy in telecom networks.
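B4's bandwidth allocation is commonly described in terms of max-min fairness. As a rough illustration of that principle only (not Google's implementation; the progressive-filling loop and demand values are invented for the example), the following sketch divides a link's capacity so that no flow can receive more without taking from one that already has less.

```python
def max_min_allocate(capacity: float, demands: dict) -> dict:
    """Progressive filling for max-min fair allocation: repeatedly split the
    residual capacity equally among unsatisfied flows, capping at demand."""
    alloc = {f: 0.0 for f in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-9:
        share = remaining / len(active)
        for f in sorted(active, key=lambda x: demands[x]):  # smallest demand first
            grant = min(share, demands[f] - alloc[f])
            alloc[f] += grant
            remaining -= grant
            if alloc[f] >= demands[f]:
                active.discard(f)  # satisfied flows stop competing
    return alloc

# Three tunnels competing for a 10 Gb/s link:
result = max_min_allocate(10, {"A": 2, "B": 8, "C": 8})
print({f: round(v, 3) for f, v in result.items()})
# A gets its full 2; B and C split the remainder: {'A': 2.0, 'B': 4.0, 'C': 4.0}
```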

References

  1. [1]
    What is the control plane? | Control plane vs. data plane - Cloudflare
    The control plane is the part of a network that controls how data is forwarded, while the data plane or forwarding plane is the actual forwarding process.
  2. [2]
    What is a Control Plane? - IBM
    A control plane is a critical part of a computer network that carries information through the network and the path data travels between devices.
  3. [3]
    Software-Defined Networking (SDN) Definition - Cisco
    SDN is an architecture designed to make a network more flexible and easier to manage. SDN centralizes management by abstracting the control plane from the data ...
  4. [4]
    [PDF] Control Plane Policing - Cisco
Control plane—A collection of processes that run at the process level on the Route Processor (RP). These processes collectively provide high-level control for ...
  5. [5]
    Control Plane Policing - Cisco
    The Control Plane Policing feature allows users to configure a quality of service (QoS) filter that manages the traffic flow of control plane packets.
  6. [6]
    [PDF] 4.2BSD Networking Implementation Notes Revised July, 1983
    The system standard ''routing daemon'' uses a variant of the Xerox NS Routing Information Protocol [Xerox82] to maintain up to date routing tables in our ...
  7. [7]
    A brief history of router architecture - APNIC Blog
Mar 12, 2021 · Here's what we've learnt about networks and the routers that interconnect them in the last 50 years.
  8. [8]
    Control Plane vs. Data Plane - IBM
    By separating the task of routing data from the task of forwarding it, control and data plane architecture allows each function to be optimized independently, ...
  9. [9]
    [PDF] Control-Data Plane Separation
    • The router data plane: • Forwarding table, switching fabric. • Buffering, Scheduling. • Control plane protocols: OSPF for intra-domain routing ... • Decouple ...
  10. [10]
    Control planes and data planes - AWS Fault Isolation Boundaries
    When you launch an EC2 instance, the control plane has to perform multiple tasks like finding a physical host with capacity, allocating the network interface(s) ...
  11. [11]
    Describe Administrative Distance - Cisco
Sep 27, 2024 · Administrative distance is the first criterion that a router uses to determine which routing protocol to use if two protocols provide route information for the ...
  12. [12]
    RFC 1812 - Requirements for IP Version 4 Routers - IETF Datatracker
    Routers must use the most specific matching route (the longest matching network prefix) when forwarding traffic. ... (2) Longest Match Longest Match is a ...
  13. [13]
  14. [14]
  15. [15]
  16. [16]
  17. [17]
  18. [18]
  19. [19]
    RFC 2992 - Analysis of an Equal-Cost Multi-Path Algorithm
    Abstract Equal-cost multi-path (ECMP) is a routing technique for routing packets along multiple paths of equal cost. The forwarding engine identifies paths ...
  20. [20]
  21. [21]
    RFC 5880 - Bidirectional Forwarding Detection (BFD)
    This document describes a protocol intended to detect faults in the bidirectional path between two forwarding engines.
  22. [22]
    Longest Prefix Match Routing - NetworkLessons.com
    Mar 31, 2022 · Routers use longest prefix match routing after the other tie-breakers (administrative distance and metric). This lesson explains everything.
  23. [23]
    Cisco Nexus 5600 Series NX-OS Unicast Routing Configuration ...
    Mar 12, 2014 · The TCAM table is shared between longest prefix match (LPM) route /32 unicast route. The hash table is shared between the /32 unicast entries ...
  24. [24]
    IP Routing Configuration Guide, Cisco IOS XE 17.x - PBR Recursive ...
    Nov 2, 2022 · The PBR Recursive Next Hop feature enhances route maps to enable configuration of a recursive next-hop IP address that is used by policy-based routing (PBR).
  25. [25]
    Configuring Route Aggregation | Junos OS - Juniper Networks
    The route aggregation methodology helps minimize the number of routing entries in an IP network by consolidating selected multiple routes into a single route ...
  26. [26]
    [PDF] Configuring Static Routing - Cisco
    A floating static route is a static route that the router uses to back up a dynamic route. You must configure a floating static route with a higher ...
  27. [27]
    IP Routing Configuration Guide, Cisco IOS Release 15.2(7)Ex ...
Sep 17, 2020 · When an interface goes down, all static routes through that interface are removed from the IP routing table. When the software can no longer ...
  28. [28]
    RFC 3623 - Graceful OSPF Restart - IETF Datatracker
    This memo documents an enhancement to the OSPF routing protocol, whereby an OSPF router can stay on the forwarding path even as its OSPF software is restarted.
  29. [29]
    [PDF] Policy-Based Routing - Cisco
    To enable policy-based routing on an interface, indicate which route map the device should use by using the ip policy route-map map-tag command in interface ...
  30. [30]
    RFC 3222 - Terminology for Forwarding Information Base (FIB ...
    This document describes the terms to be used in a methodology that determines the IP packet forwarding performance of IP routers.
  31. [31]
    RFC 8430 - RIB Information Model - IETF Datatracker
    ... Routing Information Base (RIB). Protocols and configurations push data into the RIB, and the RIB manager installs state into the hardware for packet ...
  32. [32]
    Cisco Nexus 7000 Series NX-OS Unicast Routing Configuration ...
    Oct 8, 2024 · Maximum TCAM Entries and FIB Scale Limits​​ Table 16-2 describes the supported maximum FIB scale entries on the Nexus 7000 system configuration ...
  33. [33]
    Towards TCAM-based scalable virtual routers - ACM Digital Library
Experimental results show that, by using the two approaches for storing 14 full IPv4 FIBs, the TCAM memory requirement can be reduced by about 92% and 82% ...
  34. [34]
    Routing Configuration Guide for Cisco NCS 5500 Series Routers ...
Dec 16, 2024 · Instead, RIB downloads the set of selected best routes to the FIB processes, by the Bulk Content Downloader (BCDL) process, onto each line card.
  35. [35]
    Cisco 8000 FIB Scale - xrdocs
    Mar 22, 2023 · Until now, Cisco 8000 officially supports up to 2M IPv4 and 512k IPv6 prefixes in FIB. This is a multi-dimensional number meaning both address ...
  36. [36]
    RFC 4601 - Protocol Independent Multicast - Sparse Mode (PIM-SM)
    PIM-SM is a multicast routing protocol that can use the underlying unicast routing information base or a separate multicast- capable routing information base.
  37. [37]
    RFC 1112 - Host extensions for IP multicasting - IETF Datatracker
    ... addresses range from 224.0.0.0 to 239.255.255.255. The address 224.0.0.0 is ... An IP host group address is mapped to an Ethernet multicast address by ...
  38. [38]
    RFC 3618 - Multicast Source Discovery Protocol (MSDP)
    In addition, to mitigate state explosion during denial of service and other attacks, SA filters and limits SHOULD be used with MSDP to limit the sources and ...
  39. [39]
    RFC 5110 - Overview of the Internet Multicast Routing Architecture
    This document describes multicast routing architectures that are currently deployed on the Internet. This document briefly describes those protocols and ...
  40. [40]
    RFC 7761 - Protocol Independent Multicast - Sparse Mode (PIM-SM)
    PIM-SM is a multicast routing protocol that can use the underlying unicast routing information base or a separate multicast- capable routing information base.
  41. [41]
  42. [42]
    RFC 3376 - Internet Group Management Protocol, Version 3
    RFC 3376 specifies IGMPv3, used by IPv4 systems to report multicast group memberships, adding source filtering for specific source addresses.
  43. [43]
    RFC 3810 - Multicast Listener Discovery Version 2 (MLDv2) for IPv6
    MLD is used by an IPv6 router to discover the presence of multicast listeners on directly attached links, and to discover which multicast addresses are of ...
  44. [44]
    RFC 4760 - Multiprotocol Extensions for BGP-4 - IETF Datatracker
    This document defines extensions to BGP-4 to enable it to carry routing information for multiple Network Layer protocols (eg, IPv6, IPX, L3VPN, etc.).
  45. [45]
  46. [46]
  47. [47]
    Open Network Operating System (ONOS) SDN Controller for SDN ...
ONOS is a leading open-source SDN controller for building next-generation SDN/NFV solutions, designed for carrier-grade solutions with simplified interfaces.
  48. [48]
    [PDF] OpenFlow: Enabling Innovation in Campus Networks
    ABSTRACT. This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use ev- ery day. OpenFlow is based on ...
  49. [49]
    P4: programming protocol-independent packet processors
    In this paper we propose P4 as a strawman proposal for how OpenFlow should evolve in the future. We have three goals: (1) Reconfigurability in the field: ...
  50. [50]
  51. [51]
    [PDF] Better Internet Routing Based on SDN Principles - acm sigcomm
    The separation of the network control from the data plane and the consequent logical centralization of the routing control plane can drastically simplify ...
  52. [52]
    [PDF] Logically Centralized? State Distribution Trade-offs in Software ...
    Aug 13, 2012 · In essence, SDN gives network designers freedom to refactor the network control plane, allowing network control logic to be designed and ...
  53. [53]
    [PDF] B4: Experience with a Globally-Deployed Software Defined WAN
    B4's centralized traffic engineering service drives links to near 100% uti- lization, while splitting application flows among multiple paths to balance capacity ...
  54. [54]
    DSF: A Distributed SDN Control Plane Framework for the East/West Interface
Summary of DSF Framework for Distributed SDN Control Plane.
  55. [55]
    A Review of the Control Plane Scalability Approaches in Software ...
The main drawbacks of this approach is the high latency and causing bottlenecks around the controller. Additionally, a centralized architecture may form a ...
  56. [56]
    Intent-based networking: unlocking the full potential of AI in ...
Jul 1, 2025 · Intent-based networking (IBN), powered by AI, can transform telecom operations by aligning network behavior with business intent.