
Virtual Extensible LAN

Virtual Extensible LAN (VXLAN) is a network virtualization technology that overlays Layer 2 Ethernet networks over an underlying Layer 3 infrastructure, enabling the creation of virtualized network segments in large-scale environments such as data centers. It achieves this by encapsulating original Ethernet frames within UDP packets, which are then routed across IP networks using VXLAN Tunnel End Points (VTEPs) located on hypervisors or network devices to handle encapsulation and decapsulation. This approach allows virtual machines (VMs) in different physical locations to communicate as if connected to the same Layer 2 segment, while preserving Layer 2 semantics such as MAC addressing. VXLAN was developed to overcome the scalability limitations of traditional VLANs, which are restricted to 4094 identifiers under the IEEE 802.1Q standard due to its 12-bit VLAN ID field. In contrast, VXLAN employs a 24-bit VXLAN Network Identifier (VNI) to support up to 16 million unique segments, facilitating multi-tenancy, isolation of tenant traffic, and efficient resource utilization in virtualized and cloud setups. Key benefits include support for live VM migration (such as VMware vMotion) without IP address changes or subnet constraints, and the ability to leverage existing IP multicast or unicast mechanisms for broadcast, unknown unicast, and multicast (BUM) traffic handling, thereby reducing the flood domain size and improving performance for east-west data flows. Initiated around 2011 by a consortium of industry leaders including Arista, Broadcom, Cisco, Citrix, Red Hat, and VMware to address growing demands for flexible networking, VXLAN gained formal standardization through RFC 7348, published by the Internet Engineering Task Force (IETF) in August 2014. Since then, it has seen widespread adoption in software-defined networking (SDN) architectures, particularly when combined with BGP Ethernet VPN (EVPN) for control-plane functions, enabling dynamic MAC and IP address learning across distributed environments.
Major implementations appear in hypervisors such as VMware ESXi, open-source projects such as Open vSwitch and the Linux kernel, and hardware from vendors including Cisco, Juniper, and Arista, making VXLAN a cornerstone of scalable, multi-tenant cloud infrastructures.

Overview

Definition and Purpose

Virtual Extensible LAN (VXLAN) is a network virtualization technology that serves as an encapsulation protocol for extending Layer 2 Ethernet networks over an underlying Layer 3 infrastructure, utilizing MAC-in-UDP tunneling to encapsulate Ethernet frames within UDP/IP packets. This design enables the creation of virtualized Layer 2 overlays, allowing virtual machines (VMs) and other endpoints to communicate as if they were on the same local network segment, even when separated by routed networks. The primary purpose of VXLAN is to overcome the scalability limitations of traditional VLANs, which are restricted to a maximum of 4094 unique identifiers due to the 12-bit VLAN ID field in the 802.1Q header. By employing a 24-bit Virtual Network Identifier (VNI) within its encapsulation header, VXLAN supports up to 16 million distinct logical network segments, facilitating large-scale network segmentation without the constraints of physical Layer 2 boundaries. This expansion is particularly essential in modern data centers, where the proliferation of virtualized workloads demands flexible segmentation to isolate traffic efficiently. In multi-tenant cloud and data center environments, VXLAN plays a crucial role in enabling network isolation for diverse tenants, ensuring separation of broadcast domains and traffic while optimizing resource utilization across distributed infrastructure. It operates on the principle of overlay networks, where the VXLAN overlay provides virtualized Layer 2 connectivity atop a physical Layer 3 underlay network, using tunnel endpoints to manage the encapsulation and decapsulation processes without altering the underlay's routing fabric. This separation allows administrators to scale and manage virtual networks independently of the underlying physical topology, supporting dynamic workload mobility and enhanced security through tenant-specific segmentation.

Key Features and Benefits

Virtual Extensible LAN (VXLAN) employs a 24-bit Virtual Network Identifier (VNI) that enables segmentation of up to 16 million unique virtual networks, vastly surpassing the 4094-segment limit of traditional VLANs and addressing scalability challenges in large-scale data centers. This feature allows for fine-grained isolation of tenant networks without the constraints of Layer 2 broadcast domains. Additionally, VXLAN uses UDP-based encapsulation to transport Layer 2 frames over IP networks, leveraging the standard UDP port 4789 for seamless integration with existing infrastructure. For handling broadcast, unknown unicast, and multicast (BUM) traffic, VXLAN supports multicast-based distribution using underlay protocols such as PIM-SM, or unicast head-end replication (HER), to efficiently replicate packets across the overlay without flooding the underlay network. The primary benefits of VXLAN include enhanced scalability in virtualized environments by overlaying Layer 2 networks over a robust Layer 3 underlay. This design facilitates seamless mobility of virtual machines across physical hosts and subnets, preserving their IP and MAC addresses and network configurations without requiring reconfiguration or address changes. Furthermore, by operating over a routed Layer 3 underlay, VXLAN reduces dependency on the Spanning Tree Protocol (STP), mitigating its limitations in large topologies, such as slow convergence and restricted path utilization, while leveraging the inherent loop prevention of Layer 3 routing. Quantitatively, VXLAN's 16 million segment capacity provides an orders-of-magnitude improvement over VLANs, enabling massive multi-tenancy in cloud data centers. VXLAN integrates effectively with software-defined networking (SDN) frameworks, allowing dynamic provisioning and orchestration of overlay networks through centralized controllers that automate tenant isolation and policy enforcement.
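The scale difference between the two identifier spaces follows directly from their field widths, as a short calculation illustrates:

```python
# Segment capacity: traditional 802.1Q VLANs vs. VXLAN VNIs.
vlan_id_bits = 12
vxlan_vni_bits = 24

# 802.1Q reserves VID 0 (priority-tagged frames) and 4095,
# leaving 4094 usable VLAN IDs.
usable_vlans = 2**vlan_id_bits - 2
vxlan_segments = 2**vxlan_vni_bits

print(usable_vlans)                    # 4094
print(vxlan_segments)                  # 16777216
print(vxlan_segments // usable_vlans)  # 4098 (roughly 4100x more segments)
```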

Technical Fundamentals

Encapsulation and VTEPs

In VXLAN, the encapsulation process involves wrapping an original Ethernet frame in multiple outer headers to form an overlay tunnel across an underlay IP network. At the ingress VTEP, the inner Ethernet frame, carrying the original Layer 2 payload, is prepended with a VXLAN header, a UDP header (using destination port 4789), an outer IP header, and an outer Ethernet header. This creates a tunneled packet that traverses the underlay network as standard IP traffic, preserving the Layer 2 semantics of the original frame while enabling scalability beyond traditional VLAN limitations. Upon reaching the egress VTEP, the outer headers are stripped away, and the inner frame is forwarded to the appropriate local endpoint. The VXLAN Tunnel End Point (VTEP) serves as the critical ingress and egress point for this encapsulation and decapsulation process. VTEPs can be implemented in hardware on switches or in software within hypervisors on virtualized servers, where they manage the tunneling of traffic between virtual machines or other endpoints. Key functions include learning mappings of MAC addresses to remote VTEP IP addresses, performing the header additions and removals, and ensuring isolation across different overlay segments using the 24-bit VXLAN Network Identifier (VNI). The VNI provides segmentation similar to VLAN IDs but supports up to 16 million unique networks. For broadcast, unknown unicast, and multicast (BUM) traffic, VXLAN relies on mechanisms to flood packets efficiently across the overlay without flooding the entire underlay. In the standard multicast-based approach, each VNI is mapped to a specific IP multicast group, and the ingress VTEP sends BUM packets to that group using protocols like PIM-Sparse Mode (PIM-SM); remote VTEPs joined to the group then decapsulate and forward the traffic locally.
Alternatively, in environments using BGP Ethernet VPN (EVPN), BUM traffic can be handled via ingress replication, where the ingress VTEP (or provider edge device) replicates and sends individual unicast packets to each remote VTEP's IP address, avoiding the need for underlay multicast but potentially increasing bandwidth usage. The underlay network supporting VXLAN must provide reliable IP connectivity between VTEPs, typically over IPv4 or IPv6, to ensure seamless tunnel operation. A key consideration is the Maximum Transmission Unit (MTU), as VXLAN encapsulation adds approximately 50 bytes of overhead to the original frame; to avoid fragmentation and performance degradation, the underlay MTU should be at least 1550 bytes when supporting standard 1500-byte Ethernet payloads. VTEPs are not permitted to fragment VXLAN packets, so proper MTU configuration across the path is essential for end-to-end delivery.
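The overhead and MTU figures above can be reproduced with a short calculation; note that an IP MTU is measured from the outer IP header onward, so the outer Ethernet header does not count against it:

```python
# Per-packet overhead VXLAN adds for an IPv4 underlay, and the minimum
# underlay MTU needed to carry a standard 1500-byte Ethernet payload.
OUTER_ETHERNET = 14  # bytes (18 if the underlay uses 802.1Q tagging)
OUTER_IPV4 = 20
OUTER_UDP = 8
VXLAN_HEADER = 8

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
inner_frame = 1500 + 14  # inner payload plus inner Ethernet header (no FCS)

# Underlay IP MTU must fit the inner frame plus outer IP/UDP/VXLAN headers.
required_underlay_mtu = inner_frame + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER

print(overhead)               # 50
print(required_underlay_mtu)  # 1550
```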

Header Structure and Packet Format

The VXLAN encapsulation adds an outer header stack to the original Ethernet frame to enable transport over an IP network. The outer headers consist of an Ethernet header (14 bytes, or 18 with 802.1Q tagging), followed by an IP header (20 bytes for IPv4 or 40 bytes for IPv6), and a UDP header (8 bytes) with a destination port of 4789, which is the IANA-assigned port for VXLAN traffic. The source UDP port is typically derived from a hash of the inner packet headers to provide entropy for load balancing. The core of the encapsulation is the 8-byte VXLAN header, inserted immediately after the UDP header. This header begins with an 8-bit flags field in which the I flag (0x08) must be set to 1 to indicate that a valid VNI follows, with the remaining seven bits reserved and set to 0. The flags are followed by a 24-bit reserved field, the 24-bit VXLAN Network Identifier (VNI), which uniquely identifies the virtual network segment (supporting up to 16 million segments), and a final 8-bit reserved field, with all reserved bits set to 0. The inner payload is the original Ethernet frame, including source and destination MAC addresses, an optional 802.1Q tag, the EtherType, and the higher-layer payload, but excluding the frame check sequence (FCS) to avoid duplication. The full VXLAN packet structure thus sequences as: outer Ethernet header → outer IP header → outer UDP header → VXLAN header → inner Ethernet frame. This encapsulation introduces approximately 50 bytes of overhead for IPv4 traffic (14-byte Ethernet + 20-byte IPv4 + 8-byte UDP + 8-byte VXLAN), increasing to about 70 bytes for IPv6. For extensibility beyond Ethernet payloads, VXLAN-GPE (Generic Protocol Extension) modifies the header to support protocols like IPv4, IPv6, or the Network Service Header (NSH) directly. It reuses the 8-byte structure but repurposes reserved bits: adding a 2-bit version field (initially 0), a P-bit indicating that a Next Protocol field is present (an 8-bit field identifying the payload type, e.g., 0x03 for Ethernet), a B-bit for ingress-replicated broadcast/unknown-unicast/multicast traffic, an O-bit for operations, administration, and maintenance (OAM) packets, and an instance (I) bit, while retaining the 24-bit VNI and keeping the remaining bits reserved as 0; it uses UDP port 4790.
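The 8-byte header layout described above (flags, 24 reserved bits, 24-bit VNI, 8 reserved bits) can be sketched with a few lines of bit manipulation; this is an illustrative helper, not a full packet implementation:

```python
import struct

def build_vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VXLAN header: flags, 24-bit reserved, 24-bit VNI, 8-bit reserved."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # I flag set: the VNI field is valid
    # First 32-bit word: flags in the top byte, 24 reserved bits as zero.
    # Second word: VNI in the top 24 bits, final reserved byte as zero.
    return struct.pack("!II", flags << 24, vni << 8)

def parse_vxlan_header(header: bytes) -> int:
    """Return the VNI if the I flag is set; raise otherwise."""
    word1, word2 = struct.unpack("!II", header[:8])
    if not (word1 >> 24) & 0x08:
        raise ValueError("I flag not set; no valid VNI")
    return word2 >> 8

hdr = build_vxlan_header(5000)
print(hdr.hex())                # 0800000000138800
print(parse_vxlan_header(hdr))  # 5000
```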

History and Development

Origins and Initial Development

The development of Virtual Extensible LAN (VXLAN) originated in the early 2010s as a collaborative effort among key industry players, including VMware, Arista Networks, and Cisco, to address the scalability challenges posed by rapid server virtualization in data centers. The surge in virtual machines (VMs) created demands for far more than the 4094 network segments supported by traditional IEEE 802.1Q VLANs, while also requiring the extension of Layer 2 connectivity over Layer 3 networks to enable VM mobility across geographically distributed sites for cloud providers and enterprises. This was particularly critical for multi-tenant environments, where elastic provisioning and isolation of resources were essential, but existing technologies like Spanning Tree Protocol struggled with loop prevention and MAC address table limitations in top-of-rack switches. The foundational specification emerged from pre-standardization work, culminating in the first experimental Internet-Draft published on August 26, 2011, authored by M. Mahalingam and T. Sridhar (VMware), D. G. Dutt and L. Kreeger (Cisco), K. Duda (Arista), P. Agarwal (Broadcom), M. Bursell (Citrix), and C. Wright (Red Hat). This draft proposed VXLAN as a UDP-based encapsulation to create overlay networks, allowing up to 16 million unique VXLAN Network Identifiers (VNIs) for virtual segments while tunneling Ethernet frames across IP underlays without altering the underlying physical infrastructure. The initiative was motivated by the need to decouple workload identity from physical location, facilitating seamless workload migration in cloud setups. Early prototypes focused on integrating VXLAN into existing virtualization platforms for proof-of-concept testing and interoperability. VMware began incorporating VXLAN into its vSphere and nascent NSX networking suite around 2011-2012, enabling features like live VM migration (vMotion) over Layer 3 boundaries to support dynamic cloud scaling.
Arista Networks demonstrated hardware-based VXLAN termination in its EOS operating system on the 7500 Series switches at VMworld 2012, leveraging VTEPs for efficient encapsulation and bridging in virtualized environments. Cisco introduced VXLAN support in its Nexus 1000V virtual switch in January 2012, followed by hardware integration in the Nexus 9000 series, allowing initial vendor collaborations to validate multi-tenant isolation and performance before broader IETF engagement. These efforts involved iterative draft proposals and joint interoperability testing among the vendors, ensuring that VXLAN prioritized openness and interoperability for emerging cloud architectures without relying on proprietary extensions.

Standardization Process

The standardization of Virtual Extensible LAN (VXLAN) progressed through the efforts of the Internet Engineering Task Force (IETF), particularly via the Network Virtualization Overlays (NVO3) Working Group, chartered in 2012 to develop protocols and extensions for network virtualization overlays in data center environments. This group emerged from a Birds of a Feather (BoF) session held at IETF 80 in March 2011, focusing on overlay technologies to support scalable multi-tenant connectivity over Layer 3 networks. Key milestones in the process included the submission of early individual Internet-Drafts in late 2011, with significant revisions and community feedback occurring throughout 2012 as the proposals aligned with NVO3 objectives. These drafts built on vendor prototypes to propose VXLAN as a UDP-based encapsulation method addressing VLAN limitations. The culmination of this phase was the publication of RFC 7348 in August 2014, an informational document authored by contributors from VMware, Cisco, Arista, Broadcom, Citrix, and Red Hat, which defined the core VXLAN protocol, including its 24-bit VXLAN Network Identifier (VNI) for up to 16 million segments and encapsulation over UDP port 4789. Following RFC 7348, the NVO3 Working Group and related IETF activities advanced VXLAN through targeted updates for enhanced functionality and deployment. RFC 8365, published in May 2018 by the BESS Working Group, integrated VXLAN with BGP Ethernet VPN (EVPN) as a network virtualization overlay solution, providing detailed guidance on deployment considerations such as ingress replication for BUM traffic to avoid dependency on multicast-enabled underlay infrastructure. Ongoing NVO3 efforts include the development of VXLAN-GPE, outlined in draft-ietf-nvo3-vxlan-gpe (version 13, last updated November 2023; expired without becoming an RFC as of 2025), which extends the VXLAN header to support additional next protocols such as IPv4/IPv6 and Ethernet, along with metadata options for policy enforcement.
Throughout the process, contributions from original VXLAN developers at VMware and Cisco, combined with broader industry input from vendors such as Arista and Broadcom, emphasized interoperability testing and refinements to ensure VXLAN's compatibility across hardware and software platforms. This collaborative approach, including design team reviews within NVO3, addressed gaps in scalability and extensibility identified during iterations.

Implementations

Commercial Solutions

Cisco Systems has been a pioneer in commercial VXLAN implementations, integrating the technology into its Nexus 9000 series switches since 2013 as part of the Application Centric Infrastructure (ACI) framework, with enhanced support for an Ethernet VPN (EVPN) control plane introduced in 2015 to enable scalable multi-tenant overlays. These solutions leverage hardware-accelerated ASICs in Nexus switches for high-performance encapsulation and decapsulation, supporting up to 16 million segments via 24-bit VNIs, and integrate seamlessly with the ACI SDN controller for policy-based automation and orchestration. Security enhancements include CloudSec for 256-bit AES-GCM encryption of inter-site VXLAN traffic in multi-site fabrics, ensuring confidentiality without impacting throughput. VMware's NSX platform, beginning with NSX for vSphere (NSX-V) introduced in 2013, employs VXLAN for hypervisor-based VTEPs on ESXi hosts, enabling software-defined overlays that abstract Layer 2 networks over Layer 3 underlays for virtualized data centers (its successor NSX-T defaults to the related Geneve encapsulation). NSX's integration with its central management plane and controllers allows dynamic VTEP provisioning and load balancing, with hardware offload support on compatible NICs for reduced CPU overhead in high-density environments. Unique to VMware's offering is tight coupling with vSphere for micro-segmentation and distributed firewalling, extending overlay tunnels across hybrid clouds while supporting encryption for secure overlays. Huawei's CloudFabric solution incorporates VXLAN within its data center fabric architecture, deploying VTEPs on CloudEngine switches since around 2016 to support lossless Ethernet for storage and AI workloads. The platform uses iMaster NCE-Fabric as an SDN controller for automated VXLAN provisioning, BGP-EVPN signaling, and intent-based networking, optimizing for ultra-low latency in large-scale deployments. Commercial distinctions include hardware acceleration via custom ASICs for VXLAN routing at wire speed and built-in encryption of VXLAN traffic in multi-tenant scenarios.
Arista Networks provides VXLAN support in its EOS-based switches, such as the 7050X and 7280R series, enabling overlay networks with an EVPN control plane for data center fabrics since the early 2010s. Arista's implementation emphasizes high-scale routing and multicast optimization for BUM traffic, integrating with its CloudVision platform for management and analytics in multi-tenant environments. Juniper Networks integrates VXLAN into its QFX and MX series devices, supporting both static and EVPN-based configurations for Layer 2 extensions over Layer 3 networks. Introduced in Junos OS releases around 2014, Juniper's solutions feature hardware-accelerated VTEPs and interoperability with SDN controllers like Contrail (now part of Juniper's Mist AI-driven portfolio), facilitating scalable virtualization in enterprise and service provider data centers. Adoption of commercial VXLAN solutions has surged in hyperscale environments, with technologies akin to VXLAN underpinning Amazon Web Services (AWS) Virtual Private Cloud (VPC) overlays for traffic mirroring and segmentation since 2019, facilitating scalable isolation across global regions. Post-2020 enhancements have focused on hybrid and multicloud integration, including Kubernetes environments, where vendors such as Cisco and VMware support VXLAN for scalable overlays in distributed deployments. For instance, Cisco's 2024 configuration guides detail VXLAN EVPN setups for multi-site fabrics, emphasizing resilient any-to-any connectivity with integrated analytics via Nexus Dashboard.

Open-Source Projects

The Linux kernel has provided native support for VXLAN since version 3.7, released in 2012, enabling the creation of VXLAN tunnel endpoints (VTEPs) directly within the operating system. This integration allows for efficient encapsulation of Ethernet frames over UDP/IP without requiring additional user-space software for basic functionality. Configuration of VTEPs and VXLAN interfaces is facilitated by tools in the iproute2 suite, such as the ip link add type vxlan command, which supports parameters for VNI assignment, remote endpoints, and learning modes. Recent enhancements in kernel versions 6.x, including improved support for EVPN integration through extended attributes, have optimized VXLAN handling for dynamic control planes in large-scale deployments. Open vSwitch (OVS), an open-source multilayer virtual switch designed for software-defined networking (SDN), incorporates robust VXLAN tunneling capabilities to extend Layer 2 domains across distributed environments. OVS supports VXLAN as a primary overlay protocol, allowing automated tunnel creation between hypervisors or hosts via OpenFlow controllers, which is essential for SDN architectures in virtualized data centers. Similarly, Free Range Routing (FRR), a suite of routing daemons, provides BGP-EVPN support for VXLAN, enabling MAC and IP learning, route advertisement, and multi-tenancy through standards-compliant EVPN Type-2 and Type-3 routes. Community-driven development of VXLAN has been advanced through contributions to the IETF, where the core protocol was standardized in RFC 7348, and via the Linux Foundation's networking projects, which foster interoperability and performance improvements. Testing frameworks like OFTest, originally developed for OpenFlow validation, have been adapted by the community to verify VXLAN behavior in OVS-based setups, ensuring compliance with encapsulation and forwarding requirements.
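The iproute2 workflow described above can be sketched with a minimal configuration example; the interface name, VNI, multicast group, and VTEP address below are illustrative, and the commands require root privileges:

```shell
# Create a VXLAN interface with VNI 100, using eth0 as the underlay uplink,
# a multicast group for BUM traffic, and the IANA-assigned port 4789.
ip link add vxlan100 type vxlan id 100 dev eth0 group 239.1.1.1 dstport 4789
ip link set vxlan100 up

# Unicast alternative: omit the multicast group and instead add remote VTEPs
# to the FDB for head-end replication (the all-zeros MAC is the flood entry).
bridge fdb append 00:00:00:00:00:00 dev vxlan100 dst 192.0.2.10
```

In EVPN deployments, a control-plane daemon such as FRR populates these FDB entries dynamically instead of the static `bridge fdb` commands shown here.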
Since 2020, VXLAN integration has expanded into container orchestration ecosystems, particularly through Container Network Interface (CNI) plugins such as Multus, which acts as a meta-plugin to attach multiple networks, including VXLAN overlays, to pods for hybrid cloud-native and virtualized workloads. This enables fine-grained control over pod networking, such as delegating VXLAN tunnels to secondary interfaces managed by plugins like OVS-CNI, supporting scalable multi-network deployments.

Standards and Specifications

Primary RFCs and Protocols

The primary specification for Virtual Extensible LAN (VXLAN) is defined in RFC 7348, published in August 2014, which outlines a framework for overlaying virtualized Layer 2 networks over Layer 3 infrastructure. This RFC specifies VXLAN encapsulation, where Ethernet frames are tunneled within UDP/IP packets, using a standardized destination UDP port of 4789 and a 24-bit VXLAN Network Identifier (VNI) to segment up to 16 million isolated networks. It emphasizes data-plane (flood-and-learn) operation for its simplicity and compatibility with existing hardware, while supporting both IPv4 and IPv6 as outer headers to enable deployment over diverse underlay networks. Related RFCs extend VXLAN's functionality through control-plane mechanisms and advanced features. RFC 7432, published in February 2015, introduces BGP MPLS-Based Ethernet VPN (EVPN), providing a standardized control plane for discovering and advertising MAC addresses and VNIs across provider edge devices, initially focused on MPLS but adaptable to VXLAN overlays. Building on this, RFC 8365 from May 2018 details EVPN as a Network Virtualization Overlay (NVO3) solution, explicitly integrating VXLAN for data plane encapsulation and using BGP to distribute reachability information without relying solely on data plane learning. VXLAN interacts with several protocols to form complete overlay networks. It integrates with BGP via EVPN for overlay control plane operations, enabling dynamic endpoint discovery and route advertisement, while the underlay relies on standard IP routing protocols. For handling broadcast, unknown unicast, and multicast (BUM) traffic in early deployments, VXLAN uses IP multicast groups mapped to VNIs, requiring underlay support from protocols like Protocol Independent Multicast (PIM) in sparse or source-specific modes. Subsequent RFCs address VXLAN's initial limitations, particularly its dependency on multicast for efficient BUM traffic distribution, which could strain non-multicast-enabled underlays.
RFC 8365 mitigates this by supporting ingress replication, in which the ingress VTEP unicasts replicated packets to the remote VTEPs listed in EVPN Inclusive Multicast Ethernet Tag (IMET) routes, alongside optional PIM-based multicast, thus enhancing scalability in unicast-only environments.
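The ingress-replication behavior can be sketched as follows; the flood list, VNI values, and send callback are illustrative stand-ins for state learned from EVPN IMET routes, not a real EVPN implementation:

```python
# Hypothetical sketch of BUM forwarding at an ingress VTEP under ingress
# (head-end) replication: instead of one multicast send, the VTEP unicasts
# a copy of the encapsulated frame to every remote VTEP known for the VNI.

# Illustrative EVPN-learned flood list: VNI -> remote VTEP IPs (from IMET routes).
flood_list = {
    10100: ["192.0.2.11", "192.0.2.12", "192.0.2.13"],
}

def replicate_bum(vni: int, frame: bytes, send) -> int:
    """Send one unicast copy of the frame per remote VTEP; return the copy count."""
    remotes = flood_list.get(vni, [])
    for vtep_ip in remotes:
        send(vtep_ip, frame)  # encapsulate-and-unicast, abstracted away here
    return len(remotes)

sent = []
copies = replicate_bum(10100, b"\xff" * 64, lambda ip, f: sent.append(ip))
print(copies)  # 3
```

The trade-off noted in the text is visible here: bandwidth at the ingress VTEP grows linearly with the number of remote VTEPs, whereas underlay multicast would send a single copy.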

Interoperability and Extensions

One key interoperability challenge in VXLAN deployments involves VTEP discovery, which can be achieved dynamically through protocols like BGP-EVPN for scalable, protocol-based remote VTEP learning, or via static configuration for simpler environments without a control plane. In multi-vendor setups, such as between Cisco NX-OS and Juniper Junos OS, BGP-EVPN configurations may lead to route invalidation if next-hop addresses differ (Junos uses the VTEP source IP, while NX-OS expects the physical interface IP), requiring policy adjustments like setting the next-hop to the VTEP IP on Junos with vpn-apply-export. Another common issue is handling MTU mismatches in VXLAN tunnels, where the overhead from encapsulation (typically 50 bytes) can fragment packets if underlay MTUs are not adjusted to at least 1550 bytes, necessitating Path MTU Discovery (PMTUD) enablement via configurations like ip unreachables on uplinks. VXLAN extensions enhance its flexibility beyond the core encapsulation defined in RFC 7348. An IETF draft for VXLAN-GPE introduces a "Next Protocol" field to support diverse payloads like IPv4, IPv6, Ethernet, or the Network Service Header (NSH), along with bits for OAM signaling and ingress-replicated BUM traffic, enabling multi-protocol overlays and service chaining in SDN environments. Integration with SRv6 for segment routing allows seamless handoff at data center interconnects, where EVPN routes are imported into VRFs and mapped to SRv6 SIDs via BGP address families, supporting traffic engineering across VXLAN fabrics and SRv6 cores without dedicated gateway devices. Testing and certification efforts ensure VXLAN reliability across vendors, with IETF-backed events like those organized by EANTC demonstrating multi-vendor interoperability. Multi-vendor interoperability for EVPN-VXLAN has been validated at EANTC events, including demonstrations by multiple vendors in 2023 and 2025. As of 2025, VXLAN's future directions emphasize alignment with 5G and edge computing standards to support low-latency, sliced networks.
Enhancements focus on programmable data planes for edge data centers, where VXLAN enables network slice isolation via NFV and edge orchestration tools, reducing latency for applications like AI-driven services.

Alternative Technologies

Limitations of Traditional VLANs

Traditional Virtual Local Area Networks (VLANs), defined by the IEEE 802.1Q standard, utilize a 12-bit VLAN Identifier (VID) field in Ethernet frame tags to segment traffic, enabling up to 4094 unique VLANs (values 1 to 4094, with 0 reserved for priority-tagged frames and 4095 for implementation-specific use). Each VLAN functions as a separate broadcast domain, logically segmenting the network to contain broadcast traffic within defined groups of devices. However, in large-scale networks this structure leads to scalability challenges, as expanding broadcast domains beyond recommended sizes (such as exceeding 1024 hosts per domain) amplifies broadcast storms and degrades performance due to excessive flooding of unknown unicast, broadcast, and multicast frames across all ports in the domain. A primary limitation arises in virtualized environments, where rapid proliferation of virtual machines (VMs), often termed VM or VLAN sprawl, quickly exhausts the 4094 VLAN limit, resulting in increased broadcast traffic and management complexity across data centers hosting thousands of workloads. Additionally, traditional VLANs are inherently Layer 2 constructs confined to a single switching domain, making it difficult to extend them across Layer 3 boundaries without implementing inter-VLAN routing on routers or multilayer switches, which introduces configuration overhead, potential single points of failure at routers or Layer 3 switches, and scalability issues in multi-site or routed topologies. In data centers characterized by high east-west traffic (server-to-server communications within the same facility), VLANs exacerbate inefficiencies through frequent flooding of frames to all ports in a VLAN when destination MAC addresses are unknown, consuming significant bandwidth and straining switch resources. The reliance on Spanning Tree Protocol (STP) to prevent loops further compounds this, as STP's convergence times (up to 50 seconds in basic implementations) and per-VLAN instance overhead limit fault isolation and introduce delays unsuitable for dynamic, high-volume environments, often leading to suboptimal topologies and increased latency during failures.
Prior to 2010, network operators heavily depended on VLANs for segmentation, prompting the development of proprietary and standardized extensions to mitigate the 4094-tag constraint, such as IEEE 802.1ad (Provider Bridges), ratified in 2005, which introduced double tagging (Q-in-Q) to stack an additional service tag atop the customer VLAN tag, effectively expanding the addressable space for service providers while preserving backward compatibility with 802.1Q.
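The 12-bit VID ceiling discussed above comes from the layout of the 16-bit 802.1Q Tag Control Information (TCI) field, which a short bit-manipulation sketch makes concrete:

```python
# The 16-bit 802.1Q TCI: 3-bit priority (PCP), 1-bit drop eligibility (DEI),
# and the 12-bit VLAN ID (VID) that yields the 4094-VLAN ceiling.
def parse_tci(tci: int):
    pcp = (tci >> 13) & 0x7   # priority code point
    dei = (tci >> 12) & 0x1   # drop eligible indicator
    vid = tci & 0xFFF         # 12-bit VLAN identifier
    return pcp, dei, vid

# VID 100 carried at priority 5:
print(parse_tci((5 << 13) | 100))  # (5, 0, 100)

# Only 2**12 - 2 = 4094 VIDs are usable, since 0 and 4095 are reserved.
print(2**12 - 2)  # 4094
```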

Other Network Virtualization Methods

Network Virtualization using Generic Routing Encapsulation (NVGRE) is a tunneling protocol primarily associated with Microsoft environments, such as Hyper-V, that leverages Generic Routing Encapsulation (GRE) over IP to enable multi-tenant network virtualization in data centers. It incorporates a 24-bit Virtual Subnet ID (VSID) within the GRE key extension to segment virtual networks, allowing up to 16 million unique identifiers for scalability across Layer 3 underlays. While NVGRE provides lower encapsulation header overhead compared to UDP-based alternatives (typically around 28 bytes for the IP and GRE components), it incurs higher processing demands on some NICs due to limited support for GRE offloading, and it lacks the entropy provided by UDP source ports, reducing flexibility for equal-cost multipath (ECMP) routing and load balancing. Generic Network Virtualization Encapsulation (Geneve), standardized by the IETF in RFC 8926, serves as a unified and extensible alternative for overlay networks, using UDP over IPv4 or IPv6 with a compact 8-byte base header on port 6081. Its key innovation lies in the variable-length Type-Length-Value (TLV) options field following the base header, which supports the insertion of arbitrary metadata (up to a 260-byte total header size: the 8-byte base plus up to 252 bytes of options) for advanced features like service chaining or security policies without protocol redesign. This extensibility positions Geneve as more future-proof than VXLAN's rigid 8-byte fixed header, enabling seamless adaptation to evolving control planes and hardware accelerations while maintaining compatibility with existing fabrics through source port entropy for ECMP. Geneve's design also facilitates interoperability among diverse virtualization technologies by accommodating capabilities from predecessors like VXLAN and NVGRE. Stateless Transport Tunneling (STT), originally proposed by Nicira (later acquired by VMware), represents an early approach to network virtualization outlined in an expired IETF Internet-Draft from 2013.
STT encapsulates Ethernet frames using a TCP-like header structure to exploit NIC offloads such as TCP Segmentation Offload (TSO) and Large Receive Offload (LRO), aiming for high-throughput performance in virtualized environments with minimal state maintenance at endpoints. It features a 64-bit Context ID for network identification and supports larger segment sizes up to 64 KB, but it has achieved limited adoption due to the lack of standardization and the rise of more versatile protocols. In contemporary NSX deployments, STT has been overshadowed by Geneve, rendering it effectively deprecated for new implementations. The following table summarizes key differences among these methods and VXLAN in terms of design trade-offs:
Protocol | Encapsulation overhead (approx. tunnel header, bytes) | Scalability (network ID bits) | Native control plane support
VXLAN    | 36 (IP + UDP + VXLAN header)                          | 24 (16M segments)             | EVPN (RFC 7432, RFC 8365)
NVGRE    | 28 (IP + GRE + key)                                   | 24 (16M segments)             | None; relies on external mechanisms
Geneve   | 36+ (IP + UDP + base header + TLV options; min. 36)   | 24 (VNI) + extensible options | EVPN compatible
STT      | 46 (IP + TCP-like + STT header)                       | 64 (Context ID)               | None

Deployment and Use Cases

Common Applications

VXLAN is widely deployed in data center virtualization to extend Layer 2 segments across Layer 3 boundaries, facilitating seamless virtual machine (VM) migration in private cloud environments such as VMware NSX. By encapsulating Ethernet frames within UDP packets, VXLAN enables vMotion operations between VXLAN-backed logical switches in NSX for vSphere and overlay segments in NSX-T, allowing VMs to move across hosts without reconfiguration. This approach supports dynamic workload mobility in scalable fabrics, as demonstrated in Cisco FlexPod deployments with vSphere 7.0, where VXLAN BGP EVPN provides Layer 2 extension for vMotion traffic over 100 GbE interconnects. In multi-tenant environments, VXLAN provides robust network isolation for public clouds built on platforms like OpenStack, enabling isolated tenant networks while supporting network functions virtualization (NFV). OpenStack uses VXLAN as an overlay for tenant-specific Layer 2 domains, with configurable VNI ranges (e.g., 1001–2000) to connect instances across regions without physical VLAN limitations. In NFV contexts, VXLAN-backed logical switches in VMware Integrated OpenStack ensure secure isolation between Virtual Network Functions (VNFs) within tenant virtual data centers, complemented by Edge Services Gateways for North-South connectivity and isolation. ETSI NFV specifications recognize VXLAN's role in NFV for intra-site multi-tenancy, leveraging 24-bit VNIs to support up to 16 million segments over Layer 3 underlays. VXLAN facilitates hybrid cloud connectivity by establishing secure tunnels that bridge on-premises infrastructure with public clouds, maintaining consistent Layer 2 domains for workload portability. In Cisco's Hybrid Cloud Networking Solution, VXLAN overlays extend on-premises EVPN fabrics to cloud providers via Nexus Dashboard Orchestrator, enabling unified policy enforcement and inter-site Layer 2 extension without proprietary gateways. This tunneling mechanism supports seamless integration in environments like Red Hat OpenStack with OpenShift, where VXLAN-based overlays connect private clusters to public resources over BGP-EVPN control planes.
Post-2020, VXLAN has seen adoption in emerging applications such as 5G transport networks and IoT edge segmentation, particularly among hyperscalers and telcos. In 5G transport networks, Huawei's MEC IP network designs incorporate VXLAN for Multi-access Edge Computing (MEC), providing low-latency overlays between RAN and core elements to handle ultra-reliable traffic. At the IoT edge, Juniper's EVPN-VXLAN architecture segments device traffic in distributed environments, isolating sensors and gateways to enhance security and scalability in industrial deployments. Hyperscalers such as AWS leverage VXLAN extensions in hybrid setups, including Cisco ACI integrations, to unify on-premises and cloud segmentation for large-scale workloads.

Challenges and Best Practices

One significant challenge in VXLAN deployment is the dependency on IP multicast in the underlay for handling broadcast, unknown unicast, and multicast (BUM) traffic, which can cause problems in environments lacking robust multicast support, such as public clouds or non-multicast-enabled fabrics. This reliance floods traffic to all VTEPs in a VNI, potentially overwhelming resources unless optimized. To mitigate this, head-end replication (HER) replicates BUM packets as multiple unicast streams at the ingress VTEP, eliminating the need for underlay multicast, while EVPN provides a control plane that distributes MAC/IP information via BGP, enabling ARP suppression and targeted forwarding.

Another operational difficulty arises from MTU-related fragmentation: VXLAN encapsulation adds approximately 50 bytes to each frame (the outer Ethernet, IP, UDP, and VXLAN headers), necessitating an underlay MTU of at least 1550 bytes to avoid packet drops or fragmentation on default 1500-byte links. In heterogeneous networks, mismatched MTUs between data centers and WAN links can cause issues, particularly for larger payloads such as jumbo frames in high-throughput applications. Limited tunnel visibility further complicates deployments, as encapsulated traffic obscures endpoint-to-endpoint paths, making it hard to diagnose issues like asymmetric routing or VTEP failures without specialized tools.

Security concerns in VXLAN stem from the exposure of overlay tunnels to underlay network attacks, such as unauthorized access or eavesdropping, since VXLAN packets traverse the fabric without inherent encryption or authentication. This vulnerability is heightened in multi-tenant or public underlay scenarios, where adversaries could inject malformed packets or exploit the well-known UDP port 4789. To address these risks, IPsec overlays are recommended to encrypt and authenticate VXLAN traffic, providing confidentiality and integrity over untrusted networks while supporting hardware-accelerated performance on modern switches.
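The head-end replication and MTU arithmetic above can be sketched as follows. The function and the placeholder outer-header bytes are hypothetical, but the 50-byte overhead and the 1550-byte underlay MTU follow the figures in this section:

```python
# Overhead per RFC 7348 framing: outer Ethernet (14 B) + IPv4 (20 B)
# + UDP (8 B) + VXLAN (8 B) = 50 B added to every encapsulated frame.
OUTER_OVERHEAD = 14 + 20 + 8 + 8

def her_replicate(bum_frame: bytes, remote_vteps: list[str],
                  underlay_mtu: int = 1550) -> dict[str, bytes]:
    """Head-end replication sketch: the ingress VTEP unicasts one
    encapsulated copy of a BUM frame to every remote VTEP in the VNI,
    avoiding any underlay multicast. Frames that would exceed the
    underlay MTU after encapsulation are rejected up front."""
    if len(bum_frame) + OUTER_OVERHEAD > underlay_mtu:
        raise ValueError("frame would be dropped or fragmented; "
                         "raise the underlay MTU")
    # A real VTEP prepends full outer headers; a marker stands in here.
    return {vtep: b"OUTER50" + bum_frame for vtep in remote_vteps}

# A maximum-size 1500-byte inner frame just fits a 1550-byte underlay.
copies = her_replicate(b"\x00" * 1500, ["10.0.0.2", "10.0.0.3"])
assert len(copies) == 2
```

EVPN improves on plain HER by letting the ingress VTEP learn remote MAC/IP bindings via BGP, so much of this flooded traffic (e.g., ARP) can be suppressed entirely.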
Best practices for VXLAN operations emphasize adopting EVPN as the control plane to decouple MAC learning from data-plane flooding, enabling scalable, multicast-free BUM handling and integrated Layer 2/3 services. For monitoring, flow telemetry such as NetFlow should be enabled on VTEPs to export flow statistics, revealing overlay traffic patterns and anomalies despite encapsulation. Ensuring underlay QoS is critical for low-latency performance, with policies applied to prioritize VXLAN traffic (e.g., marking outer headers with DSCP values) to prevent congestion-induced delays in latency-sensitive applications. Additionally, configuring consistent MTU sizes across the fabric and using overlay-specific diagnostics, such as ping and traceroute over VXLAN tunnels, aids proactive issue resolution.

Post-2020 advancements have focused on automation to handle large-scale VXLAN configurations, with Ansible collections like cisco.nac_dc_vxlan enabling infrastructure-as-code (IaC) workflows that generate and deploy EVPN fabrics from data models, reducing manual errors in multi-site environments. Integration with observability platforms such as Prometheus has emerged as a key practice, using pull-based metrics collection to monitor VXLAN endpoints—scraping data from VTEPs and exporters to feed visualization dashboards—thereby supporting dynamic alerting in distributed setups.
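A minimal sketch of the pull-based monitoring practice above: a hypothetical per-VNI byte counter rendered in the Prometheus text exposition format, ready to be scraped. The metric name and counter source are invented for illustration, not taken from any vendor exporter:

```python
def render_vtep_metrics(vni_tx_bytes: dict[int, int]) -> str:
    """Render per-VNI transmit counters in Prometheus text exposition
    format, labelling each sample with its VNI so dashboards can slice
    overlay traffic despite the encapsulation hiding it on the wire."""
    lines = [
        "# HELP vxlan_vni_tx_bytes_total Bytes transmitted per VNI",
        "# TYPE vxlan_vni_tx_bytes_total counter",
    ]
    for vni, count in sorted(vni_tx_bytes.items()):
        lines.append(f'vxlan_vni_tx_bytes_total{{vni="{vni}"}} {count}')
    return "\n".join(lines) + "\n"

text = render_vtep_metrics({1001: 123456, 1002: 789})
assert 'vxlan_vni_tx_bytes_total{vni="1001"} 123456' in text
```

In practice an exporter would serve this text over HTTP for Prometheus to scrape on an interval, with alerting rules defined on the resulting time series.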
