
Vector Packet Processing

Vector Packet Processing (VPP) is an open-source, extensible framework that provides high-performance switch and router functionality by processing packets in user space on commodity CPUs, leveraging a vector-based approach to handle multiple packets simultaneously rather than one at a time as in traditional scalar processing. This method reduces instruction cache thrashing and memory read stalls while improving overall CPU efficiency, with the per-packet cost decreasing as vector sizes increase. Developed originally by Cisco as a production-grade technology since 2002 and now maintained under the FD.io project, VPP has been deployed in commercial products generating over $1 billion in revenue and supports a wide range of networking protocols including IPv4, IPv6, MPLS, VXLAN, and segment routing.

At its core, VPP operates through a modular graph architecture composed of pluggable nodes, where each node processes a vector of packet indices, enabling efficient data plane operations across layers 2 through 4 of the OSI model. This design allows for seamless integration of plugins to extend functionality, such as hardware acceleration or custom graph rearrangements, without requiring kernel modifications, and it runs on multiple architectures including x86, ARM, and PowerPC in environments such as bare metal, virtual machines, or containers. Independent benchmarks demonstrate VPP's superior throughput, achieving over 14 million packets per second (Mpps) on a single core for IPv4/IPv6 forwarding and exceeding 100 Gbps full-duplex line rate, often outperforming kernel-based networking stacks by two orders of magnitude.

VPP's versatility makes it suitable for diverse applications, including virtual switches, routers, gateways, firewalls, and load balancers, with native support for integration into cloud-native ecosystems such as Kubernetes and OpenStack. Its emphasis on modularity, low latency, and stability positions it as a foundational component for high-speed networking in data centers, edge computing, and service provider environments.

Overview

Definition and Purpose

Vector Packet Processing (VPP) is an extensible, open-source network stack that provides layer 2-4 functionality, enabling the development of high-performance switches, routers, and virtualized network elements on commodity hardware. It operates in user space, bypassing the traditional kernel networking stack to deliver scalable packet processing for diverse applications, including virtual switches, routers, gateways, firewalls, and load balancers. As part of the FD.io project, VPP supports multi-platform deployment across architectures such as x86, ARM, and PowerPC, making it suitable for modern networking environments.

The primary purpose of VPP is to facilitate high-performance and scalable packet processing by avoiding the overhead associated with kernel-based networking stacks, which often limit throughput due to context switching and interrupt handling. This user-space approach is particularly valuable for network functions virtualization (NFV) and software-defined networking (SDN) workloads, where rapid packet forwarding and low-latency operations are essential to support virtualized infrastructures and programmable networks. By running on commodity processors, VPP achieves up to 100 times greater packet processing throughput compared to traditional kernel networking, enabling line-rate forwarding on high-speed interfaces.

At its core, VPP employs a vector processing model that handles packets in batches, known as vectors, rather than processing them individually as in scalar approaches. These vectors, which can contain up to 256 packets, are collected from network device receive rings and routed through a directed graph of processing nodes, allowing multiple packets to share computational resources efficiently. This batching improves efficiency by warming the instruction cache (I-cache) with the first packet in the vector, enabling subsequent packets to benefit from cache hits and reducing per-packet overhead, including I-cache miss stalls, by up to two orders of magnitude.
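The contrast between scalar and vector dispatch can be sketched in plain C. The example below is illustrative only and is not VPP source code; parse_headers, lookup_route, and rewrite_and_tx are hypothetical stand-ins for graph-node stages. It shows why running each stage across a whole batch lets the first packet warm the instruction cache for the rest.

```c
/* Illustrative-only sketch (not VPP source): contrasts scalar, per-packet
 * dispatch with vector dispatch over a batch of packets. The helper
 * functions are hypothetical stand-ins for graph-node stages. */
#include <stddef.h>
#include <stdint.h>

#define VECTOR_SIZE 256

typedef struct { uint8_t data[2048]; uint16_t len; } packet_t;

static void parse_headers(packet_t *p)  { (void)p; /* e.g. Ethernet/IP parse */ }
static void lookup_route(packet_t *p)   { (void)p; /* e.g. FIB lookup        */ }
static void rewrite_and_tx(packet_t *p) { (void)p; /* e.g. header rewrite    */ }

/* Scalar model: each packet walks the whole pipeline alone, so every stage's
 * instructions are re-fetched per packet and the I-cache is repeatedly evicted. */
void scalar_forward(packet_t *pkts, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        parse_headers(&pkts[i]);
        lookup_route(&pkts[i]);
        rewrite_and_tx(&pkts[i]);
    }
}

/* Vector model: each stage runs over the whole batch before moving on, so the
 * first packet warms the I-cache and the remaining packets hit it. */
void vector_forward(packet_t *pkts, size_t n)
{
    for (size_t i = 0; i < n; i++) parse_headers(&pkts[i]);
    for (size_t i = 0; i < n; i++) lookup_route(&pkts[i]);
    for (size_t i = 0; i < n; i++) rewrite_and_tx(&pkts[i]);
}
```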

Key Characteristics

Vector Packet Processing (VPP) is engineered as a highly modular framework that enables the construction of custom packet processing graphs through a plugin architecture. This architecture treats plugins as first-class components, allowing developers to extend functionality by integrating new nodes while reusing existing ones to compose bespoke forwarding behaviors. The core consists of a directed graph of forwarding nodes supported by an extensible infrastructure, which facilitates the separation of packet processing logic from the underlying hardware, promoting flexibility in deploying virtual switches, routers, and network functions virtualization (NFV) elements.

VPP demonstrates strong scalability across diverse hardware environments, supporting multiple processor architectures such as x86, ARM, and PowerPC, which ensures portability in both commodity servers and specialized networking appliances. It efficiently leverages multi-core systems, achieving linear throughput scaling with additional cores—for instance, delivering up to 948 Gbps aggregate performance on an Intel Xeon Platinum 8168 processor with 512-byte packets (as demonstrated in 2017)—by distributing packet processing workloads across threads without significant contention. This multi-platform compatibility, combined with integration capabilities like DPDK plugins, positions VPP for deployment in cloud-native and edge scenarios requiring high aggregate bandwidth.

A defining trait of VPP is its deterministic performance profile, achieved through execution in user space and the use of poll-mode drivers that bypass interrupts for direct hardware access. This approach minimizes latency variations, ensuring predictable packet handling even under high loads, with per-core forwarding rates exceeding 50 Gbps for internet mix (IMIX) traffic on Intel Xeon E5-2667 v4 processors in published benchmarks. By avoiding the overhead of context switches and interrupt-driven I/O common in kernel-based stacks, VPP maintains consistent low-jitter processing, which is critical for applications like 5G user plane functions.

VPP employs an event-driven, non-blocking I/O model that sustains continuous packet flows by actively polling receive (RX) queues and processing packets in vector batches, eliminating the delays associated with traditional interrupt-based mechanisms. This polling strategy, integrated with the vector processing paradigm, optimizes cache utilization and SIMD instructions for efficient bulk operations, contributing to its high-throughput capabilities without blocking on asynchronous events.

As an open-source project governed by the FD.io collaboration, VPP benefits from contributions across multiple vendors, including Cisco, Intel, and Ericsson, fostering a robust ecosystem of shared innovations and testing. This community-driven development model, hosted under the Linux Foundation, ensures ongoing enhancements while maintaining compatibility with frameworks such as the Open Network Automation Platform (ONAP) and NFV standards. As of November 2025, VPP continues active development, with the latest release candidate, v26.02, incorporating new features and performance optimizations.

History

Origins at Cisco

Vector Packet Processing (VPP) originated within Cisco Systems in the early 2000s, initiated in 2002 as a high-performance, software-based approach to packet processing, with the foundational patent work beginning around 2004. The technology was developed to enable efficient processing of network traffic on commodity hardware, addressing the limitations of scalar packet processing by handling multiple packets simultaneously in vectors. This innovation stemmed from Cisco's need for scalable data plane capabilities in its networking products, evolving from earlier generations of proprietary packet processing engines that integrated hardware and software stacks for optimized throughput.

Central to VPP's development was Cisco Fellow David Barach, recognized as the primary inventor of the vector packet processing framework. Barach's contributions built on his expertise in high-speed networking data planes, leading to the filing of US Patent 7,961,636 in 2004, which describes vectorized software techniques for concurrent processing of packet vectors through a directed graph of nodes. The patent, assigned to Cisco Technology, Inc., and issued in 2011, outlined methods to minimize cache misses by loading instructions once per vector and adaptively controlling vector sizes to meet low-latency targets, such as 50 microseconds.

Over more than two decades, this technology has undergone continuous evolution within Cisco, powering the data planes of various products and contributing to over $1 billion in shipped revenue. Initially deployed proprietarily in Cisco's high-end routers and switches, VPP enabled line-rate performance for Ethernet and IP/Multiprotocol Label Switching (MPLS) services, sustaining up to 14.88 million packets per second on 10 Gbps links in software environments. Its principles were integrated into core forwarding engines of Cisco's carrier-grade routers, such as the ASR series, to achieve wire-speed processing without dedicated hardware acceleration. This proprietary implementation focused on modularity and extensibility, allowing seamless integration with Cisco's broader ecosystem before the technology's open-sourcing in 2016.

Open-Sourcing and FD.io

In 2016, Cisco announced the open-sourcing of its proprietary Vector Packet Processing (VPP) technology by donating the core codebase to the Linux Foundation's newly launched Fast Data Input/Output (FD.io) project on February 11, aimed at accelerating high-performance networking software. This transition marked VPP's shift from a closed-source asset to a collaborative open-source project, enabling broader industry adoption for scalable packet processing in virtualized environments.

Under FD.io's governance within the Linux Foundation, VPP has benefited from multi-vendor contributions, with key supporters including Cisco, Intel, Red Hat, Ericsson, 6WIND, Huawei, AT&T, Comcast, Cavium Networks, ZTE, and Inocybe, fostering a diverse ecosystem for ongoing enhancements. The project's structure promotes modular development, allowing participants to contribute plugins, drivers, and optimizations while maintaining VPP as the central data plane component.

Key milestones include the initial open-source release, VPP 16.06, in June 2016, which established the foundational vector processing stack. By 2018, VPP achieved significant integrations, such as with OpenStack Neutron for virtual networking and Kubernetes for containerized deployments, demonstrated at events like the FD.io Mini-Summit at KubeCon Europe. The project continues with biannual releases following a year.month naming convention, including VPP 25.06 in June 2025, which incorporated advancements in multi-architecture support and security features.

VPP's growth under FD.io has attracted numerous contributors, driving its adoption in cloud and service provider infrastructures for high-throughput applications such as NFV and service function chaining. FD.io has played a pivotal role in establishing VPP as a universal data plane for network functions virtualization (NFV), providing a performant, hardware-agnostic foundation that decouples control and data planes across diverse NFV environments.

Architecture

Vector Processing Model

Vector Packet Processing (VPP) employs a batching mechanism where packets are grouped into vectors, typically comprising up to 256 packets, which are processed as a single unit to minimize per-packet overhead such as function calls and context switches. This approach contrasts with scalar processing, where each packet is handled individually, leading to inefficiencies like repeated instruction fetches and deeper call stacks. By assembling these vectors from receive (RX) rings on network interfaces, VPP enables bulk operations that amortize fixed costs across multiple packets, enhancing overall throughput.

In the processing pipeline, incoming vectors are classified based on packet attributes and dispatched en masse to appropriate handler nodes, allowing for parallel execution of operations on the batch. VPP leverages single instruction, multiple data (SIMD) instructions, such as SSE and AVX, to perform computations across packet fields simultaneously, further optimizing parallel workloads like checksum calculations or header parsing. This bulk dispatching reduces context switches between packets and improves cache utilization by preserving data and instruction locality, as the same code paths are executed repeatedly on the vector rather than scattering accesses. Compared to scalar methods, vector processing can achieve significantly lower cycles per packet—often under 200 cycles for basic forwarding—due to these amortizations.

The efficiency of this model can be illustrated by a simplified throughput approximation, where the processing rate (in packets per second) is given by:

\[
\text{Processing rate} \approx \frac{\text{vector size} \times \text{CPU frequency}}{\text{cycles per vector}}
\]

This formulation highlights the batching benefits: larger vector sizes scale throughput by distributing the cycles required for vector-level operations across more packets, assuming a roughly constant cycle cost per vector. In practice, VPP dynamically adjusts vector sizes based on input rates to balance latency and CPU utilization.

For exceptional cases, such as packets requiring special handling (e.g., errors or unsupported features), individual packets are diverted from the vector using the VLIB punt infrastructure. These packets are tagged with a reason code during node processing and routed to dedicated sink nodes or the punt path, while the remaining vector continues uninterrupted to maintain bulk efficiency. This selective diversion ensures that anomalies do not degrade the performance of the majority of traffic.
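A minimal worked example of this approximation, using illustrative rather than benchmarked numbers (a 256-packet vector on a 2.5 GHz core at roughly 100 cycles per packet), shows that the expression reduces to CPU frequency divided by cycles per packet:

```c
/* Worked example of the throughput approximation above, with illustrative
 * (not benchmarked) numbers. */
#include <stdio.h>

int main(void)
{
    double vector_size       = 256.0;     /* packets per vector      */
    double cpu_frequency_hz  = 2.5e9;     /* 2.5 GHz core clock      */
    double cycles_per_vector = 25600.0;   /* ~100 cycles per packet  */

    /* rate (pps) ~= vector_size * frequency / cycles_per_vector */
    double rate_pps = vector_size * cpu_frequency_hz / cycles_per_vector;

    printf("Approximate forwarding rate: %.1f Mpps per core\n", rate_pps / 1e6);
    return 0;
}
```

With these inputs the sketch prints roughly 25 Mpps per core; real rates depend on the actual per-packet cycle cost of the graph nodes traversed.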

Node Graph and Plugins

The core of Vector Packet Processing (VPP) lies in its modular data plane, structured as a directed acyclic graph (DAG) of nodes where packets are processed in vectors through a series of specialized functions. Each node in the graph represents a discrete operation, such as classification, header rewriting, or forwarding, allowing packets to traverse the structure based on decisions encoded in "next" indices that route vectors to subsequent nodes. This graph-based approach enables efficient, high-throughput processing by dispatching vectors of packets (typically 128 to 256 packets) through the nodes, with the dispatcher subdividing vectors as needed to maintain stable frame sizes and ensure complete processing before the next dispatch cycle.

VPP defines several node types to control dispatch behavior and integration within the graph. Input nodes (VLIB_NODE_TYPE_INPUT) handle hardware-specific ingress from interfaces, generating initial work vectors, while pre-input nodes (VLIB_NODE_TYPE_PRE_INPUT) execute preliminary tasks before other processing. Internal nodes (VLIB_NODE_TYPE_INTERNAL) perform core packet manipulations and are invoked only when pending frames are scheduled, facilitating conditional routing via dispatch arcs. Process nodes (VLIB_NODE_TYPE_PROCESS) support lightweight cooperative multitasking for control-plane-like operations that suspend after brief execution, ensuring the graph remains focused on data-plane efficiency. Output nodes mirror input nodes for egress, completing the traversal. Within nodes, vector batching allows simultaneous processing of multiple packets to leverage SIMD instructions, as detailed in the vector processing model.

The plugin architecture enhances VPP's extensibility by allowing dynamic loading of shared libraries at runtime, without recompiling the core engine. Plugins register new graph nodes via a vlib_plugin_registration structure, which VPP discovers by scanning a designated directory for matching libraries using dlopen and dlsym for verification. This enables the addition of features such as access control lists (ACLs) or NAT modules as first-class citizens integrated seamlessly into the graph.

Plugins interact with the control plane through the binary API (VPP API), a shared-memory message-passing interface that supports request-reply semantics for configuration, table programming, and graph modifications by external control planes. Graph configurations are serialized for reproducibility, with the data plane node graph and its arcs captured via dedicated API messages that can be uploaded and stored in structured formats. VPP's API definitions are compiled into machine-readable representations, facilitating the loading and application of configurations to reconstruct the graph state across restarts or deployments. This supports programmatic management, ensuring consistent forwarding paths in diverse environments.
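A minimal sketch of how a plugin and a graph node are typically declared, modeled on the public VPP sample plugin, is shown below. The plugin name "sample", the empty node body, and the dispatch arc to "ip4-lookup" are illustrative assumptions rather than a complete, buildable plugin.

```c
/* Sketch based on the public VPP sample plugin; illustrative, not a complete
 * plugin. Requires the VPP development headers to compile. */
#include <vlib/vlib.h>
#include <vnet/plugin/plugin.h>
#include <vpp/app/version.h>

/* Registers this shared library as a plugin discovered by VPP at startup. */
VLIB_PLUGIN_REGISTER () = {
    .version = VPP_BUILD_VER,
    .description = "Sample packet-processing plugin (illustrative)",
};

/* Per-frame work function: receives a vector of packet buffer indices. */
static uword
sample_node_fn (vlib_main_t *vm, vlib_node_runtime_t *node, vlib_frame_t *frame)
{
    /* A real node would iterate the buffer indices in 'frame', rewrite
     * headers, and enqueue each packet to one of the registered next nodes. */
    return frame->n_vectors;
}

/* Registers an internal graph node and its dispatch arc(s). */
VLIB_REGISTER_NODE (sample_node) = {
    .function = sample_node_fn,
    .name = "sample",
    .vector_size = sizeof (u32),
    .type = VLIB_NODE_TYPE_INTERNAL,
    .n_next_nodes = 1,
    .next_nodes = { [0] = "ip4-lookup" },
};
```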

Implementation

Integration with DPDK

Vector Packet Processing (VPP) integrates with the Data Plane Development Kit (DPDK) primarily through its poll-mode drivers (PMDs), which provide direct user-space access to network interface controllers (NICs) and bypass the kernel networking stack to enable zero-copy input/output operations. This approach minimizes overhead from context switches and system calls, allowing VPP to achieve line-rate packet processing on commodity hardware. DPDK's PMDs, such as those for i40e and ixgbe devices, are loaded as plugins within VPP, handling low-level device initialization and queue management.

At the core of this integration, VPP's input nodes utilize DPDK libraries to poll hardware queues and retrieve batches of packets directly into buffer structures for vector processing. These nodes operate in a continuous polling loop, invoking the DPDK rte_eth_rx_burst function to assemble packet vectors from multiple descriptors in a single call, thereby feeding them into VPP's node graph for subsequent operations. This mechanism ties directly to VPP's vector processing model by ensuring that incoming traffic is handled in bulk, optimizing cache utilization and reducing per-packet overhead.

Configuration of VPP with DPDK emphasizes system tuning for performance, including the allocation of hugepages to support efficient memory mapping for packet buffers and mbuf pools. For example, hugepages are typically set via boot parameters such as hugepagesz=1GB hugepages=64 on the kernel command line, while disabling transparent hugepages prevents fragmentation. NUMA affinity is achieved by pinning VPP worker threads to specific cores and NUMA nodes using tools like libvirt or numactl, ensuring local memory access and avoiding cross-node traffic. Multi-queue NICs are configured through DPDK's device parameters, such as specifying num-rx-queues and num-tx-queues in VPP's startup configuration to enable receive side scaling (RSS) and distribute traffic across multiple cores. The foundational integration began with VPP's initial open-source release, version 16.06 in June 2016, which was built on DPDK 16.04 and included a custom patchset for compatibility and enhancements.

For handling multiple NICs in virtualized environments, VPP supports Single Root I/O Virtualization (SR-IOV) via DPDK's rte_eth_dev abstraction, treating virtual functions (VFs) as independent Ethernet ports. This allows VPP to manage VFs with dedicated queues—for instance, configuring 2 RX and 2 TX queues per VF on 82599-based devices—enabling direct assignment to virtual machines while maintaining high throughput on the physical function.
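The receive-side pattern described above can be sketched as a standalone DPDK-style poll loop. This is not VPP source; EAL initialization and port/queue setup are omitted, and process_vector is a hypothetical stand-in for handing the batch to the processing graph.

```c
/* Sketch (not VPP source) of the DPDK poll-mode receive pattern that VPP's
 * input node follows: a busy loop pulling bursts of packets from one NIC
 * queue with rte_eth_rx_burst(). */
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 256  /* matches VPP's typical maximum vector size */

static void process_vector(struct rte_mbuf **pkts, uint16_t n)
{
    /* A real data plane would parse, look up, and forward here; this
     * placeholder simply releases the buffers back to the mbuf pool. */
    for (uint16_t i = 0; i < n; i++)
        rte_pktmbuf_free(pkts[i]);
}

/* Busy-poll one RX queue; one such loop typically runs per worker core. */
void poll_rx_queue(uint16_t port_id, uint16_t queue_id)
{
    struct rte_mbuf *burst[BURST_SIZE];

    for (;;) {
        /* Non-blocking: returns between 0 and BURST_SIZE packets. */
        uint16_t n = rte_eth_rx_burst(port_id, queue_id, burst, BURST_SIZE);
        if (n > 0)
            process_vector(burst, n);
    }
}
```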

Supported Platforms and Deployment

Vector Packet Processing (VPP) primarily supports x86-64 architectures on Intel and AMD processors, enabling high-performance packet processing on standard server hardware. It also provides full support for ARM64 architectures, including platforms like the Ampere Altra family, which feature up to 128 cores and are optimized for cloud-native applications. Additionally, VPP has historical support for PowerPC architectures, though recent packaging focuses on x86-64 and ARM64. To achieve optimal performance, deployments typically require multi-core CPUs (at least 8 cores recommended for production) and high-speed network interface cards (NICs) supporting 10 Gbps or greater, such as the Intel X520 or Mellanox ConnectX series, often integrated via DPDK for direct I/O access.

VPP operates primarily in Linux userspace, with official packages available for recent Long Term Support (LTS) releases of the Debian and Ubuntu distributions. In 2024, VPP introduced an official port to FreeBSD as part of the 24.10 release, allowing integration with FreeBSD's networking stack for enhanced compatibility in BSD-based environments. Experimental support for Windows exists through community efforts, but it remains unofficial and limited to basic functionality.

VPP is designed for flexible deployment across various environments, including bare-metal servers for maximum performance, virtual machines such as those hosted on KVM for isolated workloads, and containerized setups using Docker for lightweight orchestration. For cloud-native applications, VPP integrates with Kubernetes through plugins such as Calico's VPP dataplane, enabling pod-to-pod networking in clustered deployments.

Installation of VPP can be accomplished via pre-built packages from FD.io repositories, which are accessible through APT for Debian/Ubuntu, ensuring straightforward setup on supported OS versions. Alternatively, users can build VPP from source by cloning the official repository and compiling with tools like Make and CMake, allowing customization for specific hardware or features. Binary packages are also available for FreeBSD via the ports system.

Features

Packet Processing Capabilities

Vector Packet Processing (VPP) provides a comprehensive set of built-in functions for handling packets at OSI layers 2 through 4, enabling efficient forwarding and manipulation in high-performance networking environments. These capabilities are implemented through a modular graph of processing nodes, allowing packets to traverse only the functions required by each forwarding decision.

At Layer 2, VPP supports Ethernet bridging via configurable bridge domains that forward frames based on destination MAC addresses. It includes MAC learning, which dynamically populates Layer 2 forwarding information base (FIB) tables with learned MAC addresses, along with configurable aging timers to remove stale entries. VLAN tagging is handled through tag rewrite operations, supporting both single VLAN tags and stacked Q-in-Q configurations for sub-interface isolation and traffic segmentation.

Layer 3 capabilities in VPP encompass routing for both IPv4 and IPv6, using fast lookup tables in the FIB for efficient forwarding. ARP resolution is integrated to map IPv4 addresses to MAC addresses, with support for static and dynamic entries. ICMP handling covers error messaging and diagnostics for IPv4 (ICMP) and IPv6 (ICMPv6), including echo requests and replies. Multicast support includes route configuration for group-based distribution, enabling efficient delivery to multiple recipients via FIB entries.

For Layer 4, VPP offers load balancing through its load balancer plugin, distributing traffic across multiple backends using static mappings and session affinity based on client address. Network address translation (NAT) is provided in NAT44 and NAT64 variants, supporting endpoint-independent mapping for address conservation and IPv4-IPv6 translation. Access control lists (ACLs) enable firewalling by applying policies at the interface and MAC-IP levels, including n-tuple matching to permit or deny traffic based on source/destination addresses, ports, and protocols. Stateful processing for these Layer 4 features relies on connection tracking, which maintains session state to handle bidirectional flows, timeouts, and SYN proxying for TCP connections. As of February 2025, the VPP 25.02 release extended these capabilities with enhancements including asynchronous processing support for TLS.

Advanced capabilities extend these functions with support for MPLS label imposition and disposition, allowing VPP to act as an MPLS edge or core router for traffic engineering. VXLAN encapsulation and decapsulation enable overlay networking, interconnecting bridge domains across underlay networks for virtualized environments. Quality of Service (QoS) marking applies prioritization through traffic classification and marking of Differentiated Services Code Point (DSCP) fields, ensuring bandwidth allocation and low-latency handling for critical traffic.
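The session-affinity behavior described above for Layer 4 load balancing can be illustrated with a small, self-contained flow-hashing example. This is not VPP's load balancer implementation; the FNV-style hash and the field layout are arbitrary illustrative choices that simply show how hashing a flow's 5-tuple keeps all packets of one flow on the same backend.

```c
/* Illustrative sketch (not VPP's implementation) of 5-tuple flow hashing
 * for backend selection with session affinity. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t src_ip, dst_ip;      /* IPv4 addresses, host byte order */
    uint16_t src_port, dst_port;  /* L4 ports                        */
    uint8_t  proto;               /* IP protocol number              */
} flow_key_t;

/* FNV-style hash over the 5-tuple, folded into 32-bit words to avoid
 * touching struct padding bytes. */
static uint32_t flow_hash(const flow_key_t *k)
{
    uint32_t words[4] = {
        k->src_ip,
        k->dst_ip,
        ((uint32_t)k->src_port << 16) | k->dst_port,
        k->proto
    };
    uint32_t h = 2166136261u;
    for (int i = 0; i < 4; i++) {
        h ^= words[i];
        h *= 16777619u;
    }
    return h;
}

/* The same flow always maps to the same backend (session affinity). */
static unsigned select_backend(const flow_key_t *k, unsigned n_backends)
{
    return flow_hash(k) % n_backends;
}

int main(void)
{
    flow_key_t flow = { 0x0a000001u, 0x0a000002u, 40000, 443, 6 /* TCP */ };
    printf("flow hashed to backend %u of 4\n", select_backend(&flow, 4));
    return 0;
}
```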

Extensibility and APIs

Vector Packet Processing (VPP) offers extensibility primarily through its plugin architecture, allowing developers to add custom functionality without modifying the core codebase. Plugins are developed in C, starting with a skeleton generated by the VPP plugin generator script, which creates essential files such as the main plugin source, node implementation, and API definitions. These plugins are compiled as shared object libraries (.so files) and loaded dynamically at runtime, integrating into VPP's graph of processing nodes to handle specific packet processing tasks.

The binary API provides a high-performance interface for control plane applications to interact with VPP, using a shared-memory message-passing mechanism to enable low-latency communication between external clients and the VPP data plane. This API supports both blocking and non-blocking modes, with generated high-level bindings in languages such as Python and C++ ensuring efficient message handling, including automatic byte-order conversion. It facilitates operations like forwarding table updates and statistics queries over the shared-memory ring, minimizing overhead compared to socket-based alternatives.

VPP includes a built-in command-line interface (CLI) for direct configuration and management, accessible interactively or via scripts, covering tasks from interface setup to feature enabling. The binary API and CLI also enable integration with external configuration agents, such as Honeycomb, for NETCONF/RESTCONF-based management. The FD.io VPP repository hosts numerous plugins, including those for advanced protocols like the Border Gateway Protocol (BGP) and Segment Routing over IPv6 (SRv6), demonstrating the framework's modular extensibility for diverse networking features.

Performance

Benchmarks and Throughput

Vector Packet Processing (VPP) demonstrates exceptional throughput capabilities, particularly in high-speed forwarding scenarios. In 2024 tests on Google Cloud using Intel processors, VPP achieved up to 108 Mpps for 64-byte packets on an 88-core instance equipped with gVNIC network interfaces supporting up to 200 Gbps. Similarly, on a 360-core instance, VPP forwarded 98 Mpps under comparable conditions, highlighting its scalability on modern x86 hardware. These results were obtained with configurations leveraging multiple RX/TX queues and poll-mode driver (PMD) threads, emphasizing VPP's efficiency in user-space packet processing via DPDK.

Latency measurements further underscore VPP's performance in simple forwarding graphs. Independent validation showed average forwarding latency of 20 microseconds for 64-byte frames, comparable to hardware switches, with larger 1518-byte frames handled at around 100 microseconds. Such metrics were derived using traffic generators like TRex, which timestamp packets to compute end-to-end delays in controlled environments.

VPP's throughput scales linearly with the number of CPU cores, enabling efficient utilization of multi-core systems. Benchmarking confirms linear scaling up to high core counts, tested with millions of flows and addresses, allowing throughput to be sustained as worker threads increase. Packet size significantly influences Mpps rates, with smaller 64-byte packets yielding higher packet rates (e.g., over 100 Mpps) compared to larger sizes, where bandwidth in Gbps becomes the limiting factor—such as 175 Gbps for 1024-byte packets in the same Google Cloud setup.

In controlled tests, VPP outperforms kernel-based forwarding by approximately an order of magnitude, achieving 10-20x higher throughput for IPv4 and IPv6 packets due to its kernel-bypass architecture. For instance, while kernel-based solutions struggle at 1 Gbps for 64-byte packets in bridged configurations, VPP routinely exceeds 100 Mpps on commodity hardware.
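For context, packet rates for minimum-size frames follow directly from the per-frame wire footprint: a 64-byte Ethernet frame occupies 84 bytes on the wire once the 7-byte preamble, 1-byte start-of-frame delimiter, and 12-byte inter-frame gap are included. A worked example for 10 Gbps and 100 Gbps links:

\[
R_{\max} = \frac{\text{link rate}}{84\ \text{bytes} \times 8\ \tfrac{\text{bits}}{\text{byte}}}
         = \frac{10 \times 10^{9}\ \text{bit/s}}{672\ \text{bit}}
         \approx 14.88\ \text{Mpps},
\qquad
\frac{100 \times 10^{9}\ \text{bit/s}}{672\ \text{bit}} \approx 148.8\ \text{Mpps}.
\]

This is the theoretical ceiling against which 64-byte forwarding results such as the figures above are usually compared.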

Optimization Strategies

To achieve optimal performance in Vector Packet Processing (VPP), configuring CPU affinity is essential, particularly on systems with non-uniform memory access (NUMA) architectures, where improper thread placement can lead to increased latency due to remote memory access. VPP worker threads should be bound to specific CPU cores to prevent the operating system scheduler from migrating them, which minimizes context switches and cache conflicts. This binding can be accomplished using tools like numactl to enforce both CPU and memory affinity policies, ensuring that threads and their associated memory allocations remain local to the same NUMA node. For instance, launching VPP with numactl --cpunodebind=0 --membind=0 pins processes to NUMA node 0, reducing cross-NUMA traffic and improving packet processing efficiency in multi-socket environments. A standalone sketch of the thread-pinning idea appears after this section.

Graph simplification in VPP involves optimizing the directed acyclic graph (DAG) of Data Path Objects (DPOs) that constitutes the datapath, thereby reducing the number of node traversals—or "hops"—a packet must undergo during forwarding. Each hop incurs overhead from function calls and state lookups, so minimizing these by collapsing redundant or indirection DPOs (such as those used for fast convergence) into a single composite node lowers the per-packet processing cost. VPP's architecture supports this through dynamic adjacency registration, where sub-types like MPLS labels or segment routing headers are integrated into a unified ip_adjacency_t structure, avoiding unnecessary graph layers while preserving modularity. This technique trades some flexibility for fewer invocation cycles, enabling higher throughput in high-load scenarios.

Tuning vector sizes, or batch sizes, allows VPP to adapt packet processing to workload characteristics, as the framework processes packets in vectors to leverage SIMD instructions and cache locality. The default vector size is 256 packets, but it can be adjusted between 32 and 512 based on traffic load; smaller sizes suit low-latency applications with sporadic bursts, while larger ones maximize throughput under sustained load by amortizing per-batch overheads. Adaptive batching strategies, such as those employing machine learning to dynamically select sizes (e.g., via models trained on load metrics), further optimize CPU utilization and power consumption by incorporating short sleeps during idle periods to yield cycles without sacrificing performance. For example, at moderate loads around 5 Gbit/s, oscillating batch sizes can maintain near-peak throughput while bounding latency increases.

Receive Side Scaling (RSS) configuration in VPP distributes incoming traffic across multiple receive queues on a physical interface, enabling load balancing over worker threads to prevent bottlenecks in multi-threaded setups. By hashing packet headers (e.g., the 5-tuple or outer L4 ports) at the NIC level, RSS steers flows to specific queues, each polled by a dedicated VPP thread, which ensures even utilization of CPU cores. Interfaces and queue pairs are assigned to threads in a round-robin manner during startup, but this can be refined using CLI commands like "set interface rx-placement" for fine-grained control. Enabling RSS is particularly beneficial for symmetric multi-processing environments, as it scales packet reception linearly with available cores while minimizing contention.
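As referenced above, the following minimal standalone sketch shows the CPU-pinning concept using the GNU pthread affinity extension. It is not VPP source; in a real deployment VPP places workers via its startup configuration, whereas here a single hypothetical worker is pinned to core 2, assumed to sit on the same NUMA node as the NIC it polls.

```c
/* Minimal standalone sketch of thread pinning (not VPP source).
 * Build with: gcc -pthread pin.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    /* A packet-processing worker would poll its RX queues in a loop here. */
    printf("worker pinned to CPU %d\n", sched_getcpu());
    return NULL;
}

int main(void)
{
    cpu_set_t cpus;
    pthread_attr_t attr;
    pthread_t tid;

    /* Restrict the worker to core 2 before it starts, so the scheduler
     * never migrates it across cores or NUMA nodes. */
    CPU_ZERO(&cpus);
    CPU_SET(2, &cpus);
    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);

    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```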

Applications

Networking Use Cases

Vector Packet Processing (VPP) serves as a high-performance virtual router and switch in cloud environments, particularly in OpenStack deployments where it replaces Open vSwitch (OVS) to enhance tenant isolation and forwarding efficiency. In this role, VPP acts as an ML2 mechanism driver, enabling Layer 3 routing support alongside virtual switching capabilities, which allows for scalable tenant networking without the overhead of kernel-based alternatives. This configuration supports self-service networking models, where isolated virtual networks are created for multi-tenant scenarios, leveraging VPP's graph-based packet processing to handle traffic steering and isolation at line rates.

As a load balancer, VPP facilitates both Layer 4 (L4) and Layer 7 (L7) traffic distribution in containerized orchestration platforms like Kubernetes, often integrated with proxies such as Envoy for advanced routing. VPP's extensible node architecture enables the implementation of hashing-based load balancing algorithms and session affinity, directing traffic to backend services while supporting dynamic backend selection through plugins. For L7 operations, VPP can interface with Envoy via socket-layer APIs, allowing Envoy to utilize VPP as its underlying network stack for efficient proxying and HTTP routing in cloud-native environments.

In mobile network architectures, VPP powers packet processing within User Plane Functions (UPF) for 5G networks, handling high-throughput data forwarding at the network edge. The UPF role involves tunneling protocols like GTP-U, IP anchoring, and QoS enforcement, where VPP's vectorized processing ensures low-latency user plane operations compliant with 3GPP standards. This deployment is critical for scenarios requiring ultra-reliable connectivity, such as distributed 5G cores where VPP manages traffic aggregation and breakout to local services.

VPP enables inline security functions, including Intrusion Prevention Systems (IPS) and Intrusion Detection Systems (IDS), through its Access Control List (ACL) and Deep Packet Inspection (DPI) nodes. ACL nodes provide stateful filtering and classification on any interface, supporting security group policies that inspect and drop malicious packets in real time, while DPI capabilities examine Layer 7 payloads for threat detection and policy enforcement. These features allow VPP to function as an inline firewall, integrating with broader security chains to mitigate attacks without disrupting legitimate traffic flows.

A notable example of VPP's application is its use as a virtual router in FD.io's Honeycomb framework for service function chaining (SFC), where it orchestrates dynamic insertion of network functions like firewalls or load balancers into traffic paths. Honeycomb, as a VPP-based configuration agent, configures the data plane to support IETF-compliant Network Service Headers (NSH) for SFC, enabling programmable forwarding that steers packets through virtualized service chains in NFV environments. This setup, often paired with controllers like OpenDaylight, facilitates automated orchestration of complex service topologies.

Real-World Deployments

In cloud environments, VPP has been deployed on Google Cloud to deliver high-throughput packet processing. As of 2024, deployments on x86-based instances such as C3 and C3D achieved over 100 million packets per second (Mpps) with minimal packet loss, supporting telco-grade network functions for NFV workloads. In 2025, VPP on Arm-based Google Axion (C4A) instances reached approximately 66 Mpps with low latency and near-zero internal packet drops, enabling efficient scaling on both x86 and Arm platforms.

VPP is also utilized in commercial products from vendors such as Netgate (the TNSR software router) and Cisco (including the ASR 9000 series and the Carrier Grade Services Engine), providing high-performance networking in enterprise and service provider environments.

Comparisons

Versus Kernel-Based Networking

Vector Packet Processing (VPP) operates in user space, bypassing the operating system's kernel networking stack to eliminate overhead associated with kernel-user space switches and system calls. In traditional kernel-based networking, such as the Linux netdev path, packet processing involves frequent context switches between kernel and user space, which introduce latency and CPU overhead, particularly under high packet rates. VPP, by contrast, employs direct polling of network interfaces via libraries like DPDK, allowing continuous packet I/O without these switches, resulting in reduced processing latency.

In terms of flexibility, VPP's architecture supports dynamic loading of plugins as shared libraries at runtime, enabling extensions such as custom graph nodes for packet processing without recompiling the core framework or restarting the system. This contrasts with kernel-based networking, where modules are built against a specific kernel, offer limited fault isolation, and may require recompilation or reboots to update. VPP plugins integrate seamlessly into its modular graph-based processing model, facilitating rapid development and deployment of network functions in user space.

Throughput benchmarks demonstrate VPP's superiority, achieving up to 10 times higher efficiency in packets per second compared to kernel stacks for forwarding tasks. For instance, in IPv4 forwarding tests on a 1.2 GHz CPU, VPP delivered 4.19 Mpps using only 2 cores, while the kernel required 12 cores to reach 3.92 Mpps. Single-core performance further highlights this gap, with VPP sustaining over 14 Mpps for line-rate forwarding, far exceeding kernel capabilities under similar loads.

Kernel-based networking suits general-purpose operating systems handling diverse workloads, including file systems and applications, where interrupt-driven processing conserves CPU during idle periods. VPP excels in performance-critical paths, such as high-speed routers or virtual network functions, where its user-space design prioritizes sustained throughput over multi-tasking versatility.

A key distinction lies in VPP's poll-mode drivers versus the interrupt-driven approach of kernel stacks, which profoundly affects CPU utilization. Kernel processing relies on interrupts to signal packet arrival, triggering context switches that limit scalability and increase overhead at high rates, often leading to underutilized CPU cycles during bursts. VPP's polling continuously checks interfaces, consuming near-100% CPU on dedicated cores but enabling predictable, low-latency handling and better overall efficiency for intensive forwarding, as evidenced by its superior Mpps per core. VPP's vector batching further amortizes polling costs across many packets.

Versus Other User-Space Frameworks

Vector Packet Processing (VPP) distinguishes itself from other user-space frameworks through its vectorized processing model and comprehensive feature set, particularly when compared to Open vSwitch (OVS), custom DPDK applications, and Snabb.

Compared to OVS, VPP employs native DPDK integration for user-space packet processing, avoiding OVS's default kernel-based datapath, which incurs overhead from context switches between kernel and user space. This results in VPP delivering superior performance for Layer 3 and higher operations in NFV environments, where vector processing enables efficient batch handling of packets. In contrast, OVS excels in simpler Layer 2 switching scenarios due to its mature ecosystem and ease of integration with orchestration platforms such as OpenStack. Benchmarks show VPP achieving up to 12 Mpps for 64-byte packets in inter-container communications, significantly higher than OVS-DPDK, which reaches lower rates (e.g., roughly 2.7 Mpps equivalent in service chaining tests), while against kernel-based OVS, user-space solutions like VPP or OVS-DPDK can yield 5-8x higher throughput (e.g., 1.4 Gbps vs. 0.16 Gbps for small packets in service function chaining). In multi-VNF tests, VPP sustains higher throughput with features such as ACLs and QoS enabled, reaching 9 Gbps with 6 VNFs under IMIX traffic, outperforming OVS-DPDK, which drops sharply beyond that point.

Against pure DPDK applications, VPP offers a full networking stack with pre-built plugins for L2-L4 protocols, reducing the need for developers to implement low-level packet I/O, buffering, and graph orchestration from scratch. Custom DPDK code, while flexible for specialized tasks like simple forwarding, demands significant engineering effort for complex pipelines, lacking VPP's modular architecture that allows hot-pluggable extensions without recompiling the core. This makes VPP preferable for deployments requiring rapid iteration and hardware portability across x86, ARM, and PowerPC architectures.

Relative to Snabb, a Lua-scripted user-space framework that emphasizes small, composable functions arranged in directed acyclic graphs, VPP provides a more robust feature set implemented in C for enterprise-grade performance and scalability. Snabb's scripting approach facilitates quick prototyping but limits throughput to around 3 Mpps for small packets due to its non-vectorized, non-DPDK design, compared to VPP's 12 Mpps. VPP's maturity in NFV, backed by a larger community and contributions from multiple vendors, supports broader integration with orchestration tools, though Snabb may suit niche, low-overhead use cases.

Overall, VPP's advantages include extensive L2-L4 protocol support—from bridging and routing to ACLs and VXLAN encapsulation—and a vibrant open-source community under FD.io, enabling faster time-to-market for high-throughput applications. However, its C-based development introduces a steeper learning curve than scripting-oriented alternatives like Snabb or configuration-driven OVS.
