
Data Plane Development Kit

The Data Plane Development Kit (DPDK) is an open-source framework consisting of libraries and drivers designed to accelerate packet processing workloads in user space, enabling high-performance networking applications such as routers, firewalls, and video streaming services by bypassing the operating system's network stack and utilizing a run-to-completion model for efficient packet handling. Developed initially by Intel in 2010 and open-sourced in 2013 under the permissive BSD license, with the community hub at DPDK.org established by 6WIND, DPDK has evolved into a community-driven project hosted by the Linux Foundation, with contributions from over 940 individuals across more than 70 organizations and support for major CPU architectures including x86, ARM, and PowerPC. At its core, DPDK provides the Environment Abstraction Layer (EAL) for portability across operating systems and hardware, including user-space execution and multi-process support, along with specialized libraries for memory management (e.g., memory pools and mbuf packet buffers), ring-based lockless queues for inter-core communication, timers, hashing, and longest prefix matching (LPM) to facilitate rapid data plane operations. It incorporates Poll Mode Drivers (PMDs) for low-latency, high-throughput access to network interface cards (NICs) supporting a wide range of speeds from 1 GbE up to 400 GbE and beyond, as well as virtual devices such as virtio Ethernet controllers, while offering flexible programming models such as polling for maximum performance, interrupt-driven modes for power efficiency, and event-based pipelines for staged packet processing. These components collectively enable developers to prototype custom protocol stacks, integrate with broader networking ecosystems such as Open vSwitch and FD.io VPP, and achieve significant improvements in throughput and latency on supported hardware from multiple vendors.

Introduction

Definition and Purpose

The Data Plane Development Kit (DPDK) is an open-source collection of libraries and drivers that enables fast packet processing directly in user space, circumventing the operating system's kernel network stack to deliver low-latency and high-throughput networking capabilities. By leveraging poll-mode drivers and avoiding kernel interrupts, DPDK allows applications to poll network interface controllers (NICs) efficiently, reducing the overhead from context switches and system calls that burdens traditional kernel-based stacks. The primary purpose of DPDK is to provide a straightforward, vendor-neutral framework for developing data plane applications, including routers, switches, and firewalls, suitable for both rapid prototyping and production deployment. This framework supports the creation of performance-sensitive network functions by offering modular components that handle packet reception, processing, and transmission without relying on the kernel's networking subsystem.

Key benefits of DPDK include its support for run-to-completion and pipeline processing models, where the former dedicates cores to sequential packet handling and the latter uses ring buffers for staged, multi-core workflows. It pre-allocates memory pools for packet buffers (mbufs) at initialization to minimize runtime allocation overhead, optimizing memory efficiency and enabling acceleration of workloads across multi-core processors. DPDK was initially developed to overcome the performance bottlenecks of kernel-based networking, particularly in emerging paradigms like network functions virtualization (NFV) and software-defined networking (SDN), where high-speed packet processing is essential for virtualized and programmable infrastructures.
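As a rough illustration of the pre-allocation pattern described above, the following minimal sketch (not taken from the DPDK documentation; the pool size, cache size, and pool name are arbitrary choices for illustration) initializes the EAL and creates an mbuf pool once at startup, before any packets are processed:

```c
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_debug.h>

#define NUM_MBUFS  8191   /* pool size; 2^n - 1 suits the ring-backed pool */
#define MBUF_CACHE 250    /* per-lcore cache to reduce contention */

int main(int argc, char **argv)
{
    /* Initialize the EAL before any other DPDK call. */
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

    /* Pre-allocate all packet buffers once, at startup. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create(
        "MBUF_POOL", NUM_MBUFS, MBUF_CACHE,
        0 /* no app-private area */, RTE_MBUF_DEFAULT_BUF_SIZE,
        rte_socket_id());
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");

    /* ... configure ports, then run the run-to-completion loop ... */

    rte_eal_cleanup();
    return 0;
}
```

Because every packet buffer on the data path comes from this pre-sized pool, the fast path never touches a general-purpose allocator, which is central to DPDK's predictable per-packet cost.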

History and Development

The Data Plane Development Kit (DPDK) originated in 2010 as an internal project at Intel, led by engineer Venky Venkatesan, who is widely recognized as "The Father of DPDK." Venkatesan focused initially on optimizing packet processing for Intel x86 platforms, addressing performance bottlenecks in high-speed networking applications. He passed away in 2018 after a battle with cancer, leaving a lasting legacy in the field.

The project transitioned to open source with its first public release in 2013, spearheaded by 6WIND, which established the community hub at DPDK.org to foster collaborative development. This move enabled broader adoption and contributions beyond Intel's proprietary framework. Key milestones followed, including DPDK's move to the Linux Foundation in April 2017, which provided neutral governance and accelerated ecosystem growth. By 2018, the project had garnered contributions from over 160 developers across more than 25 organizations, reflecting its expanding influence. Around the same period, the project settled into a regular, time-based release cadence with several releases per year, allowing for more stable and feature-rich updates.

As of late 2025, the latest stable release is version 25.07 from July 2025, with the upcoming 25.11 release, which entered API freeze in October, scheduled for November 19, 2025. Recent releases have continued to expand support for ARM and PowerPC architectures, broadening DPDK's applicability beyond x86. Growth metrics underscore this evolution: early releases like 18.05 incorporated over 1700 commits, while the project now supports more than 100 Poll Mode Drivers (PMDs) from multiple vendors.

Core Architecture

Environment Abstraction Layer

The Environment Abstraction Layer (EAL) serves as the foundational component of the Data Plane Development Kit (DPDK), providing a generic interface that abstracts low-level resources such as hardware devices and memory from the operating system and hardware specifics, thereby enabling high-performance, portable packet processing applications. By initializing the runtime environment and managing multi-core execution, the EAL allows DPDK applications to operate efficiently in user space without depending on the kernel's networking subsystem, which is crucial for bypassing traditional OS stack overheads.

The EAL's initialization process begins with the invocation of the rte_eal_init() function, which parses command-line arguments to configure the environment, such as specifying core mappings, memory allocation modes, and hugepage directories. During initialization, it creates shared memory segments using mechanisms like hugetlbfs on Linux or contigmem on FreeBSD, facilitating multi-process support where multiple DPDK instances can share resources via inter-process communication (IPC) primitives for synchronization. This setup ensures that applications can run in primary or secondary process modes, with the EAL handling thread creation and CPU affinity settings to optimize performance across cores and NUMA nodes.

Key functions of the EAL include mapping physical CPU cores to logical cores (lcores) using options like --lcores='lcore_set[@cpu_set]' for precise control, and allocating hugepage memory in either legacy mode (preallocating all pages) or dynamic mode (growing/shrinking as needed) via APIs such as rte_memzone_reserve(). It also supports control threads for background tasks through rte_thread_create_control(), provides abstractions for interrupts using user-space event mechanisms like epoll on Linux or kqueue on FreeBSD, and offers timer facilities based on the Time Stamp Counter (TSC) or the High Precision Event Timer (HPET) for alarm callbacks. Additionally, the EAL includes debugging and diagnostic tools, such as rte_panic() for fatal errors with stack traces and CPU feature detection via rte_cpu_get_features(), enhancing troubleshooting and portability.

To ensure portability, the EAL abstracts differences across operating systems including Linux (using hugetlbfs and kernel drivers such as VFIO), FreeBSD (using contigmem), and Windows (using Win32 APIs), as well as architectures such as x86 (Intel/AMD), ARM, and PowerPC. This abstraction enables DPDK applications to execute in user space on diverse platforms without modification, supporting features like I/O virtual addressing (IOVA) modes (physical or virtual) configurable via --iova-mode to handle varying memory models.
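The initialization flow described above can be sketched in a few lines. The example below is a minimal sketch (the memzone name, size, and worker body are arbitrary): it calls rte_eal_init(), reserves a hugepage-backed memory zone on the local NUMA socket, and launches a trivial worker on every available lcore:

```c
#include <stdlib.h>
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_memzone.h>
#include <rte_debug.h>

/* Worker launched on every worker lcore by the EAL. */
static int worker_main(void *arg)
{
    (void)arg;
    printf("worker running on lcore %u (NUMA socket %u)\n",
           rte_lcore_id(), rte_socket_id());
    return 0;
}

int main(int argc, char **argv)
{
    /* rte_eal_init() consumes EAL arguments such as -l, --lcores,
     * --huge-dir and --iova-mode before the application's own options. */
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

    /* Reserve a named, hugepage-backed memory zone on the local socket. */
    const struct rte_memzone *mz = rte_memzone_reserve(
        "example_zone", 1 << 20, rte_socket_id(), 0);
    if (mz == NULL)
        rte_exit(EXIT_FAILURE, "Cannot reserve memzone\n");

    /* Launch the worker on every lcore except the main one, then wait. */
    rte_eal_mp_remote_launch(worker_main, NULL, SKIP_MAIN);
    rte_eal_mp_wait_lcore();

    rte_eal_cleanup();
    return 0;
}
```

Run with EAL options on the command line, for example `-l 0-3 --huge-dir /mnt/huge`, to see one worker message per launched lcore.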

Key Libraries

The Data Plane Development Kit (DPDK) provides several core libraries essential for efficient packet handling and processing in high-performance networking applications. These libraries enable developers to build scalable data plane software by abstracting memory management, queue operations, and packet representation. Among the core libraries, librte_mempool manages fixed-size object pools, such as packet buffers, using a ring-based structure to store free objects and supporting per-core caching for reduced contention. This library ensures efficient allocation and deallocation, with objects aligned to promote even distribution across RAM channels. Complementing it, librte_ring implements a lockless, fixed-size multi-producer multi-consumer (MPMC) queue as a table of pointers, optimized for bulk enqueue and dequeue operations to facilitate inter-core communication without locking overhead. The librte_mbuf library handles packet buffers (mbufs), providing mechanisms to create, free, and manipulate these buffers stored in mempools, including metadata for packet attributes like length and offsets.

Networking-specific libraries build on these foundations for protocol processing. The librte_ethdev library abstracts Ethernet devices, supporting poll-mode drivers (PMDs) for various speeds from 1 GbE up to 400 GbE and higher (as of DPDK 25.11) and enabling interrupt-free packet I/O through port identifiers and configuration APIs. The librte_net library offers utilities for protocol handling, including parsing and construction of headers like IPv4, IPv6, TCP, and UDP. For fragmentation, librte_ip_frag enables IPv4 and IPv6 fragmentation and reassembly, converting input mbufs into fragments based on MTU size via functions like rte_ipv4_fragment_packet(), supporting both threaded and non-threaded modes for high-throughput scenarios.

Utility libraries further enhance application capabilities. librte_timer delivers a per-core configurable timer service for asynchronous callback execution, supporting periodic or one-shot timers based on the Environment Abstraction Layer (EAL)'s time reference, which requires EAL initialization for use. For lookup operations, librte_hash provides a high-performance hash table for exact-match searches in packet classification and forwarding, with multi-threaded support and optimizations for efficiency. Similarly, librte_lpm implements longest prefix match (LPM) tables for route lookups, using a trie-based structure to achieve wire-speed performance on IPv4 addresses.

A key feature of librte_mbuf is its support for chaining multiple mbufs via the next pointer to represent segmented packets, such as jumbo frames, allowing transmission and reception without data copying by passing the chain directly to drivers. This scatter-gather capability minimizes latency and CPU overhead, as only the first mbuf in the chain carries the primary packet metadata, enabling efficient handling of large payloads across non-contiguous buffers.
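To give a concrete feel for these libraries, the following sketch (illustrative only; the table sizes, the table name, and the example route are arbitrary) builds a small librte_lpm table and performs a longest-prefix-match lookup on an IPv4 address:

```c
#include <stdlib.h>
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_lpm.h>
#include <rte_ip.h>
#include <rte_debug.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    /* Table sized for illustration only. */
    struct rte_lpm_config config = {
        .max_rules = 1024,
        .number_tbl8s = 256,
        .flags = 0,
    };
    struct rte_lpm *lpm = rte_lpm_create("ipv4_routes",
                                         rte_socket_id(), &config);
    if (lpm == NULL)
        rte_exit(EXIT_FAILURE, "Cannot create LPM table\n");

    /* Route 10.0.0.0/8 to next hop 1 (addresses in host byte order). */
    rte_lpm_add(lpm, RTE_IPV4(10, 0, 0, 0), 8, 1);

    uint32_t next_hop;
    if (rte_lpm_lookup(lpm, RTE_IPV4(10, 1, 2, 3), &next_hop) == 0)
        printf("next hop: %u\n", next_hop);   /* prints 1 */

    rte_lpm_free(lpm);
    rte_eal_cleanup();
    return 0;
}
```

In a real forwarding path the lookup key would be taken from each received packet's destination address (converted to host byte order) rather than a constant.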

Drivers and Plugins

Poll Mode Drivers

Poll Mode Drivers (PMDs) in the Data Plane Development Kit (DPDK) are user-space drivers that enable direct access to network interface card (NIC) hardware by polling receive (Rx) and transmit (Tx) queues, thereby bypassing the operating system's networking stack and avoiding interrupt-driven processing for reduced latency and higher throughput. This polling mechanism allows applications to process packets in bursts using functions like rte_eth_rx_burst and rte_eth_tx_burst, which retrieve or send multiple packets in a single call to minimize overhead. PMDs operate within DPDK's Ethernet device (ethdev) abstraction layer, providing a standardized API for device configuration, queue management, and statistics collection.

PMDs are categorized into physical drivers for NICs and virtual drivers for emulated environments. Physical PMDs support a variety of Ethernet controllers, such as the i40e driver for Intel 40 Gigabit Ethernet (GbE) adapters and the Mellanox (now NVIDIA) mlx5 driver for ConnectX series NICs, enabling high-speed packet I/O on bare-metal systems. Virtual PMDs, like the virtio driver, facilitate integration in virtualized setups such as virtual machines (VMs) or containers, allowing DPDK applications to interface with para-virtualized devices without hardware-specific dependencies.

Key features of PMDs include support for Ethernet speeds ranging from 1 GbE to over 100 GbE, accommodating diverse network infrastructures. They incorporate Receive Side Scaling (RSS) to distribute incoming packets across multiple queues using hashing, improving scalability in multi-core environments. Additionally, PMDs expose statistics through the ethdev API, such as packet counts, byte totals, and error metrics, which can be queried via functions like rte_eth_stats_get for monitoring and troubleshooting.

DPDK provides over 100 PMDs across various categories from leading vendors, including Intel (e.g., ice for 100 GbE), Broadcom (e.g., bnxt), and NVIDIA (e.g., mlx5 for ConnectX-6). Crypto PMDs, such as the AES-NI multi-buffer and QAT drivers, accelerate cryptographic operations using CPU instructions or dedicated hardware for secure packet processing, while eventdev PMDs, like those for Marvell OCTEON TX, enable event-driven scheduling for complex data flows. These drivers ensure broad hardware compatibility and performance acceleration in DPDK-based applications.
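The polling pattern can be summarized with a short sketch (illustrative only; it assumes port 0 and its queues were already configured and started through the ethdev API, e.g. rte_eth_dev_configure(), rte_eth_rx_queue_setup(), rte_eth_tx_queue_setup(), and rte_eth_dev_start()):

```c
#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* Echo packets from queue 0 of a port back out of the same port in bursts. */
static void lcore_fwd_loop(uint16_t port_id)
{
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {
        /* Poll the NIC: retrieve up to BURST_SIZE packets, no interrupts. */
        const uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
        if (nb_rx == 0)
            continue;

        /* Transmit the burst; free any packets the NIC could not accept. */
        const uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, bufs, nb_rx);
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }
}

/* Query the per-port counters exposed by the PMD. */
static void print_port_stats(uint16_t port_id)
{
    struct rte_eth_stats stats;

    if (rte_eth_stats_get(port_id, &stats) == 0)
        printf("port %u: rx=%" PRIu64 " tx=%" PRIu64 " rx-errors=%" PRIu64 "\n",
               port_id, stats.ipackets, stats.opackets, stats.ierrors);
}
```

The burst size of 32 is a common default; larger bursts amortize per-call overhead at the cost of slightly higher per-packet latency.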

Extension Plugins

Extension plugins in the Data Plane Development Kit (DPDK) provide modular extensions to core libraries, enabling users to incorporate custom functionality without altering the foundational codebase. These plugins support the integration of specialized features, such as custom flow classifiers for advanced packet steering and quality of service (QoS) modules for traffic prioritization and shaping, thereby enhancing the framework's adaptability for diverse networking scenarios. Introduced in later DPDK releases to address growing demands for extensibility, they allow developers to tailor packet processing pipelines to specific requirements while maintaining the high-performance ethos of the kit.

Representative examples include flow classification plugins that implement programmable rules for identifying and directing traffic flows based on header fields, and security extensions like crypto offload plugins that accelerate cryptographic operations by offloading them to hardware accelerators. Third-party plugins contributed by ecosystem partners, such as those integrating vendor-specific accelerators for traffic management or metering, further exemplify how these extensions broaden DPDK's applicability in production environments. These plugins build upon base libraries like librte_ethdev for device interactions.

Plugins are integrated via the Environment Abstraction Layer (EAL), which facilitates runtime loading of shared object files during application initialization. Developers register plugins through the driver registration interfaces of librte_ethdev for Ethernet-related extensions or eventdev for event-driven processing, allowing seamless attachment to existing device abstractions or event schedulers. This approach ensures low-overhead incorporation, with the EAL handling driver registration and dependency resolution to support multi-threaded operations.

The prominence of extension plugins emerged post-2017, coinciding with increased community contributions that emphasized modular design principles under DPDK's Linux Foundation governance. Support for plugin versioning was added to ensure ABI compatibility, enabling independent updates and reducing upgrade friction across DPDK releases. This development has solidified plugins as a key mechanism for fostering innovation within the ecosystem.

Development Environment

Supported Platforms

The Data Plane Development Kit (DPDK) supports a range of CPU architectures to enable deployment across diverse hardware environments. Primary support includes x86 processors from Intel and AMD in both 32-bit and 64-bit modes, providing broad compatibility for server and desktop systems. ARM architectures, particularly ARMv8 (aarch64), are fully supported, including platforms like the Ampere Altra family for cloud and networking applications. Additionally, PowerPC architectures, such as IBM POWER systems, offer 64-bit support tailored for enterprise-scale deployments.

DPDK's operating system compatibility centers on Linux as the primary platform, requiring kernel version 5.4 or later and glibc 2.7 or higher, with support for musl libc in distributions like Alpine Linux since version 21.05. FreeBSD is also supported through dedicated ports and compilation tools, enabling use in BSD-based environments. Windows support is experimental and limited to 64-bit systems, focusing on user-mode networking with specific kernel-mode drivers. Across these OSes, DPDK enables multi-process operation, allowing multiple instances to collaborate via primary and secondary process models for resource sharing.

The build toolchain for DPDK utilizes the Meson build system (version 0.57 or later) paired with Ninja, adopted since 2018, and supports compilers such as GCC 8.0 or higher, Clang 7 or later, and specialized toolchains like the IBM Advance Toolchain for PowerPC. Python 3.6 or newer is required for scripting, along with hugepage support for efficient memory allocation, which is essential for performance by reducing TLB misses. The Environment Abstraction Layer (EAL) provides the underlying abstraction for these platform specifics.

Hardware requirements emphasize network interface cards (NICs) compatible with DPDK's Poll Mode Drivers (PMDs), which bypass kernel networking for direct hardware access. Systems must support hugepages (2 MB or 1 GB), with NUMA awareness via libnuma for optimal performance on multi-socket configurations, allowing memory allocation per NUMA node to minimize cross-node access latency. While no strict minimum is mandated, production deployments typically require at least 4 GB of RAM to accommodate hugepage reservations and application memory pools effectively.

Prerequisites

The installation of DPDK on Linux requires a kernel version of at least 5.4 and glibc 2.7 or later, with HUGETLBFS enabled in the kernel configuration to support large memory pool allocations for packet buffers. For user-space device access via VFIO, the IOMMU must be enabled in the BIOS and kernel by adding parameters such as intel_iommu=on or amd_iommu=on to the GRUB command line (e.g., via GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on" in /etc/default/grub, followed by sudo update-grub and a reboot). VFIO kernel modules, included since Linux 3.6, should be loaded with sudo modprobe vfio-pci and optionally sudo modprobe vfio_iommu_type1 for IOMMU support.

Hugepages are essential for DPDK to avoid TLB pressure, with support for 2MB and 1GB page sizes. For runtime allocation of 2MB hugepages, use commands like echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages (adjusting the number based on memory needs). For boot-time setup, add hugepages=1024 to the kernel command line in GRUB for 2MB pages, or default_hugepagesz=1G hugepagesz=1G hugepages=4 for 1GB pages, then mount the hugetlbfs filesystem with sudo mkdir /mnt/huge; sudo mount -t hugetlbfs -o pagesize=1G none /mnt/huge and add it to /etc/fstab for persistence (e.g., nodev /mnt/huge hugetlbfs pagesize=1G 0 0). NUMA-aware allocation can be set per node, such as echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages. If an IOMMU is unavailable, VFIO can operate in no-IOMMU mode by setting the enable_unsafe_noiommu_mode=1 module parameter, though this forgoes the memory protection and device isolation that the IOMMU provides.
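Whether these prerequisites took effect can also be checked from inside an application. The sketch below is illustrative rather than part of the official getting-started guide: it initializes the EAL and reports whether hugepage-backed memory was found and which IOVA mode was selected.

```c
#include <stdlib.h>
#include <stdio.h>
#include <rte_eal.h>
#include <rte_debug.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE,
                 "EAL init failed -- check hugepage and driver setup\n");

    /* Confirm that the EAL found hugepage-backed memory. */
    if (!rte_eal_has_hugepages())
        printf("warning: running without hugepages (--no-huge?)\n");

    /* Report the selected IOVA mode: VA mode typically requires an IOMMU,
     * PA mode requires access to physical addresses. */
    enum rte_iova_mode mode = rte_eal_iova_mode();
    printf("IOVA mode: %s\n", mode == RTE_IOVA_VA ? "VA" :
                              mode == RTE_IOVA_PA ? "PA" : "DC");

    rte_eal_cleanup();
    return 0;
}
```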

Build Process

DPDK sources are obtained by cloning the repository from git://dpdk.org/dpdk (also mirrored on GitHub) or downloading a release tarball from the official site. The recommended build system is Meson with the Ninja backend; install Meson (version 0.57 or later) and Ninja via package managers (e.g., sudo apt install meson ninja-build on Ubuntu). To configure and build, navigate to the source directory and run meson setup build to create the build environment, optionally specifying -Dplatform=generic for a portable build (the default native build optimizes for the build machine's CPU). Additional options include -Dmax_lcores=8 to limit logical cores for smaller systems. Then, cd build and execute ninja to compile the libraries, drivers, and examples (use ninja -j$(nproc) to control parallelism). For system-wide installation, run sudo meson install, which places files in /usr/local by default, followed by sudo ldconfig to update the library cache.

Runtime Configuration

DPDK applications are initialized via the Environment Abstraction Layer (EAL), which accepts command-line arguments for core mapping, memory, and device binding. Core affinity is set with --lcores; for example, dpdk-testpmd --lcores '(1-2)@0,(3-4)@1' -n 4 runs logical cores 1-2 on physical CPU 0 and logical cores 3-4 on physical CPU 1. Memory configuration uses --huge-dir=/mnt/huge to specify the hugepage mount point, or --socket-mem 1024,0 for per-NUMA-node allocations (1GB on node 0). Logging can be enabled with --log-level=8 for debug output or --log-level lib.eal:debug for EAL-specific details.

Network interfaces must be bound to a DPDK-compatible kernel driver such as vfio-pci using the dpdk-devbind.py tool from the usertools directory. First, list devices with ./dpdk-devbind.py --status, then bind a NIC (e.g., at PCI address 0000:01:00.0) via ./dpdk-devbind.py --bind=vfio-pci 0000:01:00.0; unbinding from kernel drivers may require blacklisting (e.g., add blacklist igb to /etc/modprobe.d/blacklist.conf). For multi-process support, cooperating primary and secondary processes use the same --file-prefix so they map the same hugepage-backed files, while distinct prefixes keep independent DPDK instances isolated.
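The primary/secondary model can be illustrated with a minimal sketch (the pool name and sizes are arbitrary choices for illustration): the primary process creates a shared mbuf pool, and a secondary process launched with the same --file-prefix attaches to it by name.

```c
#include <stdlib.h>
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_debug.h>

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    struct rte_mempool *pool;
    if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
        /* The primary process creates shared objects... */
        pool = rte_pktmbuf_pool_create("SHARED_POOL", 4095, 250, 0,
                                       RTE_MBUF_DEFAULT_BUF_SIZE,
                                       rte_socket_id());
    } else {
        /* ...secondary processes attach to them by name. */
        pool = rte_mempool_lookup("SHARED_POOL");
    }
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "Shared mbuf pool unavailable\n");

    printf("%s process attached to pool '%s'\n",
           rte_eal_process_type() == RTE_PROC_PRIMARY ?
               "primary" : "secondary",
           pool->name);

    rte_eal_cleanup();
    return 0;
}
```

Start the first instance normally and the second with --proc-type=secondary and the same --file-prefix to exercise both branches.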

Testing

Validation of a DPDK setup typically involves running sample applications, such as the L2 Forwarding (l2fwd) sample application, which performs Layer 2 forwarding to test performance and configuration in real or virtualized environments. After building, execute it from the build directory with ./examples/dpdk-l2fwd -l 0-3 -n 4 -- -p 0x3 -q 1, where -l specifies the cores to use, -n the memory channels, -p the hexadecimal port mask (0x3 enables ports 0 and 1), and -q the number of queues per core; MAC address updating can be disabled with --no-mac-updating. Use a traffic generator such as pktgen-DPDK or TRex to send packets and verify forwarding. Successful runs confirm zero packet loss and the expected throughput, with debugging aided by raising the EAL log level.

Ecosystem and Community

Governance and Contributions

The Data Plane Development Kit (DPDK) has been hosted by the Linux Foundation since April 2017, providing neutral governance for its open-source development. Under this structure, DPDK is overseen by a Governing Board responsible for administrative, financial, marketing, legal, and licensing matters, chaired by Tim O'Driscoll of Intel, with representatives from member companies including Marvell, NXP, and others. A separate Technical Board handles technical decisions, such as approving new sub-projects, deprecating outdated ones, and resolving disputes; it is led by maintainer Thomas Monjalon of NVIDIA and includes representatives from companies such as Intel, Arm, Red Hat, NXP, and Marvell. DPDK follows a regular release cycle, with mainline versions issued three times per year (in March, July, and November) and long-term support (LTS) releases maintained for up to three years, all coordinated by project maintainers listed in the official MAINTAINERS file. These maintainers, drawn from the Technical Board and community, ensure stability through backported fixes and compatibility policies.

The DPDK community comprises 1,961 contributors from 214 organizations, with active participation tracked through the project's Git repository and community analytics. Corporate members provide funding and strategic input via the Governing Board, fostering collaboration on core libraries and drivers. Community engagement occurs through events like the annual DPDK Summit, with more than 10 such gatherings held since 2017 in locations including San Jose, Dublin, Bangalore, and Shanghai. Contributions to DPDK are submitted as patches via the dev@dpdk.org mailing list, where they undergo review tracked in Patchwork and automated testing through continuous integration (CI) systems to validate functionality across platforms. Guidelines emphasize adherence to coding standards, ABI stability, and documentation, with a focus on enhancements to poll mode drivers (PMDs) and key libraries for packet processing. Since joining the Linux Foundation, DPDK has seen diverse inputs from over 30 organizations, enabling milestones such as multiple summits and sustained growth in the contributor base.

DPDK integrates with Open vSwitch (OVS) through OVS-DPDK, enabling accelerated virtual switching by leveraging DPDK's poll-mode drivers for high-throughput packet processing in virtualized environments. This integration allows OVS to bypass the kernel network stack, routing packets directly from network interface cards to virtual machines with reduced latency. Similarly, DPDK powers the FD.io Vector Packet Processing (VPP) framework, where VPP uses DPDK as its primary data plane for efficient routing and forwarding, supporting features like layer-2 cross-connections and packet tracing in high-performance scenarios. In network functions virtualization (NFV) testing, DPDK contributed to the Open Platform for NFV (OPNFV) project (2014–2021) by providing benchmarks for virtual switch performance through projects like VSPERF, ensuring compliance and interoperability in NFV infrastructures.

For validation and testing, the DPDK Test Plans within the DPDK Test Suite (DTS) offer automated frameworks to verify ABI stability, unit tests for components like the Environment Abstraction Layer (EAL), and performance checks for poll-mode drivers. Traffic generation tools include pktgen-DPDK, a DPDK-powered application that generates wire-rate traffic with customizable packet sizes and rates for performance evaluation of network interfaces. Complementing this, TRex serves as a stateful and stateless traffic generator built on DPDK, supporting stateful (STF) and advanced stateful (ASTF) modes to emulate realistic L3-L7 traffic patterns, including TCP sessions, for comprehensive network testing.
Ecosystem expansions include DPDK's support in Kubernetes via the Multus CNI plugin, which acts as a meta-plugin to attach multiple network interfaces to pods, enabling DPDK-accelerated secondary networks alongside standard pod networking. Additionally, DPDK integrates with eBPF/XDP to form hybrid kernel-user space architectures, where eBPF handles programmable kernel-side processing while DPDK manages user-space data paths, optimizing scenarios like virtual network functions without full kernel bypass. Adoption trends highlight DPDK's role in 5G telecommunications stacks, such as free5GC, where it accelerates the user plane function (UPF) via integrations like VPP-UPF with DPDK for low-latency packet processing in open-source cores. DPDK also maintains strong compatibility with Single Root I/O Virtualization (SR-IOV), allowing virtual functions of Ethernet controllers to be partitioned and directly assigned to virtual machines for hardware-accelerated I/O sharing in NFV and cloud environments.

Applications and Use Cases

Performance Optimizations

The Data Plane Development Kit (DPDK) employs several processing models to optimize packet handling for high-throughput and low-latency applications. The run-to-completion model assigns a single core to fully process each packet from reception to transmission, minimizing inter-core communication and overhead, which is particularly effective for simple forwarding tasks on multi-core systems. In contrast, the pipeline model divides packet processing into sequential stages, with each stage executed by a dedicated core and inter-stage data transfer facilitated by lockless ring queues, enabling parallelization and scalability for complex, multi-stage workflows. The eventdev library introduces asynchronous event handling, allowing dynamic scheduling of packets as events across cores, which supports both run-to-completion and pipeline paradigms while providing load balancing and flexibility for irregular workloads.

Core optimizations in DPDK focus on eliminating traditional bottlenecks to achieve wire-speed throughput. Poll mode drivers (PMDs) continuously poll network interface card (NIC) queues instead of relying on interrupts, enabling packet I/O entirely in user space and reducing context switches, which is essential for maintaining high packet rates. Hugepage allocation mitigates translation lookaside buffer (TLB) misses by using larger memory pages (typically 2MB or 1GB), significantly improving virtual-to-physical address translation efficiency. NUMA-aware memory allocation ensures that buffers and data structures are placed on the same non-uniform memory access (NUMA) node as the processing core, minimizing the remote memory access latencies that can degrade performance in multi-socket systems. Additionally, burst processing via mbuf structures groups packets into bursts (e.g., up to 32 packets per operation), amortizing per-packet overheads like descriptor fetches and cache invalidations.

These techniques yield substantial performance gains, with DPDK applications routinely achieving over 100 million packets per second (Mpps) aggregate throughput on multi-port 10GbE configurations, such as 160 Mpps across eight 10GbE ports (using four dual-port NICs) on dual-socket servers. Latency is reduced to under 10 microseconds in optimized setups, for instance averaging 3-10 μs for interrupt-driven equivalents but lower in poll mode due to eliminated overheads. Receive Side Scaling (RSS) enhances CPU utilization by hashing packet flows to distribute them across multiple queues and cores, while flow isolation via dedicated queues prevents contention and ensures predictable processing times.

Tuning mechanisms further refine these optimizations for specific hardware. Core pinning, configured via the EAL option --lcores, binds worker threads to logical cores, reducing OS scheduler-induced migrations and improving cache locality. Vectorized instructions, such as AVX2, AVX-512, and NEON, are integrated into PMDs (e.g., for vectorized packet receive in the mlx5 driver), leveraging SIMD to process multiple packets or headers simultaneously and boosting throughput by up to 2x in vector-enabled paths. Power consumption is addressed through service cores, which offload auxiliary tasks like timer handling or crypto operations from data-plane cores, allowing the latter to enter C-states for power savings without compromising responsiveness during idle periods.
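As one concrete example of these mechanisms, the sketch below (illustrative; the hash types and queue counts are arbitrary, and a given driver may support only a subset of the requested hash functions) configures a port for RSS so the NIC spreads flows across multiple Rx queues, one per worker core:

```c
#include <rte_ethdev.h>

/* Enable RSS so that the NIC hashes flows across multiple Rx queues.
 * Per-queue setup and rte_eth_dev_start() are omitted for brevity. */
static int configure_rss(uint16_t port_id, uint16_t nb_rx_queues)
{
    struct rte_eth_conf port_conf = {
        .rxmode = {
            .mq_mode = RTE_ETH_MQ_RX_RSS,   /* hash-based queue selection */
        },
        .rx_adv_conf = {
            .rss_conf = {
                .rss_key = NULL,            /* use the driver's default key */
                .rss_hf  = RTE_ETH_RSS_IP |
                           RTE_ETH_RSS_TCP |
                           RTE_ETH_RSS_UDP,
            },
        },
    };

    /* One Tx queue here for simplicity; Rx queues match the worker count. */
    return rte_eth_dev_configure(port_id, nb_rx_queues, 1, &port_conf);
}
```

In practice the requested rss_hf flags are usually masked against the capabilities reported by rte_eth_dev_info_get() before configuring the device.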

Real-World Deployments

The Data Plane Development Kit (DPDK) has been widely adopted across various industry sectors for high-performance packet processing. In data centers, DPDK powers load balancers and routing applications by enabling direct access to network interfaces, reducing latency and increasing throughput for virtualized environments. In the telecommunications sector, DPDK supports 5G virtual radio access networks (vRAN), as demonstrated by Nokia's collaboration with Intel and Verizon to develop cloud-native RAN architectures for efficient packet handling in disaggregated baseband processing. Ericsson employs DPDK for data plane acceleration in cloud-native telco applications on platforms like VMware Telco Cloud Infrastructure, achieving high performance for production-scale communication service provider networks. At the network edge, DPDK facilitates firewall and intrusion detection applications by providing low-overhead packet I/O, essential for real-time threat detection in edge scenarios. In cloud environments, DPDK enhances network performance on platforms like Google Cloud Platform (GCP), where it supports high-speed forwarding in virtualized network functions; for instance, FD.io Vector Packet Processing (VPP), built on DPDK, achieves over 100 million packets per second on GCP instances with minimal loss.

Notable projects illustrate DPDK's practical impact. Ericsson uses DPDK to optimize packet processing in telco clouds, enabling scalable deployment of network functions while addressing challenges in transitioning from proprietary hardware. For DDoS mitigation, the Gatekeeper project integrates DPDK to provide deployable defenses against volumetric attacks, allowing operators to filter traffic at line rate on commodity hardware.

Case studies highlight DPDK's performance benefits in network functions virtualization (NFV). In virtualized environments, Open vSwitch with DPDK (OVS-DPDK) integration delivers line-rate throughput for virtual network functions, with benchmarks showing up to 12x aggregate switching improvements compared to kernel-based alternatives, enabling efficient NFV workloads. VPP, leveraging DPDK's poll-mode drivers, supports high-performance packet processing in cable access networks, contributing to DOCSIS-compliant deployments by handling vectorized packet bursts for aggregation.

As of November 2025, DPDK continues to expand in cloud-native environments, including Kubernetes-native software-defined networking (SDN) through projects like dpservice, which enables high-performance virtual switching and routing integrated with DPDK for scalable containerized deployments. Additionally, advancements in deep packet inspection (DPI) leverage DPDK for analyzing encrypted traffic in security applications, supporting emerging needs in privacy-preserving intelligence. Deployments in 2025 continue to address scaling challenges, particularly for 400GbE interfaces, where DPDK enables wire-speed monitoring and processing using FPGA-accelerated SmartNICs to manage high-bandwidth traffic in data centers. Adaptations include hybrid models combining DPDK with kernel-level tools like XDP, allowing userspace acceleration for performance-critical paths while retaining kernel integration for management tasks, thus balancing speed and ecosystem compatibility.
