
Express Data Path

eXpress Data Path (XDP) is a high-performance, programmable networking framework integrated into the Linux kernel that enables fast packet processing directly within the kernel's network driver context, allowing incoming network packets to be handled at the earliest possible stage without requiring kernel bypass techniques. Developed as part of the IO Visor Project, XDP leverages eBPF (extended Berkeley Packet Filter) programs to inspect, modify, forward, or drop packets, providing a safe and flexible environment for custom data plane operations while maintaining compatibility with the existing networking stack. XDP was first introduced in 2016 through contributions from developers at Red Hat and Facebook and merged into the mainline Linux kernel in version 4.8, with its design later formalized in a 2018 research paper presented at the ACM CoNEXT conference. The framework executes eBPF bytecode, compiled from high-level languages like C, early in the receive (RX) path of network interface controllers (NICs), enabling decisions such as packet rejection before memory allocation or stack traversal, which minimizes overhead and enhances security by avoiding userspace involvement for common tasks. Key actions supported include XDP_DROP for discarding packets, XDP_PASS for forwarding to the kernel stack, XDP_TX for immediate transmission, and XDP_REDIRECT for rerouting to other interfaces or sockets, all verified at load time via static analysis to prevent kernel crashes. In terms of performance, XDP achieves up to 24 million packets per second (Mpps) per core for simple drop workloads, outperforming traditional kernel paths and even some userspace solutions by reducing latency and CPU utilization in high-throughput scenarios. It supports advanced features like stateful processing through eBPF maps for hash tables and counters, as well as integration with AF_XDP sockets for user-space packet processing, making it suitable for applications such as DDoS mitigation, load balancing, and inline firewalls. Since its inception, XDP has been adopted in production environments by organizations such as Facebook and Cloudflare, with ongoing enhancements in recent kernels expanding offload support and metadata handling for even greater performance and flexibility.

Overview

Definition

Express Data Path (XDP) is an eBPF-based technology designed for high-performance packet processing within the Linux kernel. It integrates directly into the network interface card (NIC) driver at the earliest receive (RX) point, allowing eBPF programs to execute on incoming packets before they proceed further into the networking stack. The core purpose of XDP is to enable programmable decisions on incoming packets prior to kernel memory allocation or involvement of the full networking stack, thereby minimizing overhead and maximizing throughput. This approach supports processing rates of up to 26 million packets per second per core on commodity hardware. In contrast to traditional networking paths, XDP bypasses much of the operating system stack, for instance avoiding the initial allocation of socket buffer (skb) structures, to achieve lower latency and reduced CPU utilization. Originally developed as a GPL-licensed component of the Linux kernel, XDP received a Windows port in 2022, released under the MIT license. As of 2025, developments like XDP2 are being proposed to further extend its capabilities for modern high-performance networking.

Advantages

XDP provides significant performance benefits by enabling line-rate packet processing directly in the network driver, achieving throughputs exceeding 100 Gbps on multi-core systems while maintaining low latency. This is accomplished by executing eBPF programs at the earliest possible stage in the receive path, before the creation of socket buffer (skb) structures or the invocation of generic receive offload (GRO) and generic segmentation offload (GSO) layers, which reduces processing overhead for high-volume traffic scenarios such as DDoS mitigation and traffic filtering. For instance, simple packet drop operations can reach up to 20 million packets per second (Mpps) per core, far surpassing traditional methods. In terms of resource efficiency, XDP minimizes CPU utilization by allowing early decisions on packet fate, such as dropping invalid packets, thereby freeing resources for other tasks and avoiding unnecessary memory allocations or context switches deeper in the networking stack. This approach supports scalable deployment across multiple cores without the need for kernel bypass techniques like DPDK, while retaining the security and interoperability of the kernel networking subsystem. Additionally, XDP's potential for zero-copy operation further reduces memory bandwidth consumption, enhancing overall system efficiency in bandwidth-intensive environments. The flexibility of XDP stems from its integration with eBPF, enabling programmable custom logic for packet processing without requiring kernel modifications or recompilation, which facilitates rapid adaptation to evolving requirements. Compared to conventional tools like iptables or nftables, XDP can be significantly faster for basic filtering tasks, with speedups of up to 5 times, owing to its position in the data path and its avoidance of higher-layer overheads. Furthermore, XDP improves observability through integration with tools like bpftrace, allowing efficient tracing and monitoring of events in production environments.

History and Development

Origins

The development of Express Data Path (XDP) was initiated in 2016 by Jesper Dangaard Brouer, a principal engineer at Red Hat, in response to the growing demands for high-performance networking in environments where traditional Linux networking stacks struggled with speeds exceeding 10 Gbps. Traditional packet processing, including socket buffer (SKB) allocation and memory management, created significant bottlenecks under high packet rates, often limiting throughput to below line-rate performance for multi-gigabit interfaces. The project aimed to enable programmable, kernel-integrated packet processing that could rival user-space solutions like DPDK while maintaining compatibility with the existing networking stack. Key contributions came from the open-source Linux kernel community, with significant input from engineers at companies such as Netronome, Mellanox, and Facebook, who helped refine the design through collaborative patch reviews and testing. Early efforts built upon the eBPF (extended Berkeley Packet Filter) framework, which had advanced in 2014 to support more complex in-kernel programs, allowing XDP to extend programmable packet processing beyond existing hooks like traffic control (tc). Initial prototypes focused on integrating XDP hooks into network drivers, with testing conducted on Netronome SmartNICs to evaluate offloading capabilities and on Mellanox ConnectX-3 Pro adapters (supporting 10/40 Gbps Ethernet) to demonstrate drop rates of up to 20 million packets per second on a single core. These prototypes validated the feasibility of early packet inspection and processing directly in the driver receive path, minimizing overhead from higher-layer components.

Milestones

XDP was initially merged into the Linux kernel in version 4.8 in 2016, introducing basic support for programmable packet processing at the driver level, with initial driver support in the Mellanox mlx4 Ethernet driver. In 2018, Linux kernel 4.18 added AF_XDP, a socket address family enabling efficient user-space access to XDP-processed packets and facilitating fast data transfer between kernel and user space. Microsoft ported XDP to the Windows kernel in 2022, releasing an open-source implementation that integrated with the MsQuic library to accelerate QUIC protocol processing by bypassing the traditional network stack. Between 2023 and 2024, XDP driver support expanded to additional Intel Ethernet controllers, such as the E810 series, while Netronome hardware offloading achieved greater stability through kernel enhancements for reliable eBPF program execution on smart NICs. In 2024 and 2025, updates addressed critical issues, including a fix for race conditions in the AF_XDP receive path identified as CVE-2025-37920, where improper synchronization in shared umem mode could lead to concurrent access by multiple CPU cores; this was resolved by relocating the rx_lock to the buffer pool structure. The ecosystem around XDP also grew, with the introduction of uXDP as a userspace runtime for executing verified XDP programs outside the kernel while maintaining compatibility, and workarounds enabling XDP-like processing for egress traffic via loopholes in the kernel's packet-direction handling. XDP's core implementation in Linux remains under the GPL license, ensuring integration with the kernel's licensing requirements, whereas the Windows port adopts the more permissive MIT license to broaden adoption across platforms.

Core Functionality

Data Path Mechanics

The eXpress Data Path (XDP) hook is integrated at the earliest point in the receive (RX) path within the Linux kernel's network device driver, immediately following the network interface card (NIC)'s direct memory access (DMA) transfer of packet data into kernel memory buffers from the RX descriptor ring, but prior to any socket buffer (skb) allocation or engagement with the broader network stack. This placement minimizes latency by allowing programmable processing before traditional kernel overheads. In cases where a driver lacks native support, XDP falls back to a generic mode (also known as SKB mode) that integrates into the kernel's NAPI processing after skb allocation, resulting in slightly higher overhead but ensuring compatibility. Upon transfer, the raw packet data resides in a driver-owned DMA buffer, where an eBPF program attached to the XDP hook executes directly on it, utilizing fields from the xdp_md context, such as the packet data pointers, ingress interface index, and RX queue ID, for contextual analysis. This flow enables rapid decision-making on packet disposition without propagating the frame through the full network stack, thereby reducing CPU cycles and memory usage for high-throughput scenarios. XDP supports three execution modes: native mode, which embeds the hook directly in the driver for optimal performance on supported hardware; generic mode, the software fallback described above, which integrates into the standard RX path with slightly higher overhead; and offload mode, where the program is transferred to the NIC for hardware-accelerated execution, bypassing the host CPU entirely. To enhance efficiency, XDP leverages the kernel's page pool API for buffer management, allocating and recycling page-sized pools dedicated to XDP frames and associated skbs, which avoids frequent page allocations and reduces cache misses in high-rate environments. This approach supports multicast traffic handling, where replicated packets can be processed across relevant queues, and integrates with Receive Side Scaling (RSS) to distribute ingress load via hardware hashing to multiple RX queues for parallel execution. Traditionally limited to ingress processing on the RX path, XDP saw 2025 advancements enabling egress support through eBPF-based techniques that manipulate kernel packet direction heuristics, extending its applicability to outbound traffic without native TX hooks.
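
For reference, the context fields mentioned above correspond to the xdp_md structure exposed to XDP programs through the kernel's UAPI headers; the sketch below reproduces its layout as found in recent kernels (older kernels lack the later fields such as egress_ifindex).

```c
/* As defined in <linux/bpf.h> (UAPI); the verifier rewrites accesses to these
 * fields into accesses on the in-kernel xdp_buff. */
struct xdp_md {
    __u32 data;            /* start of packet data */
    __u32 data_end;        /* end of packet data (exclusive) */
    __u32 data_meta;       /* start of the metadata area in front of the packet */
    __u32 ingress_ifindex; /* ifindex of the receiving device */
    __u32 rx_queue_index;  /* hardware RX queue the packet arrived on */
    __u32 egress_ifindex;  /* set for programs run from a device map */
};
```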

Actions

In eXpress Data Path (XDP), the possible decisions an XDP program can make on a received packet are expressed by returning one of the values from the enum xdp_action, which the driver uses to execute the corresponding handling without further program involvement. These actions enable efficient packet processing at the driver level, allowing for high-performance decisions such as dropping unwanted traffic or redirecting packets to alternative paths. XDP_DROP instructs the driver to immediately discard the packet, freeing the underlying buffer directly in the driver without allocating kernel data structures like sk_buff or passing the packet to the network stack. This action is particularly effective for early-stage filtering, such as mitigating DDoS attacks, as it minimizes latency and CPU consumption compared to traditional stack-based dropping. XDP_PASS forwards the packet to the standard networking stack for further processing, such as routing, firewalling, or delivery to user space. It allows the XDP program to inspect or minimally modify the packet before normal handling resumes, preserving compatibility with existing network functionality. XDP_TX causes the driver to transmit the packet back out through the same network interface it arrived on, often used for reflecting packets or simple redirects without changing the egress device. This action reuses the original packet buffer for transmission, enabling low-overhead operations like packet mirroring or bouncing invalid ingress traffic. XDP_REDIRECT redirects the packet to a different network interface, CPU queue, or AF_XDP socket, typically invoked via the eBPF helper bpf_redirect() or map-based variants like bpf_redirect_map(). It supports advanced forwarding scenarios, such as load balancing across devices, by handing off the frame to another device or processing context. XDP_ABORTED, with a value of 0, signals an error or abort condition in the XDP program, resulting in the packet being dropped and the xdp:xdp_exception tracepoint being raised for diagnostics (unknown return values are additionally reported via bpf_warn_invalid_xdp_action()). This action is primarily intended for debugging or testing purposes and is rarely returned intentionally in production environments. The driver interprets the returned enum xdp_action value and performs the specified operation immediately after program execution, ensuring minimal overhead in the data path. Statistics for these actions, including counts of drops, passes, transmissions, and redirects, are exposed by supported network drivers through the ethtool utility, allowing administrators to monitor XDP performance and efficacy.
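
The action values themselves are defined in the kernel's UAPI header; the listing below reproduces the enum with brief comments summarizing the handling described above.

```c
/* From <linux/bpf.h> (UAPI). */
enum xdp_action {
    XDP_ABORTED = 0, /* program error: drop the packet and raise xdp:xdp_exception */
    XDP_DROP,        /* discard the packet in the driver and recycle the buffer */
    XDP_PASS,        /* hand the packet to the normal kernel network stack */
    XDP_TX,          /* bounce the packet back out of the receiving interface */
    XDP_REDIRECT,    /* forward to another device, CPU, or AF_XDP socket */
};
```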

eBPF Integration

Program Development

eBPF programs for XDP are written in a restricted subset of C, leveraging kernel headers such as <linux/bpf.h> and <bpf/bpf_helpers.h> to access the necessary types and helper functions. Developers define the main program function and annotate it with the SEC("xdp") macro to place it in the appropriate ELF section, ensuring it is recognized as an XDP program during loading. The function signature takes a struct xdp_md *ctx parameter, providing access to packet metadata such as ingress_ifindex for the incoming interface index. Programs must return an enum xdp_action value, such as XDP_DROP to discard packets or XDP_PASS to continue processing. To compile the C source into an ELF object file containing eBPF bytecode, developers use LLVM/Clang with the BPF target architecture. The command clang -O2 -target bpf -c program.c -o program.o generates the object file, enabling features like bounded loops and helper function inlining supported by the LLVM BPF backend. This process ensures the bytecode adheres to the eBPF instruction constraints verified by the kernel. Loading the program into the kernel utilizes the libbpf library, which provides the bpf_prog_load() function with BPF_PROG_TYPE_XDP as the program type. Once loaded, the program file descriptor is attached to a network device using bpf_set_link_xdp_fd() on the netdevice or, in newer kernels, bpf_link_create() with BPF_LINK_TYPE_XDP. Alternatively, the iproute2 suite offers a command-line interface for attachment, ip link set dev <interface> xdp obj program.o sec xdp, simplifying deployment without custom userspace code. For inspection and management, bpftool (distributed with the kernel sources) allows querying loaded programs via bpftool prog show and attached XDP programs with bpftool net show. A representative example is a simple XDP program that drops packets with non-IPv4 Ethernet types:
```c
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int xdp_drop_non_ip(struct xdp_md *ctx) {
    void *data_end = (void *)(long)ctx->data_end;
    void *data = (void *)(long)ctx->data;
    struct ethhdr *eth = data;

    /* Bounds check: the verifier rejects any access past data_end. */
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;

    /* Drop everything that is not IPv4; bpf_htons() handles byte order. */
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_DROP;

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```
This program accesses the Ethernet header via the ctx pointers, performs bounds checking to prevent verifier rejection, and selectively drops non-IP traffic. Metadata like ctx->ingress_ifindex can be used for interface-specific logic, such as conditional actions based on the receiving device. Debugging XDP programs involves kernel-side tracing with bpf_trace_printk() for logging messages to the kernel trace buffer, viewable via /sys/kernel/debug/tracing/trace_pipe, though it is too slow for production use due to its performance overhead. For more scalable telemetry, developers populate userspace-accessible eBPF maps with counters or statistics, which can be read and aggregated from userspace applications. The kernel verifier decides whether a program is accepted by statically analyzing its bytecode for safety, so development typically iterates through compilation and loading to resolve verification failures.
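
As a complement to the iproute2 command shown above, the following userspace sketch illustrates programmatic attachment with libbpf's high-level API (bpf_object__open_file(), bpf_program__attach_xdp()); it assumes the object file is named program.o and contains the xdp_drop_non_ip program from the example, and it omits detailed error reporting for brevity.

```c
#include <bpf/libbpf.h>
#include <net/if.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    struct bpf_object *obj;
    struct bpf_program *prog;
    struct bpf_link *link;
    int ifindex;

    if (argc != 2)
        return 1;

    ifindex = if_nametoindex(argv[1]);              /* interface to attach to */
    obj = bpf_object__open_file("program.o", NULL); /* parse the ELF object */
    if (!obj || bpf_object__load(obj))              /* verify + load into kernel */
        return 1;

    prog = bpf_object__find_program_by_name(obj, "xdp_drop_non_ip");
    link = bpf_program__attach_xdp(prog, ifindex);  /* BPF_LINK_TYPE_XDP attach */
    if (!link)
        return 1;

    printf("XDP program attached to %s; press Enter to detach\n", argv[1]);
    getchar();

    bpf_link__destroy(link);                        /* detach on exit */
    bpf_object__close(obj);
    return 0;
}
```

Attaching through a bpf_link, rather than the older netlink-based bpf_set_link_xdp_fd(), has the advantage that the program is detached automatically once the link's file descriptor is released, unless the link is explicitly pinned.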

Safety Mechanisms

The eBPF verifier serves as a critical in-kernel static analyzer for XDP programs, simulating their execution paths to ensure safety before loading. It performs exhaustive checks for potential issues such as unreachable instructions, out-of-bounds memory accesses relative to packet boundaries (e.g., ensuring offsets do not exceed the data_end pointer in XDP contexts), invalid use of helper functions, and violations of the kernel's security model. If any unsafe behavior is detected, the verifier rejects the program, preventing it from being loaded and executed, thereby avoiding kernel crashes or exploits. This verification process is mandatory for all eBPF program types, including XDP, and operates on the program's bytecode without adding runtime overhead during packet processing. To enforce bounded execution, the verifier prohibits unbounded loops in eBPF programs, a restriction that originated with early eBPF designs to guarantee termination; since Linux kernel 5.3, bounded loops are permitted, but only if the verifier can prove they will not exceed resource limits. A key safeguard is the fixed instruction limit, capped at 1 million verified instructions per program (raised from 4,096 in Linux kernel 5.2), which prevents excessive computation and potential denial-of-service scenarios. Additionally, map accesses, such as those to eBPF maps used for state in XDP filtering, are validated at load time, ensuring pointers remain within allocated bounds and avoiding arbitrary memory corruption. These mechanisms collectively ensure that XDP programs remain deterministic and resource-bounded, maintaining kernel stability even under high packet rates. Following successful verification of the bytecode, the kernel may optionally apply just-in-time (JIT) compilation to translate it into native machine code for improved execution performance. However, the verifier's safety checks operate solely on the portable bytecode, independent of the JIT process, ensuring that optimizations do not introduce vulnerabilities. For error handling, XDP programs must return one of the predefined actions (e.g., XDP_PASS to continue processing, XDP_DROP to discard the packet, or XDP_REDIRECT for forwarding), which the driver interprets to dictate packet fate; in cases of unrecoverable errors, a program returns XDP_ABORTED, triggering the xdp:xdp_exception tracepoint for logging while the packet is dropped. In recent developments through 2025, the verifier has seen enhancements to support more complex operations, including improved precision for packet redirects (e.g., via XDP_REDIRECT with tail calls) and metadata handling in XDP programs, where additional packet metadata can be safely accessed without bounds violations. These updates, such as proof-based refinement mechanisms, allow the verifier to handle intricate control flows more accurately while rejecting fewer valid programs, building on ongoing efforts to balance safety and expressiveness in high-performance networking scenarios.
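
To illustrate these constraints in practice, the sketch below (a hypothetical VLAN-skipping program written for illustration, not taken from the kernel sources) shows the two patterns the verifier insists on for XDP: an explicit bounds check against data_end before every packet access, and a loop whose trip count is bounded by a compile-time constant so that termination can be proven on kernels 5.3 and later.

```c
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

#define MAX_VLAN_DEPTH 4

/* Not exported by the UAPI headers, so defined locally. */
struct vlan_hdr {
    __be16 h_vlan_TCI;
    __be16 h_vlan_encapsulated_proto;
};

SEC("xdp")
int xdp_drop_non_ip_vlan_aware(struct xdp_md *ctx)
{
    void *data_end = (void *)(long)ctx->data_end;
    void *data = (void *)(long)ctx->data;
    struct ethhdr *eth = data;
    struct vlan_hdr *vlh;
    __u16 proto;

    /* Every packet access must be preceded by a check against data_end. */
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;

    proto = eth->h_proto;
    vlh = (void *)(eth + 1);

    /* Bounded loop: the constant trip count plus per-iteration bounds checks
     * let the verifier prove both termination and memory safety. */
    for (int i = 0; i < MAX_VLAN_DEPTH; i++) {
        if (proto != bpf_htons(ETH_P_8021Q) && proto != bpf_htons(ETH_P_8021AD))
            break;
        if ((void *)(vlh + 1) > data_end)
            return XDP_PASS;
        proto = vlh->h_vlan_encapsulated_proto;
        vlh++;
    }

    return proto == bpf_htons(ETH_P_IP) ? XDP_PASS : XDP_DROP;
}

char _license[] SEC("license") = "GPL";
```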

User-Space Access

AF_XDP Sockets

AF_XDP sockets, introduced in Linux kernel version 4.18, provide a specialized address family (PF_XDP) designed for high-performance, zero-copy-capable operation, enabling direct packet transfer from kernel-space XDP programs to user-space applications and bypassing much of the traditional networking stack. This raw socket type facilitates efficient packet processing by allowing XDP programs to redirect ingress traffic straight to user-space buffers, supporting applications that require low-latency, high-throughput networking. To create an AF_XDP socket, applications invoke the standard socket syscall with the address family AF_XDP, socket type SOCK_RAW, and protocol 0: fd = socket(AF_XDP, SOCK_RAW, 0);. Following creation, the socket must be bound to a specific network interface and receive queue ID using the bind() syscall, specifying parameters such as the interface index and queue identifier, with socket options configured via setsockopt() for features like shared user memory (UMEM) registration. This binding associates the socket with a particular hardware receive queue, enabling targeted packet reception from XDP-processed traffic on that queue. The core of AF_XDP's efficiency lies in its user memory (UMEM) model, where user space allocates a contiguous memory region and registers it with the kernel via setsockopt() at the SOL_XDP level. This UMEM is divided into fixed-size frames, and communication between kernel and user space occurs through four lock-free ring buffers: the RX ring for incoming packet descriptors from the kernel to user space, the TX ring for outgoing descriptors from user space to the kernel, the FILL ring for user space to supply empty frames to the kernel, and the COMPLETION ring for the kernel to notify user space of transmitted frames. Descriptors in these rings reference UMEM frame addresses and lengths, allowing shared access without data copying in optimal configurations. AF_XDP supports two operational modes for packet handling: copy mode, which relies on traditional sk_buff-based data transfer and is compatible with all XDP-capable drivers, and zero-copy mode, which grants the driver direct DMA access to UMEM pages for ingress and egress, minimizing overhead but requiring native driver support (XDP_FLAGS_DRV_MODE) along with explicit zero-copy support in the driver. Upon binding, the kernel attempts zero-copy if available; otherwise, it defaults to copy mode. Driver support for zero-copy has expanded in recent kernels, enhancing performance on supported hardware. As of 2025, AF_XDP has seen integrations aimed at broader ecosystem compatibility, including the AF_XDP poll-mode driver in the DPDK framework, which allows DPDK applications to use AF_XDP sockets for raw packet I/O on supported NICs as a migration path from kernel-bypass libraries. Additionally, experimental implementations in DNS servers such as NSD utilize AF_XDP to handle packets directly in user space, with experimental tests showing roughly a 1.7x improvement in query processing rates over traditional socket handling with minimal CPU overhead.
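
The sequence described above can be sketched using only raw syscalls and the constants from <linux/if_xdp.h>; in practice most applications use the xsk helpers from libxdp (formerly in libbpf) instead. The interface name eth0 and the buffer sizes below are arbitrary placeholders, and the ring mmap() and descriptor handling steps are omitted.

```c
#include <linux/if_xdp.h>
#include <net/if.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef AF_XDP
#define AF_XDP 44            /* older libc headers may not define it */
#endif

#define NUM_FRAMES 4096
#define FRAME_SIZE 2048      /* one UMEM chunk per packet buffer */

int main(void)
{
    int fd = socket(AF_XDP, SOCK_RAW, 0);            /* PF_XDP raw socket */

    /* Allocate and register the user memory area (UMEM), split into frames. */
    void *umem = mmap(NULL, (size_t)NUM_FRAMES * FRAME_SIZE,
                      PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    struct xdp_umem_reg reg = {
        .addr = (__u64)(unsigned long)umem,
        .len = (__u64)NUM_FRAMES * FRAME_SIZE,
        .chunk_size = FRAME_SIZE,
        .headroom = 0,
    };
    setsockopt(fd, SOL_XDP, XDP_UMEM_REG, &reg, sizeof(reg));

    /* Size the four rings; each must then be mmap()ed before use. */
    int ring_size = 2048;
    setsockopt(fd, SOL_XDP, XDP_UMEM_FILL_RING, &ring_size, sizeof(ring_size));
    setsockopt(fd, SOL_XDP, XDP_UMEM_COMPLETION_RING, &ring_size, sizeof(ring_size));
    setsockopt(fd, SOL_XDP, XDP_RX_RING, &ring_size, sizeof(ring_size));
    setsockopt(fd, SOL_XDP, XDP_TX_RING, &ring_size, sizeof(ring_size));

    /* Bind to queue 0 of the interface; with no mode flag the kernel tries
     * zero-copy first and falls back to copy mode if unsupported. */
    struct sockaddr_xdp addr = {
        .sxdp_family = AF_XDP,
        .sxdp_ifindex = if_nametoindex("eth0"),      /* placeholder interface */
        .sxdp_queue_id = 0,
        .sxdp_flags = 0,
    };
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    /* ... mmap the rings, post frames to the FILL ring, then poll RX ... */
    close(fd);
    return 0;
}
```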

Zero-Copy Features

AF_XDP enables zero-copy packet handling through a user-space memory region known as UMEM, which consists of a contiguous block of user-allocated memory divided into fixed-size frames, typically 2 KB or 4 KB each, that hold packet data without intermediate copies between kernel and user space. The driver writes packet descriptors directly into ring buffers associated with this UMEM, allowing the network interface card (NIC) to DMA packet data straight into the frames, while the user-space application accesses the data via these descriptors. This structure supports multiple AF_XDP sockets sharing the same UMEM for efficient resource utilization in multi-queue setups. Ring buffer operations in zero-copy mode rely on four memory-mapped rings associated with the UMEM: the fill ring, where user space provides available frames for incoming packets; the RX ring, where the kernel enqueues receive descriptors pointing to filled frames; the TX ring, for user-submitted transmit descriptors; and the completion ring, where the kernel signals TX completions. User space polls the head and tail pointers of these single-producer/single-consumer rings to synchronize access, minimizing system calls through techniques like busy-polling or need-wakeup notifications, while the kernel updates them atomically to reflect buffer states. This design ensures seamless data flow without memcpy operations, as both kernel and user space operate on the same underlying frames. By eliminating the overhead of data copying between kernel and user space, AF_XDP achieves significant performance improvements, such as line-rate receive-only processing on 40 Gbps NICs for user-space applications like packet capture. These gains stem from reduced CPU cycles spent on memory transfers and fewer context switches, enabling applications to handle high-throughput traffic more efficiently than traditional socket interfaces. To request zero-copy mode explicitly, applications set the XDP_ZEROCOPY flag when binding the socket via the bind() syscall, which requires a compatible driver supporting direct UMEM access, such as Intel's i40e for 40 Gbps Ethernet; if no mode flag is specified and zero-copy is unavailable, the kernel falls back to copy mode using SKB buffers. Driver support for zero-copy is provided in native XDP_DRV mode, in contrast with the generic XDP_SKB mode, which always copies data. In 2025, advancements addressed reliability issues, including a fix for race conditions in the generic XDP receive path under shared UMEM scenarios (CVE-2025-37920), where improper locking could lead to races across multiple sockets; this was resolved by relocating the rx_lock to the buffer pool level in kernel versions after 6.9. Performance studies on mixed-mode deployments, combining zero-copy and copy-based sockets on programmable NICs, highlighted the benefits but noted potential bottlenecks from uneven buffer allocation, informing optimizations for hybrid environments.
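
A minimal sketch of requesting zero-copy explicitly, and retrying in copy mode when the driver reports that it cannot provide it, might look like the following; it assumes the socket fd and the interface/queue values come from a setup like the one shown in the previous subsection, and the EOPNOTSUPP check reflects the error commonly returned by drivers without zero-copy support.

```c
#include <errno.h>
#include <linux/if_xdp.h>
#include <sys/socket.h>

#ifndef AF_XDP
#define AF_XDP 44
#endif

/* Bind an AF_XDP socket, preferring zero-copy and falling back to copy mode. */
static int bind_xsk(int fd, unsigned int ifindex, unsigned int queue_id)
{
    struct sockaddr_xdp addr = {
        .sxdp_family = AF_XDP,
        .sxdp_ifindex = ifindex,
        .sxdp_queue_id = queue_id,
        .sxdp_flags = XDP_ZEROCOPY,              /* require driver zero-copy */
    };

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        return 0;                                /* zero-copy path active */

    if (errno == EOPNOTSUPP) {                   /* driver lacks zero-copy */
        addr.sxdp_flags = XDP_COPY;              /* explicit copy-mode fallback */
        return bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    }
    return -1;
}
```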

Hardware Support

Offloading Modes

XDP supports hardware offloading through specific modes that enable execution of eBPF programs directly on the network interface card (NIC), bypassing the host CPU for packet processing. The primary mode is specified by the XDP_FLAGS_HW_MODE flag, which attaches the eBPF program for full offload to the NIC when both driver and hardware support this capability. As a fallback when hardware offload is unavailable or unsupported, XDP_FLAGS_SKB_MODE is used, directing the program to run in software mode using the kernel's socket buffer (SKB) path. Additionally, launch-time offload for transmit (TX) metadata allows the NIC to schedule packets based on specified timestamps without host intervention, merged in Linux kernel 6.14 (2025). The offloading process involves compiling the eBPF program into a format compatible with the target hardware, such as P4 for programmable switches or NIC-specific firmware instructions, before loading it onto the device. This compilation ensures the program adheres to the hardware's instruction set limitations. The program is then loaded with the help of the devlink interface, a kernel subsystem for managing device resources, which handles the transfer to the NIC firmware. The kernel verifier performs compatibility checks during loading to confirm that the program and hardware align, preventing mismatches that could lead to failures. Driver-specific hooks facilitate the attachment, ensuring seamless integration with the NIC's data path. Offloading provides significant benefits, including zero involvement from the host CPU after initial setup, enabling line-rate packet processing on SmartNICs even under high traffic loads. It supports core XDP actions such as XDP_DROP and XDP_TX entirely on the hardware, allowing packets to be filtered or forwarded without reaching the host stack, which is particularly useful for DDoS mitigation and other performance-critical applications. However, hardware offload is constrained to a subset of eBPF features, excluding complex operations like advanced map manipulations or certain helper functions in order to match hardware capabilities. It also requires periodic firmware updates to incorporate new offload support, limiting adoption to compatible devices. Integration with Time-Sensitive Networking (TSN) has advanced, enabling XDP offload to support deterministic traffic scheduling in industrial and automotive environments.
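
On the host side, the choice between offload, native, and generic execution is expressed through the attach flags; the sketch below, using libbpf's bpf_xdp_attach() (available in libbpf 0.8 and later), tries hardware offload first and then falls back through the software modes. For true hardware offload the program must also have been loaded with the target interface specified (prog_ifindex) so the driver can translate it for the NIC; that step is not shown here.

```c
#include <bpf/libbpf.h>
#include <linux/if_link.h>

/* Attach an already-loaded XDP program, preferring the fastest available mode. */
static int attach_with_fallback(int ifindex, int prog_fd)
{
    /* Full hardware offload: the NIC itself executes the program. */
    if (bpf_xdp_attach(ifindex, prog_fd, XDP_FLAGS_HW_MODE, NULL) == 0)
        return 0;

    /* Native driver mode: the hook runs inside the driver's RX path. */
    if (bpf_xdp_attach(ifindex, prog_fd, XDP_FLAGS_DRV_MODE, NULL) == 0)
        return 0;

    /* Generic (SKB) mode: software fallback that works with any driver. */
    return bpf_xdp_attach(ifindex, prog_fd, XDP_FLAGS_SKB_MODE, NULL);
}
```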

Supported Devices

Express Data Path (XDP) hardware support is available on select network interface controllers (NICs) and platforms, enabling native or offloaded execution of XDP programs for high-performance packet processing. Intel Ethernet controllers provide full native XDP support through the ice driver for E810 series devices, supporting XDP and AF_XDP operations on kernels 4.14 and later. The 700-series controllers, such as those based on the X710, achieve similar support via the i40e driver for native XDP on kernels 4.14 and later, with iavf handling virtual functions in SR-IOV configurations on kernels 5.10 and above. Netronome Agilio SmartNICs have offered early and stable XDP offload support since 2016, allowing eBPF/XDP programs to execute directly on the hardware for packet filtering and processing tasks. NVIDIA (formerly Mellanox) ConnectX-6 and later adapters support driver-level XDP execution, enabling high-throughput packet handling in native mode on Linux. Hardware offload for XDP is not supported on BlueField-2 DPUs as of the latest available information (2023), with development ongoing. Other vendors include Broadcom, whose Stingray family of SmartNICs supports XDP offload by running full Linux distributions on the device, facilitating eBPF program deployment for network functions. Marvell OCTEON DPUs, such as those in the OCTEON TX2 and OCTEON 10 series, provide XDP and eBPF acceleration in configurations like Asterfusion's Helium SmartNICs, targeting security and load-balancing workloads. Software-based XDP support extends to virtualized environments via the virtio-net driver, available since Linux kernel 4.10 for both host and guest packet processing. On Windows, basic XDP functionality is available through the Windows Driver Kit (WDK) via the open-source XDP-for-Windows project, which implements a high-performance packet I/O interface similar to XDP. Hardware offload is supported on select NICs, such as Mellanox adapters in virtualized setups, though support there remains primarily oriented toward virtualized guests, with Windows extensions still experimental. To query XDP support and status on Linux, administrators can use ethtool -l <interface> to view channel configurations relevant to XDP multi-queue operation and ethtool -S <interface> for statistics including XDP drop counts. For offload flags and parameters, devlink dev param show <device> displays offload capabilities, such as XDP mode settings on supported NICs.

Applications

Use Cases

XDP has been widely deployed for DDoS mitigation, where it enables early dropping of malformed or suspicious traffic at the network interface level, often using the XDP_DROP action to discard packets before they consume kernel resources. Integration with intrusion detection systems like Suricata allows XDP to apply custom filters for real-time threat detection and blocking, as demonstrated in deployments throughout 2024 that handle high-volume attacks efficiently. For instance, Cloudflare's L4Drop tool leverages XDP to filter Layer 4 DDoS traffic, achieving rapid mitigation by processing packets directly in the driver. In load balancing and telemetry applications, XDP supports packet redirection to specific queues or devices using the XDP_REDIRECT action, facilitating efficient traffic distribution in containerized environments. Cilium, an eBPF-based networking solution for Kubernetes, employs XDP to accelerate service load balancing and enable flow sampling for monitoring, providing cluster-wide visibility into network traffic without kube-proxy overhead. This approach is particularly effective in dynamic cloud-native setups, where XDP programs dynamically update rules based on telemetry data to optimize routing and detect anomalies. For high-speed packet capture and forwarding, XDP combined with AF_XDP sockets enables user-space applications to bypass the kernel stack, serving as a foundation for tools that outperform traditional utilities like tcpdump. Cloudflare's xdpcap, for example, uses XDP to capture packets at line rate directly from the driver, supporting forwarding scenarios in monitoring and analysis pipelines. Red Hat's xdpdump further illustrates this by integrating XDP for efficient traffic examination in enterprise environments. XDP accelerates QUIC and HTTP processing by enabling receive-side scaling, distributing incoming connections across CPU cores for better throughput in modern web protocols. Microsoft's MsQuic implementation incorporates XDP to bypass the traditional network stack for packet handling, improving throughput and latency in high-performance networking stacks. Research on QUIC acceleration confirms XDP's role in offloading receive processing, making it suitable for HTTP/3 services and content delivery networks. Recent 2025 advancements highlight XDP's expanding versatility, such as a technique exploiting virtual Ethernet (veth) interfaces to apply XDP programs to egress traffic for shaping and filtering, which was previously limited to ingress paths. In DNS servers, the Name Server Daemon (NSD) integrates AF_XDP sockets to handle elevated query rates, enhancing protection against amplification attacks by enabling rapid filtering and processing of traffic on port 53. As of Linux kernel 6.11 (September 2024), XDP includes improved multi-buffer support for AF_XDP, boosting performance in cloud-native environments. Enterprise adoption of XDP is evident among cloud providers, where it supports VPC traffic filtering through eBPF integrations for security enforcement and flow optimization. AWS employs eBPF, including XDP capabilities via tools like Cilium, to enforce network security groups and tune VPC flows for enhanced observability and threat detection. Similarly, Google Cloud integrates XDP-capable dataplanes in Google Kubernetes Engine via Cilium, enabling efficient packet filtering and load balancing within shared VPC architectures.

Performance Metrics

XDP programs demonstrate high throughput in packet drop operations, achieving up to 14.9 million packets per second (Mpps) per core on Intel i7 processors, as measured in 2019 tests on Linux 4.18 systems with simple filters. With hardware offload to SmartNICs, performance scales up to 18 Mpps, enabling efficient processing on 25 Gbps interfaces without host involvement. Comparisons highlight XDP's efficiency for filtering tasks, delivering 5 to 10 times higher throughput than iptables, with XDP sustaining up to 7.2 Mpps under heavy drop loads while iptables tops out at around 1.5 Mpps with minimal rules. For user-space access via AF_XDP sockets, zero-copy mode reaches approximately 90% of line rate on high-speed links, compared to 50% with traditional copy-based sockets, by avoiding kernel-to-user transfers. Key metrics include decision latencies under 1 μs for basic operations in the driver hook, though average forwarding latency measures around 7 μs at 1 Mpps loads. CPU utilization remains below 5% when handling 40 Gbps traffic with multi-core scaling via Receive Side Scaling (RSS), allowing efficient resource use across cores. Recent 2025 evaluations of userspace XDP (uXDP) implementations report up to 40% performance improvements over kernel-mode execution for certain network functions, such as load balancing. Performance testing commonly employs high-rate traffic generators alongside pktgen for kernel-based packet generation, while xdp-bench provides detailed statistics on XDP program execution across modes. Throughput scales roughly linearly with the number of CPU cores and queues, and hardware offload modes completely bypass host CPU cycles for processed packets. In mixed deployments, 2024 studies on AF_XDP confirm end-to-end delays below 10 μs when using busy polling and optimized socket parameters, supporting latency-sensitive applications.
