Berkeley Packet Filter

The Berkeley Packet Filter (BPF) is a kernel-resident mechanism for high-performance packet filtering and capture, designed to enable user-space applications to selectively process network traffic at the data-link layer without copying irrelevant packets into user space, thereby reducing overhead in systems like BSD Unix. Developed by Steven McCanne and Van Jacobson at Lawrence Berkeley Laboratory, BPF employs a compact, register-based interpreter executed in the kernel, which evaluates user-defined filter expressions compiled from a high-level filter language resembling C. This architecture delivers filtering speeds up to 20 times faster than prior kernel-user packet transfer models by discarding non-matching packets early in the kernel path. BPF interfaces with the system via a raw socket-like device node, allowing attachment of filter programs to network interfaces for protocol-independent access to all inbound and outbound frames, including those not destined for the host. Its just-in-time (JIT) compilable bytecode and bounded execution model ensure deterministic performance and security, preventing arbitrary code execution while supporting complex predicates on packet headers and contents.

Originally integrated into 4.3BSD Tahoe and later BSD variants, BPF underpins foundational networking tools such as tcpdump and libpcap, facilitating efficient packet capture for network debugging, monitoring, and research. The technology's influence extends beyond filtering: ported to Linux as socket filters in the mid-1990s, it evolved into extended BPF (eBPF) starting around 2014, expanding the virtual machine's capabilities for safe, kernel-verified programs in areas like performance tracing, load balancing, and custom security policies without module loading. This progression underscores BPF's defining characteristic: a lightweight, extensible in-kernel computation framework that prioritizes efficiency and generality over traditional system call overheads.

History

Origins and Initial Development

The Berkeley Packet Filter (BPF), initially known as the BSD Packet Filter, was developed by Steven McCanne and Van Jacobson at the Lawrence Berkeley Laboratory, with the foundational paper completed on December 19, 1992. The work was supported by the U.S. Department of Energy under contract DE-AC03-76SF00098 and presented at the USENIX Winter Technical Conference, held January 25–29, 1993, in San Diego, California. Development arose from the performance bottlenecks in existing packet capture mechanisms, which required kernel-to-user-space copies of entire packets for filtering—an approach inefficient for high-volume traffic monitoring on emerging gigabit networks and RISC-based processors. As stated in the foundational paper, "To allow such tools to be constructed, a kernel must contain some facility that gives user-level programs access to raw, unprocessed network traffic," highlighting the causal need for kernel-resident filtering to reduce overhead while preserving user-level control. BPF addressed limitations in prior systems like the 1980 CMU/Stanford Packet Filter (CSPF), which used a stack-based evaluation model suboptimal for RISC CPUs, and Sun's NIT (Network Interface Tap) interface, which incurred 10–150 times greater costs. The innovation centered on a register-based filter evaluator and a non-shared buffer model exploiting larger virtual address spaces, yielding 1.5–20 times better performance than CSPF on equivalent hardware. Initial implementation occurred directly in the BSD kernel, providing a protocol-independent raw interface to data-link layers and enabling applications to attach filter programs for in-kernel packet inspection and selective delivery. This facilitated tools like the tcpdump protocol analyzer, with BPF code distributed in tcpdump version 2.2.1 via FTP from the laboratory's servers.

Early Adoption in Systems

The Berkeley Packet Filter (BPF) was first implemented in BSD kernels as a high-performance alternative to prior packet filtering mechanisms, such as the stack-based CMU/Stanford packet filter available in 4.3BSD. Developed by Steven McCanne and Van Jacobson at Lawrence Berkeley Laboratory, BPF's architecture—featuring a register-based evaluator and non-shared buffering—was detailed in a December 1992 preprint paper presented at the 1993 USENIX Winter Conference. This enabled efficient user-level packet capture by executing filters in the kernel before packets are copied to user space, yielding 10–150 times the performance of Sun's NIT interface and 1.5–20 times that of the CMU/Stanford filter on RISC processors. Initial integration occurred in systems including 4.4BSD, 4.3BSD Tahoe/Reno, SunOS 4.x, SunOS 3.5, and HP-300/HP-700 BSD, where it supported applications like tcpdump for real-time network analysis without kernel modifications. In these early BSD environments, BPF operated via a pseudo-device model (/dev/bpf devices), attaching filters to network interfaces to selectively deliver packets matching user-defined criteria, such as protocol types or port numbers, expressed in a low-level, domain-specific language interpreted by a virtual machine. Adoption facilitated tools for intrusion detection precursors and network monitoring, with BPF's non-shared buffering leveraging expanded address spaces to handle high packet rates—up to millions per second on capable hardware—while minimizing context switches. By replacing less efficient interfaces like NIT in SunOS 4.x utilities (e.g., etherfind), BPF became the standard mechanism for low-overhead packet tapping in BSD-derived networking stacks.

BSD derivatives rapidly incorporated BPF from their inception, inheriting it as a core component for protocol-independent data-link access. FreeBSD, evolving from post-4.3BSD efforts since 1993, included BPF in its initial releases for raw packet interfaces supporting tools like libpcap. NetBSD, forked from 4.3BSD-Reno in 1993, embedded BPF to enable portable network diagnostics across architectures. OpenBSD, branching from NetBSD in 1995, retained BPF for secure, efficient filtering in line with its emphasis on proactive auditing. These systems extended early BPF usage to firewall precursors and network monitoring, with the filter's safety guarantees—via bounded execution and the absence of backward jumps—ensuring kernel stability during high-volume captures. Ports to other systems, including Solaris (from SunOS), followed in the mid-1990s, broadening BPF's role in enterprise network tools before socket filtering adaptations emerged later.

Technical Fundamentals

Packet Capture Mechanism

The Berkeley Packet Filter (BPF) facilitates user-level packet capture by embedding a kernel-resident virtual machine that executes user-supplied filter programs on incoming packets, selectively delivering only matching packets to applications and discarding the rest without user-space involvement. A user-space program compiles a filter expression into BPF bytecode—a sequence of virtual machine instructions—and attaches it to a socket, such as a raw or packet socket, via the setsockopt() system call with the SO_ATTACH_FILTER option (on Linux), passing a program structure containing the instruction count and a pointer to the instruction array. This attachment associates the filter with a specific network interface, enabling the kernel to invoke it on packets arriving at that interface.

Upon packet reception, the network tap captures a copy of the packet from the device driver and passes it to any attached BPF filters before normal protocol processing or delivery to other sockets. The filter operates in kernel context directly on the packet to avoid data copying overhead, taking a pointer to the packet's start and the total packet length as inputs.

The BPF VM employs an accumulator-based model with a 32-bit accumulator (A) for primary computations, a 32-bit index register (X) for offsets, and a fixed 16-entry array of 32-bit scratch memory words (M[0-15]) for temporary storage. Instructions manipulate these elements: load operations fetch byte, half-word, or word values from the packet at absolute or indexed offsets (e.g., ld [x + k] loads from packet offset X + k into A); ALU operations perform addition, subtraction, multiplication, division, negation, AND, OR, and shifts on A and X; branch instructions enable conditional jumps based on comparison results (with 8-bit true/false offsets for control flow); store instructions move values between A, X, and M[]; and return instructions terminate execution, specifying the number of packet bytes to accept (up to the packet length), with zero signifying rejection.

Filtering decisions derive from the program's control flow graph, which evaluates packet header fields—such as Ethernet types, IP addresses, ports, or protocol identifiers—through sequential or branched execution, accommodating variable-length headers via forward jumps to skip padding or options. If the program returns a non-zero value (e.g., -1 or 0xffff in some implementations), the kernel accepts the packet, copies the specified prefix (along with metadata like capture timestamp, wire length, and captured length) to the socket's receive buffer, and signals the application via standard socket notifications. A zero return discards the packet entirely within the kernel, preventing bandwidth waste from irrelevant traffic.

This in-kernel evaluation eliminates per-packet context switches and minimizes memory traffic, yielding performance gains of 1.5 to 20 times over stack-based alternatives like the CMU/Stanford Packet Filter (CSPF) and 10 to 150 times over earlier mechanisms like Sun's Network Interface Tap (NIT), as measured on a SPARCstation 2 in 1992 with average overheads of 6 microseconds per filtered packet versus 89 microseconds for NIT. Each BPF instruction uses a fixed 8-byte encoding: a 16-bit opcode, 8-bit jump-true (jt) and jump-false (jf) displacements, and a 32-bit immediate constant (k), enabling compact programs typically under 100 instructions even for complex filters.
Originally interpreted for portability across architectures, the mechanism supports just-in-time (JIT) compilation in modern kernels (e.g., via CONFIG_BPF_JIT on x86_64 Linux) to translate bytecode to native code, further reducing execution latency while maintaining the original VM semantics for safety. Filters run atomically per packet, isolated from other kernel paths, ensuring deterministic behavior without shared state. This design, introduced in the 4.3BSD Tahoe release and detailed in a 1993 USENIX paper, prioritizes efficiency by offloading filtering logic to kernel space while exposing a simple, verifiable instruction set resistant to malformed inputs.
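
As a concrete illustration of the attachment path described above, the following minimal sketch (Linux-specific; error handling mostly elided, and running it requires CAP_NET_RAW) builds a four-instruction classic BPF filter that accepts only IPv4 frames and attaches it to a packet socket with SO_ATTACH_FILTER:

```c
#include <arpa/inet.h>
#include <linux/filter.h>
#include <linux/if_ether.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    /* ldh [12]: load the EtherType; accept if IPv4, otherwise drop */
    struct sock_filter insns[] = {
        BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 12),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, ETH_P_IP, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, 0xffff),  /* accept: copy up to 0xffff bytes */
        BPF_STMT(BPF_RET | BPF_K, 0),       /* reject: copy nothing */
    };
    struct sock_fprog prog = {
        .len = sizeof(insns) / sizeof(insns[0]),
        .filter = insns,
    };

    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0 ||
        setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, &prog, sizeof(prog)) < 0) {
        perror("bpf attach");
        return 1;
    }
    /* recvfrom(fd, ...) now delivers only IPv4 frames */
    return 0;
}
```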

Filtering and Instruction Set

The Berkeley Packet Filter (BPF) employs a virtual machine to execute user-supplied filter programs on network packets, enabling selective capture based on packet content without copying entire packets to user space. Each filter program consists of a linear sequence of instructions compiled from high-level expressions, forming a directed acyclic graph (DAG) of control flow to preclude loops and ensure termination. Upon packet arrival, the kernel loads packet data into a contiguous buffer and runs the filter program, accessing data via offsets from the buffer start; out-of-bounds references or division by zero cause immediate termination with rejection. The program returns a value indicating acceptance (non-zero, capped at packet length to specify bytes to copy) or rejection (zero).

The BPF virtual machine is accumulator-based, featuring a 32-bit accumulator A for primary computations, an index register X for offset calculations (e.g., variable-length header parsing), and a scratch array M of sixteen 32-bit words for temporary storage. Packet data loads occur in network byte order, with automatic host-order conversion for words and halfwords; byte loads are zero-extended. Instructions are encoded as 64-bit words: a 16-bit opcode (combining class and subclass), 8-bit true/false jump offsets (jt/jf), and a 32-bit constant/operand k. Addressing modes include immediate (BPF_IMM), absolute packet offset (BPF_ABS), indexed offset (BPF_IND: k + X), scratch memory (BPF_MEM), and packet length (BPF_LEN). The 8-bit jump fields limit forward branches to spans of at most 256 instructions, with safety enforced by rejecting invalid accesses.

BPF's instruction set comprises eight classes—loads (BPF_LD, BPF_LDX), stores (BPF_ST, BPF_STX), arithmetic/logic (BPF_ALU), jumps (BPF_JMP), return (BPF_RET), and miscellaneous (BPF_MISC)—supporting the essential operations for protocol dissection and matching. Loads (ld/ldx) fetch from the packet, constants, or memory into A or X; stores (st/stx) write A or X to M. ALU operations (add, subtract, multiply, divide, AND, OR, left/right shift, negate) apply to A with k or X as operand (division by zero causes rejection). Jumps (jeq, jgt, jge, jset) branch conditionally on A versus k or X, using jt/jf for the taken and not-taken paths; unconditional ja uses k directly as the offset. The ret instruction halts execution, returning k (or A if specified) as the number of bytes to accept. Miscellaneous transfers (tax/txa) move values between A and X. No packet stores or unbounded loops exist, prioritizing safety and performance.

The instruction classes, representative opcodes, and their purposes are summarized below:

- Load (BPF_LD, BPF_LDX): BPF_LD|BPF_W|BPF_ABS (load word at offset k); BPF_LDX|BPF_W|BPF_IMM (load immediate into X); variants for byte (BPF_B), halfword (BPF_H), and indexed (BPF_IND) access. Purpose: load packet data, immediates, or memory into A or X, enabling header field extraction.
- Store (BPF_ST, BPF_STX): BPF_ST (A to M); BPF_STX (X to M). Purpose: temporary storage for intermediate values, e.g., offsets or masks.
- ALU (BPF_ALU): BPF_ADD|BPF_K (A += k); similarly SUB, MUL, DIV, AND, OR, LSH, RSH, NEG (A = -A); register variants with BPF_X. Purpose: arithmetic and bitwise operations on A, supporting comparisons and adjustments.
- Jump (BPF_JMP): BPF_JEQ|BPF_K (if A == k, jump jt else jf); JGT (A > k), JGE (A >= k), JSET (A & k != 0); BPF_JA (unconditional jump by k). Purpose: conditional branching for protocol-specific logic, e.g., IP type checks.
- Return (BPF_RET): BPF_RET|BPF_K (return k); BPF_RET|BPF_A (return A). Purpose: terminate the filter, specifying accepted bytes or rejection.
- Miscellaneous (BPF_MISC): BPF_TAX (A to X); BPF_TXA (X to A). Purpose: register transfers for indexing.
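
To make the encoding concrete, here is how these classes combine into a complete filter. The raw struct sock_filter entries below correspond to the classic filter expression "ip and tcp" on an Ethernet link (roughly the output of tcpdump -dd for that expression), with each line annotated by its instruction class:

```c
#include <linux/filter.h>

/* Filter for "ip and tcp" on Ethernet, one 8-byte {code, jt, jf, k} per line */
struct sock_filter ip_and_tcp[] = {
    { 0x28, 0, 0, 0x0000000c },  /* load: ldh [12]   - EtherType into A        */
    { 0x15, 0, 3, 0x00000800 },  /* jump: jeq #0x800 - IPv4? else goto ret #0  */
    { 0x30, 0, 0, 0x00000017 },  /* load: ldb [23]   - IP protocol into A      */
    { 0x15, 0, 1, 0x00000006 },  /* jump: jeq #6     - TCP? else goto ret #0   */
    { 0x06, 0, 0, 0x00040000 },  /* ret:  accept up to 262144 bytes            */
    { 0x06, 0, 0, 0x00000000 },  /* ret:  reject                               */
};
```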

Virtual Machine Architecture

The Berkeley Packet Filter (BPF) implements a lightweight virtual machine (VM) within the kernel to execute user-provided bytecode for packet filtering, enabling efficient and secure processing without risking kernel instability. Introduced in the 1993 USENIX paper by Steven McCanne and Van Jacobson, the VM design prioritizes simplicity and verifiability, using a register-based model with limited resources to bound execution time and memory access. Programs are loaded as sequences of fixed-size instructions, which the kernel validates before execution, then runs either via interpretation or just-in-time (JIT) compilation to native code for performance.

The VM maintains minimal state comprising a 32-bit accumulator A for primary computations, a 32-bit index register X for auxiliary operations, and a fixed 16-entry array of 32-bit scratch locations M[0..15] for temporary storage. Packet data resides in an implicit read-only buffer, accessible via offset-based load instructions that incorporate runtime bounds checking against the packet's actual length to prevent overruns. Constants are embedded directly in instructions as 32-bit immediates K, eliminating the need for explicit load sequences. The program counter advances sequentially through instructions unless altered by conditional jumps, with no support for unbounded loops or indirect addressing, enforcing termination.

BPF instructions follow an 8-byte encoding: a 16-bit opcode, an 8-bit jump-true offset, an 8-bit jump-false offset, and a 32-bit immediate or offset value. Opcodes fall into categories including loads (e.g., from packet data, constants, or scratch memory), arithmetic/logic (ALU) operations on A with X or K (addition, subtraction, multiplication, division, AND, OR, shifts), stores to M or X, conditional jumps (forward only, based on comparisons of A against K or X), and a return instruction whose result determines the outcome (non-zero accepts the packet; zero drops it). This restricted set, with roughly ten core opcodes extended via addressing modes, supports complex filters like TCP/IP header dissection while remaining verifiable.

Safety is integral to the design, achieved through kernel-side validation prior to program attachment. The validator simulates execution paths, confirming no invalid memory accesses (e.g., offsets exceeding packet length), no backward jumps that could loop indefinitely, and termination within a bounded step count tied to program length and packet size. Dynamic checks during execution further validate packet-relative loads, while the absence of pointers, mutable globals, or system calls isolates the VM from broader kernel state. These mechanisms allow unprivileged users to attach filters without kernel-access risks, a feature demonstrated in early BSD implementations to achieve up to 40% efficiency gains over traditional copy-to-user-then-filter models.

Extensions and Evolution

Transition from Classic BPF to eBPF

The limitations of classic BPF (cBPF), including its accumulator-based design with only two registers (A and X), forward-only conditional jumps that precluded loops, and the absence of data structures like maps, restricted it primarily to basic packet filtering and capture tasks. These constraints became evident in the early 2010s amid demands for programmable kernel extensions in networking and tracing, where recompiling the kernel or loading modules risked stability and security. Engineers at PLUMgrid, led by Alexei Starovoitov, initiated development to enable safe, verifiable programs that could execute arbitrary logic in kernel space without such risks, leveraging just-in-time (JIT) compilation for near-native performance. The first eBPF patches, introducing an extended instruction set with fall-through jumps and support for kernel function calls, were merged into Linux kernel version 3.15 in April 2014, with user-space exposure via the bpf(2) system call following later that year. By kernel 3.18 in December 2014, eBPF included a verifier for bounded execution, replacing the cBPF interpreter and yielding up to four times the performance of cBPF on x86-64 for packet processing. This evolution was motivated by scalability needs, as cBPF's 32-bit operations and interpreted execution failed to scale with multi-core systems and 64-bit architectures.

Backward compatibility ensured a smooth transition: modern kernels translate cBPF bytecode to eBPF instructions at load time, preserving opcode semantics while extending them (e.g., adding BPF_ALU64 for 64-bit arithmetic). eBPF's register-based architecture (ten 64-bit registers R0–R9 plus frame pointer R10) and calling convention (up to five arguments via R1–R5) facilitated this, allowing gradual adoption for new hooks like XDP in 2016 without disrupting socket-level cBPF filters. Over time, eBPF's verifier and helper functions supplanted cBPF for observability and security, though cBPF persists in legacy contexts due to its simplicity.

Key Technical Enhancements

Extended BPF (eBPF) significantly expands the capabilities of classic BPF (cBPF) through an enriched instruction set, supporting up to 4096 instructions per program (in early kernels, since raised) with fall-through jumps, direct calls to helper functions via bpf_call, and explicit program termination with bpf_exit, in contrast to cBPF's forward-only conditional jumps and limited opcode classes. This allows for more expressive and efficient bytecode, including 64-bit ALU operations (BPF_ALU64) and 32-bit jump variants (BPF_JMP32), enabling complex computations previously infeasible in packet filtering contexts. eBPF employs ten general-purpose 64-bit registers (R0–R9)—R0 for return values, R1–R5 for arguments and scratch use, and R6–R9 callee-saved—compared to cBPF's two 32-bit registers (accumulator A and index X), facilitating direct manipulation of larger data structures and reducing stack spills during packet processing. Programs can access a bounded stack via the read-only frame pointer (R10), supporting spill/fill operations for register pressure relief, which enhances handling of variable-length packet headers or metadata without excessive memory accesses.

A core enhancement is the in-kernel verifier, which performs static analysis using control-flow graph checks and path simulation to ensure programs terminate safely, avoid out-of-bounds access, and prevent infinite loops—features absent in cBPF—thus enabling verifiable execution in privileged kernel space for high-throughput filtering. Complementing this, eBPF introduces maps as versatile key-value data structures (e.g., hash maps, arrays) for stateful operations, allowing programs to store and retrieve packet flow statistics or counters across invocations, extending beyond cBPF's stateless design. Helper functions, invoked through bpf_call with up to five arguments in registers R1–R5, provide bounded access to kernel APIs such as timestamp retrieval or map lookups, isolating programs from direct kernel memory manipulation and enhancing safety while supporting advanced packet actions like encapsulation or load balancing. For performance, eBPF's just-in-time (JIT) compiler maps eBPF registers one-to-one onto hardware registers (e.g., x86_64's rax for R0), minimizing overhead and achieving near-native speeds for packet ingress/egress processing, a marked improvement over cBPF's simpler interpretation. These features collectively transform BPF from a basic filter into a programmable kernel extension framework, initially developed starting in Linux kernel 3.15 (2014) and maturing in subsequent 4.x releases.
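
The map and helper mechanisms described above can be seen in a short example. The following is a minimal sketch in libbpf-style restricted C (program and map names are illustrative) of an XDP program that keeps a stateful packet counter in a per-CPU array map via the bpf_map_lookup_elem helper:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* One 64-bit counter per CPU; state persists across invocations */
struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} pkt_count SEC(".maps");

SEC("xdp")
int count_packets(struct xdp_md *ctx)
{
    __u32 key = 0;
    __u64 *value = bpf_map_lookup_elem(&pkt_count, &key);

    if (value)
        (*value)++;   /* per-CPU slot, so no atomic operation needed */
    return XDP_PASS;  /* let every packet continue up the stack */
}

char LICENSE[] SEC("license") = "GPL";
```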

Motivations and Design Principles

The development of eBPF was motivated by the need to extend the capabilities of classic BPF beyond its original focus on efficient packet filtering, enabling safe and programmable extensions to the kernel for diverse applications such as tracing, observability, networking, and security. Traditional methods like kernel patches or loadable modules were deemed inadequate due to their high risk of instability, security vulnerabilities, and maintenance burdens across kernel versions, while static tracing tools such as ftrace and perf_events lacked the flexibility for custom, low-overhead instrumentation like dynamic latency histograms. eBPF addressed these limitations by allowing users to load and execute custom bytecode directly in kernel space without modifying kernel source, thereby accelerating innovation in kernel functionality while preserving system stability.

Central to eBPF's design is a commitment to safety through a rigorous verifier that statically analyzes loaded programs to prevent issues like infinite loops, out-of-bounds memory access, or privilege escalations, ensuring sandboxed execution even in privileged contexts. Performance is prioritized via just-in-time (JIT) compilation of eBPF bytecode to native machine code, supporting architectures like x86_64 and ARM64, which enables near-native speeds for in-kernel operations without excessive overhead. Flexibility is achieved through extensible features including maps—data structures such as hash tables and arrays for state storage and user-kernel communication—and helper functions for tasks like packet manipulation or timestamping, attached to various hooks (e.g., tracepoints, kprobes, network events). These principles collectively emphasize causal efficiency and empirical reliability, drawing from first-hand kernel development experiences where unchecked programmability had previously led to crashes or exploits, while enabling verifiable, high-performance extensions that classic BPF's limited instruction set and filtering-only scope could not support.

Implementations

BSD Derivatives and Original Systems

The Berkeley Packet Filter (BPF) was initially implemented in the 4.3BSD Tahoe and Reno releases of the BSD Unix operating system, enabling efficient user-level packet capture through a kernel-resident interpreter that evaluates filter programs on incoming packets before copying them to user space. This design addressed performance limitations of prior stack-based filters by introducing a register-based evaluator optimized for RISC architectures, as detailed in the architecture's foundational description. The implementation provided a protocol-independent interface to data-link layers, allowing applications like tcpdump to attach filters directly to network interfaces via pseudo-devices. BPF was retained and standardized in the subsequent 4.4BSD release, forming the basis for packet filtering in Berkeley-derived kernels.

In BSD derivatives, BPF remains a core kernel component, inherited from the original BSD codebase and adapted for modern variants. FreeBSD includes BPF as a standard feature since its inception from 386BSD and 4.4BSD-Lite, with the bpf pseudo-device providing raw access to network packets for filtering and capture, independent of protocol specifics. NetBSD incorporates BPF similarly, offering a raw interface for data-link layer access and supporting filter programs that process all network packets, including those not destined for the host. OpenBSD and DragonFly BSD also maintain BPF implementations, enabling attachment to interfaces for protocol-independent packet handling and integration with tools like libpcap for applications such as tcpdump. These systems preserve the original BPF's just-in-time (JIT) compilation capabilities where applicable, ensuring low-overhead execution of filter bytecode in kernel space.

Across these derivatives, BPF's role emphasizes selective packet delivery to user space, minimizing kernel-to-user copies by discarding non-matching packets early, a principle originating from the 4.3BSD-era design to support high-speed network analysis without overwhelming system resources. While core functionality remains consistent, derivative-specific kernel evolutions—such as FreeBSD's support for multiple BPF instances per interface—enhance scalability for concurrent monitoring tasks.

Linux Kernel Integration

The Linux kernel integrated the classic Berkeley Packet Filter (cBPF) into its networking subsystem to support socket-level filtering, enabling user-space applications to attach programs that inspect and selectively drop incoming packets directly in kernel space, thereby avoiding costly copies to userspace. This mechanism relies on a simple register-based virtual machine with bounded execution to ensure safety and efficiency. To address cBPF's constraints, such as its limited instruction set, absence of persistent state, and restriction to packet filtering, the extended BPF (eBPF) framework was developed by Alexei Starovoitov and merged into the Linux kernel, with initial support appearing in version 3.15 in mid-2014 and a stable implementation, including the bpf(2) system call, in version 3.18 released on December 7, 2014. eBPF expands the instruction set with 64-bit registers, bounded loops (added in kernel 5.3), direct access to kernel data structures via helper functions, and hash/array/ring-buffer maps for stateful operations, all verified at load time by a kernel verifier that rejects unsafe programs to prevent crashes or exploits.

eBPF programs are loaded and managed through the bpf(2) system call family, which handles creation of programs, maps, and links for attachment to hooks; just-in-time (JIT) compilers translate the bytecode to native machine code for each architecture, optimizing performance across supported platforms like x86, ARM, and RISC-V. For backward compatibility, the kernel automatically translates loaded cBPF bytecode to equivalent eBPF instructions, rendering cBPF obsolete for new development while maintaining legacy support. Integration extends beyond networking to multiple kernel subsystems: in the packet processing pipeline via eXpress Data Path (XDP) for early ingress drops at the driver level (introduced in kernel 4.8) and classifier/actions in traffic control (tc); in tracing via kprobes, tracepoints, and uprobes for dynamic instrumentation without recompilation; and in security via seccomp-bpf for fine-grained system call filtering and Landlock LSM hooks for sandboxing. This broad attachability, combined with ring buffers for efficient kernel-to-user data transfer (the dedicated BPF ring buffer arrived in kernel 5.8), positions eBPF as a runtime-extensible kernel primitive, with ongoing evolution through regular kernel releases adding features like task-local storage and advanced verifier capabilities.

Cross-Platform and Userspace Variants

Microsoft's eBPF for Windows implements an extended Berkeley Packet Filter natively on the Windows kernel, enabling sandboxed program execution for kernel extensibility in areas such as denial-of-service mitigation and system observability. The project integrates existing open-source eBPF components as submodules, supporting user-mode APIs including libbpf compatibility, hook mechanisms via ebpf_nethooks.h, and helper functions across program types, with execution modes encompassing interpretation, JIT compilation, and native driver code generation. As a work-in-progress initiative, it facilitates cross-platform reuse of toolchains originally developed for Linux, though full feature parity with Linux implementations remains under development.

Userspace variants execute BPF or eBPF programs independently of kernel integration, supporting scenarios like development and testing, unprivileged environments, and cross-architecture experimentation without requiring administrative privileges. bpftime, developed by the Eunomia-bpf project and released in 2023, serves as a high-performance userspace runtime compatible with standard toolchains such as clang/LLVM, libbpf, and bpftrace. It incorporates a verifier, loader, and multiple JIT backends (including LLVM-based and ubpf backends), alongside dynamic binary rewriting for uprobes, syscall tracepoints, and GPU tracing, while enabling inter-process communication through shared-memory maps; benchmarks indicate up to 10x lower overhead for uprobes relative to kernel-based alternatives. Active development continues, with features demonstrated at the 2023 Linux Plumbers Conference and detailed in a 2025 OSDI paper, positioning bpftime for applications in observability, network processing, and policy enforcement outside kernel contexts. Earlier experimental efforts, such as the libebpf library porting kernel BPF infrastructure to userspace for tracing and performance analysis, supported raw BPF instructions but omitted maps and packet filtering, remaining archived since 2020 with origins in 2015 code. These variants underscore BPF's adaptability beyond its original kernel-bound design, though userspace runtimes generally trade kernel-level efficiency for enhanced portability and ease of deployment.

Programming and Development

BPF Program Structure

A Berkeley Packet Filter (BPF) program is a sequence of instructions executed by the BPF virtual machine in the kernel to process input data, such as network packets. In classic BPF (cBPF), used originally for packet filtering, the program is represented as an array of struct sock_filter instructions within a struct sock_fprog structure, which specifies the program length and a pointer to the filter array; this is attached to a socket via the setsockopt() system call with option SO_ATTACH_FILTER. Each instruction is encoded in 8 bytes: a 16-bit opcode (code) defining the operation and addressing mode, an 8-bit jump offset for the true condition (jt), an 8-bit jump offset for the false condition (jf), and a 32-bit immediate value or offset (k). Classic BPF employs a minimal register set consisting of a 32-bit accumulator (A), an auxiliary 32-bit index register (X), and an array of 16 32-bit scratch memory locations (M[0-15]), with packet data accessed via implicit pointer operations. Execution begins at instruction 0, proceeding linearly or via conditional jumps based on jt and jf offsets until a return instruction (opcode class BPF_RET) computes and returns an accept/reject value, typically derived from ALU operations (add, subtract, multiply, divide, bitwise, shifts), loads/stores, or comparisons against packet bytes. Addressing modes include direct immediate (#k), indirect via X ([x + k]), scratch memory (M[k]), or packet-relative (k bytes from the packet start).

Extended BPF (eBPF) programs, which supersede classic BPF in modern Linux kernels since version 3.15 (2014), use a more expressive 64-bit encoding aligned to 8-byte boundaries, forming an array of struct bpf_insn. Each basic instruction spans 64 bits: an 8-bit opcode specifying class (e.g., ALU, load/store, jump) and mode, a 4-bit destination register (dst_reg), a 4-bit source register (src_reg), a signed 16-bit offset (off) for jumps or pointer arithmetic, and a signed 32-bit immediate (imm); wide instructions extend to 128 bits for larger immediates. eBPF supports 11 64-bit registers (R0–R9 general-purpose, R10 as a read-only frame pointer), with R0–R5 caller-saved and R6–R9 callee-saved; operations zero-extend 32-bit subregisters and enable 64-bit arithmetic, unlike classic BPF's 32-bit limitations. eBPF programs access a 512-byte stack for register spilling and local variables, bounded by verifier-enforced limits such as a maximum of 4096 instructions per program (in early kernels) to prevent excessive resource use. Jumps use fall-through semantics with signed offsets instead of dual jt/jf fields, supporting bounded loops and calls to kernel helper functions via bpf_call (up to 5 arguments). Instruction classes include 32/64-bit ALU (e.g., add, subtract, AND, OR, shifts), loads/stores (with size variants), and jumps (conditional, unconditional, and calls), enabling complex computations beyond filtering, such as tracing and policy enforcement.
```c
/* Classic BPF instruction encoding (struct sock_filter) */
struct sock_filter {
    __u16 code;  /* opcode and addressing mode */
    __u8  jt;    /* jump offset if true */
    __u8  jf;    /* jump offset if false */
    __u32 k;     /* generic operand (immediate, offset, or length) */
};

/* eBPF instruction encoding (struct bpf_insn, simplified) */
struct bpf_insn {
    __u8  code;      /* opcode */
    __u8  dst_reg:4; /* destination register */
    __u8  src_reg:4; /* source register */
    __s16 off;       /* signed offset */
    __s32 imm;       /* signed immediate */
};
```
This expanded structure in eBPF facilitates just-in-time (JIT) compilation to native machine code for performance, while a verifier ensures memory safety and termination before loading.

Compilation and Loading Process

Classic BPF programs originate from filter expressions, such as those used in packet capture tools, which are translated into bytecode by compiler routines in libraries like libpcap. These expressions, exemplified by "tcp port 80", are parsed and optimized into a sequence of cBPF instructions—an accumulator-based instruction set with operations like loads, jumps, and arithmetic—limited to 4096 instructions for safety. The resulting bytecode is attached directly to sockets using the setsockopt() system call with the SO_ATTACH_FILTER option, enabling kernel-level filtering without userspace intervention on supported systems like BSD derivatives and Linux.

In contrast, eBPF programs are developed in a restricted C dialect, incorporating kernel headers for context such as packet structures or trace events, and compiled to eBPF bytecode—a register-based instruction set with 11 64-bit registers and a bounded stack—via the Clang/LLVM toolchain targeting the BPF architecture (triple: bpf-unknown-none). Compilation produces an ELF object file embedding multiple sections: program sections for executable bytecode, map sections defining data structures like hash tables, and metadata for relocations. Tools like libbpf or bpftool parse this ELF, resolve symbols, and prepare it for kernel ingestion, often integrating with build systems via Makefiles or direct clang invocations such as clang -target bpf -O2 -c prog.c -o prog.o.

Loading eBPF programs into the kernel occurs via the bpf(2) system call, invoked with the BPF_PROG_LOAD command and the compiled bytecode as input; this creates program file descriptors for attachment to hooks like XDP for ingress packet processing or kprobes for function tracing. The verifier then performs exhaustive static analysis—simulating up to 1 million instructions per program—to enforce bounds checking, loop limits (bounded loops permitted since kernel 5.3), and memory safety, rejecting unsafe code to avert crashes or exploits. Upon verification, the bytecode may be JIT-compiled to host-native instructions for low-overhead execution, with fallback to interpretation on unsupported architectures; programs remain loaded until explicitly detached or until the holding process exits and releases its file descriptors.
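
The user-space half of this flow can be sketched with libbpf; the object name prog.o, program name count_packets, and interface eth0 below are illustrative assumptions, not fixed conventions:

```c
#include <bpf/libbpf.h>
#include <net/if.h>
#include <stdio.h>

int main(void)
{
    /* Open the ELF object produced by clang -target bpf. */
    struct bpf_object *obj = bpf_object__open_file("prog.o", NULL);
    if (!obj)
        return 1;

    /* BPF_PROG_LOAD happens here: the kernel verifies (and may JIT)
     * every program in the object. */
    if (bpf_object__load(obj))
        return 1;

    struct bpf_program *prog =
        bpf_object__find_program_by_name(obj, "count_packets");
    int ifindex = if_nametoindex("eth0");
    if (!prog || !ifindex)
        return 1;

    /* Attach the program to the XDP hook of the chosen interface. */
    if (bpf_xdp_attach(ifindex, bpf_program__fd(prog), 0, NULL))
        return 1;

    printf("attached; press Enter to detach\n");
    getchar();
    bpf_xdp_detach(ifindex, 0, NULL);
    bpf_object__close(obj);
    return 0;
}
```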

Verification and Safety Mechanisms

The eBPF verifier, integrated into the Linux kernel, performs static analysis on BPF bytecode during the loading process via the bpf(2) syscall to enforce safety invariants, preventing programs from causing kernel crashes, infinite loops, or invalid operations. This verification occurs before any execution, simulating all possible paths to confirm bounded resource usage and adherence to a restricted instruction set. If the analysis fails, the program is rejected outright, ensuring only provably safe code attaches to kernel hooks.

The process unfolds in two primary phases: an initial directed-acyclic-graph (DAG) validation of the control-flow graph, which detects and disallows cycles (loops), unreachable instructions, and invalid jumps to maintain a loop-free structure; followed by stateful path exploration that symbolically tracks register values (R0–R10), stack slots, and pointer states across every feasible execution path. Register tracking employs structures like struct bpf_reg_state to categorize values (e.g., PTR_TO_CTX for context pointers, PTR_TO_STACK for stack references) and enforces initialization checks, rejecting reads of uninitialized registers or stack locations. Memory safety is upheld through bounds checking on pointers—using min/max offsets and tristate-number (tnum) representations for variable precision—alignment verification, and restrictions on pointer arithmetic to avoid overflows or invalid dereferences. Termination is guaranteed by the absence of unbounded loops in early implementations, with the verifier pruning redundant states via equivalence checks to avoid exponential exploration complexity. Since Linux kernel 5.3 (released September 2019), bounded loops have been permitted, where the verifier analyzes iteration counters and conditions to confirm finite iterations, often unrolling simple cases or rejecting those exceeding configurable limits (e.g., 1 million simulated instructions by default). Additional safeguards include helper function validation—ensuring calls to kernel-provided functions like bpf_trace_printk respect argument types and privileges—and reference-release checks for resources like socket references to prevent leaks.

Upon successful verification, programs undergo just-in-time (JIT) compilation to native machine code for performance nearing kernel-native speeds, with runtime protections such as read-only executable memory, Spectre variant mitigations (e.g., array bounds masking, return prediction barriers), and constant blinding to thwart JIT-spray attacks. These mechanisms collectively provide strong guarantees: programs terminate, access only authorized data via typed helpers, and cannot destabilize the kernel through memory corruption or resource exhaustion. However, the verifier functions primarily as a gate rather than a semantic auditor, focusing on mechanical correctness without evaluating program semantics or intent, thus permitting benign but resource-intensive operations if they pass structural checks. Unprivileged eBPF modes, controlled via kernel configuration such as the kernel.unprivileged_bpf_disabled sysctl, further restrict capabilities for non-root users but rely on the same verifier for core safety guarantees.
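
The bounded-loop rule can be illustrated with a small sketch. The following socket filter (requires kernel 5.3+; names are illustrative) uses a constant trip count, so the verifier can exhaustively explore the loop and prove termination; the helper performs its own bounds checking against the packet:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("socket")
int sum_first_bytes(struct __sk_buff *skb)
{
    __u32 sum = 0;

    /* Constant bound: the verifier can prove this loop terminates. */
    for (int i = 0; i < 16; i++) {
        __u8 byte = 0;
        /* bpf_skb_load_bytes fails cleanly on out-of-bounds offsets. */
        if (bpf_skb_load_bytes(skb, i, &byte, 1) < 0)
            return 0;  /* packet too short: drop */
        sum += byte;
    }
    /* Socket filters return the number of bytes to keep (0 = drop). */
    return sum > 0 ? skb->len : 0;
}

char LICENSE[] SEC("license") = "GPL";
```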

Applications

Network Processing and Filtering

The Berkeley Packet Filter (BPF) facilitates network processing and filtering by allowing user-defined programs to execute in kernel space, inspecting packet headers and payloads to selectively accept, drop, or modify traffic without copying full packets to user space, thereby reducing overhead compared to traditional methods. This mechanism originated as a socket filter in classic BPF (cBPF), where filters attach to network sockets to evaluate incoming packets against criteria such as IP addresses, ports, protocols, and offsets, returning decisions to pass a prefix of the packet or discard it entirely. In Linux implementations, cBPF filters are compiled from a high-level expression syntax into bytecode executed by the kernel's interpreter or just-in-time (JIT) compiler, enabling tools like tcpdump and Wireshark to capture only relevant traffic efficiently.

Extended BPF (eBPF) expands these capabilities for advanced network processing, attaching programs to hooks throughout the kernel's networking stack for programmable ingress and egress filtering. For instance, eXpress Data Path (XDP) hooks execute eBPF code at the earliest driver level on receive queues, allowing packets to be dropped, forwarded to user space via AF_XDP sockets, or redirected before stack processing, achieving throughputs exceeding 10 million packets per second on modern hardware for DDoS mitigation and load balancing. Traffic Control (tc) classifiers using eBPF (cls_bpf) enable fine-grained filtering and shaping in the qdisc layer, supporting actions like rate limiting, header rewriting, and multipath routing based on dynamic conditions.

These features underpin applications in network security and optimization, such as kernel-level firewalls that inspect and block malicious flows without context switches, and intrusion detection systems that correlate packet patterns in real time. In production environments, eBPF-driven processing has demonstrated up to 90% latency reductions in packet handling for high-volume scenarios, as measured in benchmarks integrating with network interface cards supporting XDP offloads. However, filter efficacy depends on precise verification to prevent kernel panics, with the verifier enforcing bounds checking and loop limits during program loading.
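
A short sketch shows the XDP drop pattern described above: an eBPF program that parses Ethernet, IPv4, and UDP headers with explicit bounds checks (which the verifier requires) and drops UDP packets to port 53, passing everything else; the port choice is purely illustrative:

```c
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int drop_udp_53(struct xdp_md *ctx)
{
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;                     /* truncated frame */
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end || ip->protocol != IPPROTO_UDP)
        return XDP_PASS;

    struct udphdr *udp = (void *)ip + ip->ihl * 4;
    if ((void *)(udp + 1) > data_end)
        return XDP_PASS;

    return udp->dest == bpf_htons(53) ? XDP_DROP : XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";
```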

Observability and Tracing

The extended Berkeley Packet Filter (eBPF) enables observability and tracing by allowing programs to attach dynamically to tracepoints, kprobes (kernel probes), and uprobes (user probes), which instrument kernel functions, system calls, and user-space events with minimal overhead. These attachments permit the collection of runtime data such as execution latencies, function call graphs, and resource usage without requiring recompilation or rebooting. eBPF maps further support aggregating traced data, such as histograms of syscall durations or counters for I/O operations, facilitating analysis in production environments.

Prominent tools leveraging eBPF for tracing include the BPF Compiler Collection (BCC), which provides Python and C APIs for developing complex tracing scripts and daemons, and bpftrace, a high-level tracing language for one-liners and quick diagnostics. BCC, usable on Linux since kernel version 4.1 (released June 2015), supports tracing alongside front-ends like trace-cmd and perf, enabling probes on events for performance profiling. bpftrace, introduced in 2018 and drawing from DTrace and awk syntax, compiles scripts to eBPF for tasks like summarizing TCP retransmits or monitoring page faults, with support in kernel 4.18 (August 2018) and later.

In cloud-native contexts, eBPF tracing underpins tools for metrics export (e.g., via Prometheus integration), distributed tracing (e.g., correlating kernel-user spans), and runtime security monitoring, as seen in frameworks like Tracee for event filtering. These capabilities extend to containerized environments, where eBPF traces pod-level interactions without host modifications, outperforming static instrumentation in overhead (typically <5% CPU for high-frequency probes). Empirical benchmarks show eBPF-based tracers achieving sub-microsecond latencies for event capture, compared to traditional debugfs methods exceeding milliseconds.

Limitations in tracing include verifier-enforced bounds on program complexity, restricting loops and unbounded data structures to prevent kernel panics, though recent kernels (5.10+, December 2020) mitigate this via bounded iteration helpers. Adoption has grown since Linux 4.4 (January 2016), with eBPF tracing integrated into production systems for root-cause analysis, evidenced by its use in hyperscale data centers for latency histograms and error tracking.
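
The kprobe-plus-map pattern described above can be sketched in libbpf-style restricted C. This hypothetical example (the probed function do_sys_openat2 exists in kernels 5.6+; map and program names are illustrative) counts file-open calls per process:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* PID -> number of openat2 calls observed */
struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u32);
    __type(value, __u64);
} open_calls SEC(".maps");

SEC("kprobe/do_sys_openat2")
int trace_open(void *ctx)
{
    __u32 pid = bpf_get_current_pid_tgid() >> 32;
    __u64 one = 1;
    __u64 *count = bpf_map_lookup_elem(&open_calls, &pid);

    if (count)
        __sync_fetch_and_add(count, 1);  /* shared map: atomic increment */
    else
        bpf_map_update_elem(&open_calls, &pid, &one, BPF_ANY);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```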

Security Enforcement and Monitoring

The Berkeley Packet Filter (BPF), particularly in its extended form (eBPF), facilitates security enforcement by enabling kernel-level filtering of system calls through seccomp-BPF. This mechanism, integrated into the Linux kernel since version 3.5, allows processes to load BPF programs that inspect and restrict incoming syscalls based on criteria such as syscall number, arguments, and instruction pointers, thereby implementing process sandboxing. The kernel validates that these programs terminate and avoid unbounded loops or invalid memory accesses, providing deterministic enforcement without risking kernel instability. In container environments like Docker, seccomp-BPF profiles define default or custom syscall allowlists, reducing attack surfaces by blocking potentially exploitable calls such as execve or clone unless explicitly permitted.

eBPF further advances enforcement via integrations like BPF-LSM, which hooks into Linux Security Modules (LSMs) to dynamically enforce access controls on file systems, network sockets, and capabilities at runtime. This allows for context-aware policies, such as restricting privilege escalations or unauthorized network connections, loaded without recompiling the kernel or rebooting systems. For instance, eKCFI employs eBPF to validate control-flow integrity by monitoring indirect branches against a predefined control-flow graph, enabling flexible, post-deployment hardening against return-oriented programming attacks.

In security monitoring, eBPF programs attach to tracepoints, kprobes, and uprobes to collect telemetry on events like syscall invocations, file I/O, and network flows with minimal overhead, often under 5% CPU utilization even under load. This enables detection of anomalies, such as unexpected privilege escalations or lateral movements, as implemented in tools like Tetragon, which uses eBPF for Kubernetes-native runtime visibility and policy enforcement. Similarly, Falco and Red Canary's eBPF-based collectors monitor behavioral patterns for threat hunting, exporting data to userspace for analysis without instrumenting applications directly. These capabilities stem from eBPF's in-kernel execution model, which bypasses the performance bottlenecks of traditional polling or module-based monitoring while maintaining sandboxing to prevent observer effects from compromising host integrity.
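
The seccomp-BPF mechanism described above is a classic BPF program over struct seccomp_data. The following minimal sketch denies execve with EPERM and allows everything else; a production filter must also validate seccomp_data.arch before trusting the syscall number, a step omitted here for brevity:

```c
#include <errno.h>
#include <stddef.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <sys/prctl.h>
#include <sys/syscall.h>

static int install_filter(void)
{
    struct sock_filter insns[] = {
        /* A = syscall number */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, nr)),
        /* if (A == __NR_execve) goto deny; else goto allow */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_execve, 0, 1),
        BPF_STMT(BPF_RET | BPF_K,
                 SECCOMP_RET_ERRNO | (EPERM & SECCOMP_RET_DATA)),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(insns) / sizeof(insns[0]),
        .filter = insns,
    };

    /* Required so an unprivileged process may install the filter. */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
        return -1;
    return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
}
```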

Security Analysis

Defensive Uses and Benefits

The Berkeley Packet Filter (BPF), particularly its extended variant (eBPF), enables defensive network security through efficient kernel-level packet inspection and filtering, allowing systems to drop malicious traffic early in the processing stack without user-space overhead. In firewall implementations, BPF applies customizable rules to prevent packet-level attacks, such as flood-based denial of service, by matching specific patterns at the network interface. For intrusion detection systems (IDS), eBPF programs hook into the kernel via mechanisms like XDP to perform parallel payload matching using algorithms like Aho-Corasick, pre-dropping suspicious packets and achieving up to three times the throughput of traditional tools like Snort under high traffic loads.

Beyond networking, eBPF supports host-level defensive enforcement by monitoring system calls, tracing kernel events, and enforcing runtime policies, such as rejecting unauthorized processes or isolating workloads in cloud environments. Tools leveraging eBPF provide granular visibility into kernel behavior for incident response, enabling real-time threat mitigation in containerized and multi-tenant setups without destabilizing the system. This includes policy-based responses like process termination or syscall blocking, integrated into cloud workload protection platforms (CWPP) for comprehensive security observability.

Key benefits stem from eBPF's verifier, which statically analyzes programs for safety by enforcing bounds checking, memory safety, loop absence, and resource limits (e.g., up to 1 million simulated instructions), preventing crashes or exploits that plague traditional kernel modules. This sandboxed execution, combined with just-in-time (JIT) compilation, delivers low-latency performance with minimal CPU and memory use, as only aggregated results are surfaced to user space, supporting scalable defenses in high-volume environments. Additionally, user-space program loading allows dynamic updates without recompilation, enhancing agility while maintaining separation of privileges through capability checks like CAP_BPF.

Offensive Risks and Malware Exploitation

Malware exploiting the Berkeley Packet Filter (BPF), particularly its extended variant eBPF, leverages kernel-level execution to enable sophisticated evasion and persistence after initial compromise, as loading programs typically requires root access. Attackers use BPF's attachment points, such as kprobes for syscall interception or XDP/tc for packet processing, to hide processes, files, and network activity from monitoring tools, thereby undermining endpoint detection and response (EDR) systems. These capabilities stem from BPF's in-kernel execution model, which allows dynamic instrumentation without modifying kernel code, but they introduce risks like tampering with eBPF maps to disable firewalls or security hooks.

Key evasion techniques include socket filters for selective traffic inspection, where an implant responds only to packets containing predefined "magic" values, bypassing standard firewall rules and avoiding detection by network scanners. Syscall hooks via kprobes enable file hiding by filtering directory listings (e.g., altering SYS_getdents outputs) or injecting unauthorized privileges, such as modifying sudoers files through SYS_openat2 interception. Helpers like bpf_probe_write_user facilitate user-space memory manipulation for payload deployment, while bpf_override_return can block process termination or scans, and verifier flaws (e.g., the fuzzer-discovered CVE-2023-2163) allow bounded but impactful manipulations.

Notable malware instances demonstrate these risks. BPFDoor, a stealthy backdoor analyzed in May 2022 and linked to Chinese threat actors targeting global organizations including government entities, employs cBPF sniffers to monitor traffic for magic sequences (e.g., 0x5293 in TCP packets), enabling reverse shells while masquerading processes and timestomping files for persistence. Symbiote, identified in 2022, prepends BPF filters using LD_PRELOAD to conceal command-and-control (C2) communications, evading traffic analysis. Boopkit, a 2023 proof-of-concept rootkit, activates via eBPF tracepoints on malformed TCP packets and hides processes through getdents64 hooks. More recently, the LinkPro rootkit, dissected in October 2025, uses eBPF for process and file concealment (e.g., hooking getdents and sys_bpf), network hiding on port 2233, and activation via magic TCP SYN packets with window size 54321, operating in passive or active C2 modes with fallback persistence mechanisms.

Such exploitations highlight BPF's dual-use nature, where post-exploitation deployment can render host visibility unreliable, as modified programs may alter logs or block probes from tools like bpftool. Detection remains challenging due to BPF's legitimate use in observability tools, necessitating load-time monitoring and scrutiny of unexpected BPF attachments.

Historical Incidents and Vulnerabilities

One notable early vulnerability in the Berkeley Packet Filter (BPF) subsystem was CVE-2017-16995, disclosed in December 2017, which involved a sign-extension error in the eBPF verifier's arithmetic checks. This flaw enabled unprivileged local users to trigger memory corruption, potentially leading to denial of service or privilege escalation, affecting kernels up to version 4.14 without specific mitigations. An exploit module was developed and integrated into Metasploit, demonstrating practical local privilege escalation on vulnerable systems compiled with BPF support.

In the extended BPF (eBPF) era, the verifier's complexity has introduced recurrent bugs, often exploitable for kernel-level arbitrary read/write primitives. For instance, CVE-2023-2163, identified in 2023 via fuzzing and detailed publicly in 2024, stemmed from imprecise path pruning in the verifier, allowing attackers to corrupt the verifier's state tracking and bypass safety checks for out-of-bounds memory access. This permitted local privilege escalation or container escapes on affected distributions, with a proof-of-concept exploit chaining verifier bypasses to leak kernel pointers and modify credentials. The issue was patched by refining precision propagation in subsequent commits, but it highlighted a pattern of verifier flaws, including prior CVEs such as CVE-2020-8835 and CVE-2021-3490, which similarly enabled escalation through inadequate bounds or pointer validation.

Beyond kernel bugs, BPF has been abusively leveraged in malware for evasion, marking real-world incidents of offensive use. The BPFDoor backdoor, active since at least 2017 but publicly uncovered in June 2022, targets Linux and Solaris servers, particularly in telecommunications and government sectors across Asia and the Middle East. Attributed to the China-linked Red Menshen group, it deploys classic BPF filters via raw sockets to selectively hide command-and-control traffic—activating only on predefined byte sequences—thus evading network monitoring tools. Variants observed through 2025, including in the SK Telecom breach disclosed in April 2025, demonstrated persistence via reverse shells and process forking, infecting thousands of servers before detection. Similarly, the Symbiote malware, noted in 2022, prepends BPF filters to legitimate ones using LD_PRELOAD hooks, concealing traffic on compromised hosts. These cases underscore BPF's dual-use potential, where its kernel-level packet inspection enables stealthy persistence despite the verifier's safeguards against malicious programs.

Limitations and Criticisms

Performance and Resource Constraints

eBPF programs achieve high performance through just-in-time (JIT) compilation to native machine code and direct execution within the kernel, avoiding user-space transitions and enabling near-native speeds for tasks like packet processing. Benchmarks indicate overhead as low as 20 nanoseconds per program invocation in basic tracing use cases. Classic BPF, lacking advanced features like maps or helpers, imposes even lighter runtime costs via its simple virtual machine but relies on interpreter or JIT execution, with performance enhanced by hardware offloads on supported network interfaces that bypass the host CPU entirely.

Despite these efficiencies, resource constraints limit scalability. The eBPF verifier enforces bounds such as a 512-byte stack limit per program to curb memory abuse, alongside instruction caps (e.g., 4096 for unprivileged users) and restrictions on control flow such as bounded backward jumps. Maps for persistent state are configurable per instance (e.g., via max_entries) but aggregate toward system memory limits, tunable via sysctls, with excessive maps or entries risking out-of-memory conditions. Verifier analysis, which exhaustively simulates execution paths, adds load-time overhead—potentially seconds for intricate programs—though runtime checks remain lightweight.

In resource-constrained environments, widespread eBPF attachment (e.g., for observability or security agents) can accumulate CPU and memory pressure, leading to packet drops or degraded throughput, necessitating careful auditing of active programs and map counts. Optimization strategies include minimizing instruction counts, using helper calls judiciously (post-Spectre mitigations have increased their cost), and prioritizing JIT-enabled architectures to sustain throughput under load.

Complexity in Development

Developing eBPF programs demands proficiency in low-level C programming and deep familiarity with Linux kernel internals, imposing a steep learning curve on developers unfamiliar with systems-level code. Unlike standard user-space applications, eBPF code operates in a highly restricted virtual machine environment, prohibiting unbounded loops, direct stack access beyond fixed limits, and arbitrary function calls, in order to enforce kernel safety. This subset of C requires developers to master eBPF-specific constructs like maps for data storage, helper functions for kernel interactions, and attachment points (hooks) for program injection, often necessitating iterative trial and error to comply with evolving kernel APIs.

The verifier exacerbates development challenges through its rigorous static analysis, which simulates all execution paths to prevent invalid memory accesses, infinite loops, or resource exhaustion, but frequently rejects valid programs with opaque error messages. As of Linux kernel version 6.12, the verifier comprises over 20,000 lines of code, reflecting its growing intricacy in handling advanced features like bounded loops and pointer tracking; this complexity also leads to verifier timeouts on intricate programs exceeding instruction limits (typically 1 million simulated steps). Developers must often refactor code—employing techniques like tail calls to split logic across multiple programs or manual bounds checking—to satisfy the verifier, a process that can consume significant time without runtime feedback. Empirical analyses of eBPF-related developer issues highlight recurring verifier hurdles, including register state tracking and bounds enforcement, underscoring the need for tools like bpftool for inspecting verifier logs.

Maintenance further compounds complexity, as eBPF programs tied to specific kernel versions risk incompatibility with updates that alter verifier behavior or deprecate helpers, requiring ongoing adaptation amid kernel dependency. While higher-level frameworks such as BCC or bpftrace abstract some intricacies, they do not eliminate the core demands of verifier compliance and kernel awareness, limiting accessibility for non-expert developers.

Scope and Architectural Drawbacks

The Berkeley Packet Filter (BPF), including its extended form eBPF, operates within a narrowly defined scope centered on safe, event-driven execution at predefined hooks, such as packet ingress/egress points, tracepoints, and kprobes, precluding arbitrary kernel modifications or core subsystem alterations. Unlike traditional kernel modules, BPF programs cannot introduce new program types, maps, or fundamental kernel behaviors, limiting their utility to augmentation rather than wholesale replacement of existing logic. This scoped attachment model ensures bounded intervention but restricts comprehensive kernel-wide programming, as programs lack direct access to unmodified kernel data structures or the ability to expose novel functions without upstream integration.

Architecturally, BPF's verifier imposes stringent constraints to guarantee termination and memory safety, capping unprivileged programs at 4096 instructions (with path exploration up to 1 million) and 512 bytes of stack space, which curtails complex computations and necessitates highly optimized, linear code paths. Loops must be provably bounded (supported since kernel 5.3), functions must be inlined or called through restricted mechanisms without external libraries, and memory operations are confined to read-only probes for kernel data, with experimental, root-restricted writes to user-space memory only. These verifier-enforced rules, while mitigating risks like infinite loops or stack overflows, reduce expressiveness relative to unrestricted C modules, often requiring workarounds that sacrifice functionality or portability across kernel versions.

Portability further suffers from verifier variability, where programs accepted on one kernel may fail on another due to subtle differences in helper availability or type checking, complicating deployment in heterogeneous environments. Overall, this sandboxed design prioritizes verifiability over flexibility, rendering BPF unsuitable for Turing-complete or state-heavy tasks that demand unbounded resources or direct kernel mutability.

References

  1. [1]
    [PDF] The BSD Packet Filter: A New Architecture for User-level ... - Tcpdump
    Dec 19, 1992 · The BSD Packet Filter (BPF) uses a new, register- based filter evaluator that is up to 20 times faster than the original design.
  2. [2]
    bpf - FreeBSD Manual Pages
    The Berkeley Packet Filter provides a raw interface to data link layers in a protocol independent fashion. All packets on the network, even those destined for ...
  3. [3]
    The BSD Packet Filter: A New Architecture for User-level ... - USENIX
    The BSD Packet Filter: A New Architecture for User-level Packet Capture. Authors: Steven McCanne & Van Jacobson, Lawrence Berkeley Laboratory.
  4. [4]
    The BSD packet filter: a new architecture for user-level packet capture
    The BSD Packet Filter (BPF) uses a new, register-based filter evaluator that is up to 20 times faster than the original design.
  5. [5]
    Linux Socket Filtering aka Berkeley Packet Filter (BPF)
    BPF allows a user-space program to attach a filter onto any socket and allow or disallow certain types of data to come through the socket.
  6. [6]
    BPF - the forgotten bytecode - The Cloudflare Blog
    May 21, 2014 · The original concepts underlying the BPF were described in a 1993 paper and didn't require updates for many years.
  7. [7]
  8. [8]
    bpf(4) - NetBSD Manual Pages
    The Berkeley Packet Filter provides a raw interface to data link layers in a protocol independent fashion. All packets on the network, even those destined for ...
  9. [9]
    bpf(4) - OpenBSD manual pages
    Sets the filter program used by the kernel to discard uninteresting packets. An array of instructions and its length are passed in using the following structure ...
  10. [10]
  11. [11]
    Linux Socket Filtering aka Berkeley Packet Filter (BPF)
    The syntax is closely modelled after Steven McCanne's and Van Jacobson's BPF paper. The BPF architecture consists of the following basic elements: Element.
  12. [12]
    1 BPF Instruction Set Architecture (ISA)
    This document specifies the BPF instruction set architecture (ISA). As a historical note, BPF originally stood for Berkeley Packet Filter.
  13. [13]
    [PDF] BPF – in-kernel virtual machine
    • instruction set has some built-in safety (no exposed stack pointer, instead load instruction has 'mem' modifier). • dynamic packet-boundary checks.
  14. [14]
    Classic BPF vs eBPF - The Linux Kernel documentation
    Original BPF and eBPF are two operand instructions, which helps to do one-to-one mapping between eBPF insn and x86 insn during JIT.
  15. [15]
  16. [16]
    A thorough introduction to eBPF - LWN.net
    Dec 2, 2017 · The original patch that added support for eBPF in the 3.15 kernel showed that eBPF was up to four times faster on x86-64 than the old classic ...
  17. [17]
    What is eBPF? An Introduction and Deep Dive into the eBPF ...
    BPF originally stood for Berkeley Packet Filter, but now that eBPF (extended BPF) can do so much more than packet filtering, the acronym no longer makes sense.
  18. [18]
    BPF Documentation — The Linux Kernel documentation
  19. [19]
    eBPF: One Small Step - Brendan Gregg
    May 15, 2015 · This is my first post about eBPF (extended Berkeley Packet Filter), and I'll summarize how we got here, why eBPF and maps are important, and ...
  20. [20]
    A Gentle Introduction to eBPF - InfoQ
    May 3, 2021 · First released in limited capacity in 2014 with Linux 3.18, making full use of eBPF requires at least Linux 4.4 or above. In Figure 2, we see a ...
  21. [21]
    [PDF] The BSD Packet Filter: A New Architecture for User-level ... - USENIX
    In this paper, we present the design of BPF, outline how it interfaces with the rest of the system, and describe the new approach to the filtering mechan- ism.
  22. [22]
    [PDF] Packet Filters - Proposed solutions and current trends - Columbia CS
    Apr 14, 2010 · The BSD Packet Filter: A New Architecture for User-level Packet Capture. In Proceedings of the USENIX Winter Conference, pages 259-269, San ...
  23. [23]
    An Introduction to the Extended Berkeley Packet Filter
    Sep 11, 2024 · This thing called eBPF​​ The extended Berkeley Packet Filter (eBPF) first officially appeared in Linux 3.18, which was released in December 2014 ...
  24. [24]
    The eBPF Runtime in the Linux Kernel - arXiv
    Oct 3, 2024 · eBPF is a runtime that enables users to load programs into the operating system (OS) kernel, like Linux or Windows, and execute them safely and efficiently.
  25. [25]
  26. [26]
    eBPF Explained: Use Cases, Concepts, and Architecture - Tigera
    You can conceive of it as a lightweight, sandboxed virtual machine (VM) within the Linux kernel. It allows programmers to run Berkeley Packet Filter (BPF) ...
  27. [27]
    eBPF application development: Beyond the basics
    Oct 19, 2023 · This article provides a guide to eBPF application development. As the title suggests, the content is focused on eBPF 201 concepts.
  28. [28]
    eBPF for Windows
    eBPF is a well-known technology for providing programmability and agility, especially for extending an OS kernel, for use cases such as DoS protection and ...
  29. [29]
    eunomia-bpf/bpftime: Userspace eBPF runtime for Observability ...
    bpftime is a High-Performance userspace eBPF runtime and General Extension Framework designed for userspace. It enables faster Uprobe, USDT, Syscall hooks, XDP ...
  30. [30]
    tuxology/libebpf: Experiemental userspace eBPF library - GitHub
    Dec 17, 2020 · This is a modified port of the Berkeley Packet Filter (BPF) infrastructure from the Linux kernel to the userspace as a shared library.
  31. [31]
    RFC 9669: BPF Instruction Set Architecture (ISA)
    BPF is now considered a standalone term that does not stand for anything. The original BPF is sometimes referred to as cBPF (classic BPF) to distinguish it ...
  32. [32]
    BPF Exam | TCPDUMP & LIBPCAP
    This tool, BPF Exam, illustrates the theory of Berkeley Packet Filter compilation and the practice of its reference implementation in libpcap.
  33. [33]
    libbpf Overview - The Linux Kernel documentation
    libbpf is a C-based library containing a BPF loader that takes compiled BPF object files and prepares and loads them into the Linux kernel.
  34. [34]
    eBPF verifier - The Linux Kernel documentation
    The safety of the eBPF program is determined in two steps. First step does DAG check to disallow loops and other CFG validation.
  35. [35]
    Verifier - eBPF Docs
    Jan 28, 2024 · Its main responsibility is to ensure that the BPF program is "safe" to execute. It does this by checking the program against a set of rules.
  36. [36]
    Bounded loops in BPF for the 5.3 kernel - LWN.net
    Jul 31, 2019 · The resulting code not only adds support for bounded loops, but also a number of important optimizations. Writing BPF programs for the kernel ...
  37. [37]
    Berkeley packet filters - IBM
    Berkeley Packet Filters (BPF) provide a powerful tool for intrusion detection analysis. Use BPF filtering to quickly reduce large packet captures.
  38. [38]
    Chapter 44. Understanding the eBPF networking features in RHEL 8
    The Extended Berkeley Packet Filter (eBPF) lets developers run sandboxed programs in the Linux kernel. For networking, eBPF programs attach to hooks to ...
  39. [39]
    eBPF XDP: The Basics and a Quick Tutorial | Tigera - Creator of Calico
    eBPF is an extended version of the Berkeley Packet Filter (BPF). It is an abstract virtual machine (VM) that runs within the Linux kernel, much like the ...
  40. [40]
    How We Used eBPF to Build Programmable Packet Filtering in ...
    Dec 6, 2021 · We wanted to find a way to use eBPF to extend our use of nftables in Magic Firewall. This means being able to match, using an eBPF program within a table and ...
  41. [41]
    eBPF Userspace API - The Linux Kernel documentation
    eBPF programs can be attached to various kernel subsystems, including networking, tracing and Linux security modules (LSM).
  42. [42]
    Linux eBPF Tracing Tools - Brendan Gregg
    Dec 28, 2016 · This page shows examples of performance analysis tools using enhancements to BPF (Berkeley Packet Filter) which were added to the Linux 4.x ...
  43. [43]
    Tracing with BPF - Linux Observability with BPF [Book] - O'Reilly
    Beginning in this chapter, we're going to use a powerful toolkit to write BPF programs, the BPF Compiler Collection (BCC). BCC is a set of components that makes ...
  44. [44]
    A thorough introduction to bpftrace - Brendan Gregg
    Aug 19, 2019 · bpftrace is a new open source tracer for Linux for analyzing production performance problems and troubleshooting software.
  45. [45]
    ajor/bpftrace: High-level tracing language for Linux eBPF - GitHub
    BPFtrace uses LLVM as a backend to compile scripts to BPF-bytecode and makes use of BCC for interacting with the Linux BPF system, as well as existing Linux ...
  46. [46]
    eBPF Applications Landscape
    Tracee uses eBPF technology to detect and filter operating system events, helping you expose security insights, detect suspicious behavior, and capture forensic ...
  47. [47]
    Next-Generation Observability with eBPF - Isovalent
    Sep 8, 2023 · eBPF is an impressive tool to use for observability that enables deeper insights when compared to more traditional observability solutions.
  48. [48]
    Seccomp BPF (SECure COMPuting with filters)
    Seccomp filtering provides a means for a process to specify a filter for incoming system calls. The filter is expressed as a Berkeley Packet Filter (BPF) ...
  49. [49]
    seccomp(2) - Linux manual page - man7.org
    ... Berkeley Packet Filter (BPF) passed via args. This argument is a pointer to a struct sock_fprog; it can be designed to filter arbitrary system calls and ...
  50. [50]
    Seccomp in Kubernetes: What It Is and How to Use It - DevOpsCube
    Apr 1, 2025 · Seccomp uses Linux's seccomp-bpf (Berkeley Packet Filter) mechanism that filters syscalls using predefined rules. BPF: Originally used for ...
  51. [51]
    Enhancing Runtime Security With EBPF/BPF-LSM: Impact Of Cisco's ...
    Jul 14, 2025 · While eBPF was typically used in observability and monitoring scenarios, the extensions of kernel space hooks and bpf-helpers ecosystem allowed ...
  52. [52]
    Scaling Runtime Security: How eBPF is Solving Decade-Long ...
    May 7, 2023 · eBPF programs are loaded at runtime and can be attached to various kernel events to inspect or modify data passing through the kernel, without ...
  53. [53]
    [PDF] Practical and Flexible Kernel CFI Enforcement using eBPF - People
    eKCFI uses eBPF to enforce kCFI, allowing dynamic policy changes and context sensitivity, unlike traditional approaches that are inflexible.
  54. [54]
    [PDF] Security Monitoring with eBPF - Brendan Gregg
    Extended Berkley Packet Filter (eBPF) is a new Linux feature which allows safe and efficient monitoring of kernel functions. This has dramatic implications for ...
  55. [55]
    eBPF for Advanced Linux Performance Monitoring and Security
    Mar 4, 2025 · The origin of the eBPF story dates back to Berkeley Packet Filter (BPF) in 1992. BPF was meant to filter network packets efficiently and safely ...
  56. [56]
    eBPF Runtime Security at Scale: Top Tetragon Use Cases (Part 2)
    Aug 15, 2024 · See the top eBPF runtime security use cases, and how eBPF security offers stronger runtime enforcement and threat detection.
  57. [57]
    eBPF for security: a beginner's guide | Red Canary
    Jan 4, 2022 · Red Canary uses eBPF to gather security telemetry directly from the Linux kernel. With our open source tool, now you can too.
  58. [58]
    Customizable Firewall Rules and Filters | SSN Docs
    Oct 27, 2023 · The SSR uses Berkeley Packet Filters (BPF) to create customizable firewall filters. This filtering solution can be a key tool to prevent packet level attacks.
  59. [59]
    Design and implementation of an intrusion detection system by ...
    The IDS uses eBPF in the kernel for fast pattern matching and pre-dropping packets, with a user space program to examine remaining packets. It can match ...
  60. [60]
    The Advantages of eBPF for CWPP Applications | CSA
    Feb 23, 2023 · eBPF is a framework for loading and running user-defined programs within the Linux OS kernel, to observe, change, and respond to kernel behavior.
  61. [61]
    Using eBPF in Kubernetes: A Security Overview - Wiz
    Jul 20, 2024 · eBPF detects and mitigates security threats by monitoring and sandboxing suspicious activities. Efficient resource allocation, eBPF aids in ...
  62. [62]
    Harnessing the eBPF Verifier - The Trail of Bits Blog
    Jan 19, 2023 · As a safety measure, the kernel “verifies” eBPF programs at load time and rejects any that it deems unsafe. However, using eBPF is a CI / CD ...
  63. [63]
    [PDF] eBPF Security Threat Model - Linux Foundation
    The eBPF threat model outlines eBPF's defenses, potential risks, and inherent controls, using attack trees to explore how attackers could utilize eBPF.
  64. [64]
    eBPF Offensive Capabilities – Get Ready for Next-gen Malware
    Sep 5, 2023 · In this article, we will explore some of the offensive capabilities that eBPF can provide to an attacker and how to defend against them.
  65. [65]
    eBPF: A new frontier for malware - Red Canary
    Jan 5, 2023 · eBPF (extended Berkeley Packet Filter) has taken the Linux world by storm. First introduced in 2013 to enable programmable networking, ...
  66. [66]
    BPFDoor - An Evasive Linux Backdoor Technical Analysis
    May 11, 2022 · The use of BPF and packet capture provides a way to bypass local firewalls to allow remote attackers to control the implant. Finally, the ...
  67. [67]
    How BPF-Enabled Malware Works: Bracing for Emerging Threats
    Oct 19, 2023 · BPF is a kernel-level engine that implements a virtual machine (VM) to interpret bytecode. Originally, it was designed to accomplish network ...
  68. [68]
    LinkPro: eBPF rootkit analysis - Synacktiv
    Oct 14, 2025 · ... rootkits, with features ranging from establishing secret command and control (C2) channels to process hiding and container evasion techniques.
  69. [69]
  70. [70]
    Linux - BPF Sign Extension Local Privilege Escalation (Metasploit)
    Jul 19, 2018 · CVE-2017-16995 . local exploit for Linux platform. ... The target system must be compiled with BPF support and must not have kernel.
  71. [71]
    A deep dive into CVE-2023-2163: How we found and fixed an eBPF ...
    Aug 8, 2024 · The bug (CVE-2023-2163) discovered by Buzzer lies in the eBPF path pruning logic and we were successfully able to write an exploit that can be ...
  72. [72]
  73. [73]
    BPFdoor Malware Targets Linux Systems Unnoticed for Five Years
    May 13, 2022 · It turned out that the backdoor malware called BPFdoor, which cybersecurity researchers recently discovered, has been targeting Linux and Solaris systems for ...
  74. [74]
    Quick and Simple: BPFDoor Explained - The Hacker News
    Jun 13, 2022 · BPFDoor is a piece of malware associated with China-based threat actor Red Menshen that has hit mostly Linux operating systems.
  75. [75]
    [PDF] How and When You Should Measure CPU Overhead of eBPF ...
    Oct 28, 2020 · How does it work? – Adds ~20ns of overhead per run. Two ways to enable kernel ...
  76. [76]
    Pitfalls of relying on eBPF for security monitoring (and some solutions)
    Sep 25, 2023 · An eBPF program's stack space is limited to 512 bytes. When writing eBPF code, developers need to be particularly cautious about how much ...
  77. [77]
    Understanding of BPF - Unix & Linux Stack Exchange
    Apr 18, 2022 · BPF (or eBPF) is a language used for filtering packets, system call filters, and performance monitoring, executed in the kernel.
  78. [78]
    BPF maps — The Linux Kernel documentation
  79. [79]
    eBPF verifier — The Linux Kernel documentation
  80. [80]
    What is eBPF, and why does it matter for observability? - New Relic
    Jun 3, 2025 · eBPF is a groundbreaking kernel technology introduced in Linux 4.x, enabling bytecode to run directly within the Linux kernel.
  81. [81]
    eBPF Security: Top 5 Use Cases, Challenges & Best Practices
    Dec 29, 2024 · eBPF is a revolutionary technology in the Linux kernel that enables the safe and efficient execution of custom programs directly within the ...
  82. [82]
    [PDF] Evaluation of tail call costs in eBPF - Linux Plumbers Conference
    Aug 24, 2020 · This paper compares the performance of tail calls between eBPF programs before and after the optimizations introduced to mitigate Spectre ...
  83. [83]
    eBPF – The Best Kept Secret in Technology - Kerno
    May 7, 2025 · Steep Learning Curve: eBPF programming requires knowledge of kernel internals, low-level programming in C, and familiarity with tools like ...
  84. [84]
    BPF vs eBPF: Key Differences Explained For DevOps & SREs
    BPF, or Berkeley Packet Filter, is a technology that allows programs to filter and capture network packets efficiently. Developed in the early 1990s, BPF ...
  85. [85]
    A look inside the BPF verifier - LWN.net
    Jul 23, 2024 · The BPF verifier is, fundamentally, a form of static analysis. It determines whether BPF programs submitted to the kernel satisfy a number of properties.
  86. [86]
    eBPF Verifier: Debugging Tips, Errors, and Best Practices
    Mar 13, 2025 · The eBPF verifier is a part of the eBPF framework that checks eBPF bytecode for safety risks before they run in the Linux kernel.
  87. [87]
    Complexity of the BPF Verifier - pchaigno
    Jul 2, 2019 · This post discusses the increasing complexity of the Linux eBPF verifier by measuring various metrics from the number of lines of code to ...
  88. [88]
    eBPF: Yes, it's Turing Complete! - Isovalent
    Sep 12, 2024 · By version 6.1, the kernel supports a helper function called bpf_loop(). This allows the loop instructions to be implemented as a separate eBPF ...
  89. [89]
    An Empirical Study on the Challenges of eBPF Application ...
    This study aims to shed light on these challenges by analyzing eBPF issues on Stack Overflow along several eBPF-specific dimensions.
  90. [90]
    The eBPF Evolution and Future: From Linux Origins to Cross ...
    Sep 3, 2024 · eBPF (Extended Berkeley Packet Filter) was initially developed on the Linux platform and has gradually expanded to other operating systems like ...
  91. [91]
    BPF Design Q&A - The Linux Kernel documentation
    Q: Is BPF a generic instruction set similar to x64 and arm64? Q: Is BPF a generic virtual machine ? BPF is generic instruction set with C calling convention.
  92. [92]
    BPF as a safer kernel programming environment - LWN.net
    Sep 23, 2022 · The BPF subsystem, on the other hand, provides a programming environment that allows engineers to write programs that can run safely in kernel ...
  93. [93]
    Berkeley Packet Filter (BPF) — General Overview - CodiLime
    Feb 6, 2023 · The Berkeley Packet Filter was originally introduced to increase network packet handling performance. Previous BPF solutions offered only user space.
  94. [94]
    What is eBPF? Common Use Cases and Best Practices - Groundcover
    Mar 10, 2024 · eBPF, which is short for extended Berkeley Packet Filter, is a Linux kernel feature that makes it possible to run sandboxed programs within kernel space.