
Rump kernel

The Rump kernel is a portable, componentized implementation of operating system components, originating from the NetBSD project, that enables the reuse of kernel-quality drivers—such as file systems, network stacks, and device drivers—as independent, userspace-executable modules without requiring a full traditional operating system. Developed primarily by Antti Kantee starting in 2007, it implements the "anykernel" concept, in which drivers can be compiled into monolithic kernels, run in userspace on POSIX systems, or integrated into unikernels and embedded environments, thereby reducing overhead and enhancing portability across platforms such as Xen, JavaScript engines, and the Genode OS framework. Key features include a hypercall interface (rumpuser) for host resource access, support for approximately 200 unmodified NetBSD system calls to run applications, and a minimal codebase of around 1,000 lines for new platform ports, leveraging over a million lines of battle-tested drivers. This design facilitates rapid, test-driven driver development with subsecond iteration cycles, improves security by isolating untrusted components (e.g., mounting suspicious file systems in userspace), and minimizes memory footprints and attack surfaces compared to full OS deployments. Notable applications include running web servers such as thttpd or scripting-language interpreters in rump environments, integration with high-performance networking frameworks such as netmap, and contributions to third-party projects building custom software stacks. As of 2025, rump kernels are integrated into modern systems such as NetBSD 10.0 for userspace networking and the GNU Hurd for disk drivers. The project's evolution is detailed in Kantee's 2012 doctoral thesis and subsequent publications, emphasizing flexible OS internals over rigid monolithic structures.

Core Concepts

Anykernel Architecture

The anykernel architecture represents a flexible approach to operating system design, merging the performance characteristics of a monolithic kernel with the modularity typically associated with user-space environments for drivers and subsystems. In this model, kernel code is organized into reusable components that can be extracted without modification, allowing them to function seamlessly across diverse configurations such as monolithic kernels, microkernels, or even exokernels. This emphasizes the separation of drivers from the core kernel infrastructure, enabling drivers to operate independently while retaining their original interfaces and behaviors. The foundational idea draws from early explorations in componentized OS frameworks. Key benefits of the anykernel approach include simplified driver development and testing, as components can be compiled and executed in user space without risking kernel instability or requiring extensive rewriting. Developers can leverage familiar tools and environments for debugging, such as running a driver as a library linked to an application, which facilitates rapid iteration and troubleshooting. Additionally, it enhances security isolation by confining potentially vulnerable drivers—such as those for file systems or peripherals—to sandboxed contexts, thereby reducing the overall attack surface of the host system without sacrificing functionality. These advantages stem from the architecture's focus on portability and reusability, making it possible to deploy battle-tested drivers in lightweight or specialized setups. Under the anykernel model, various drivers—including those for devices, file systems, and networking—can be compiled either into a traditional monolithic kernel for optimal performance or as standalone entities that run in user space or minimal runtime environments. For instance, a networking driver might be linked into a user-level application to handle packet processing, or a file system driver could operate as an isolated server providing POSIX-compliant access.
This flexibility allows the same codebase to support multiple deployment scenarios, from embedded systems to virtualized hosts, by treating drivers as libraries with well-defined dependencies on core primitives. The Rump kernel serves as the first practical implementation of this anykernel concept within the NetBSD operating system.

Rump Kernel Components

The Rump kernel is built upon the anykernel architecture, which enables the extraction and reuse of kernel components as independent libraries. These components are derived from the NetBSD kernel but separated from its core infrastructure, such as process scheduling and virtual memory management, allowing them to be compiled as standalone libraries without dependencies on a full operating system. This separation facilitates independent building and linking into custom images, often using tools like the buildrump.sh script for cross-compilation on non-NetBSD hosts. The modular design divides components into orthogonal "factions"—such as base system routines, device handling, file systems, and networking—permitting selective inclusion at build or link time to create tailored software stacks with minimal overhead. Core components encompass a range of kernel-quality drivers and subsystems, including:
  • File systems: Such as the Berkeley Fast File System (FFS), the Network File System (NFS), ext2, and tmpfs, providing support for disk-based, memory-based, and network-based storage operations.
  • POSIX system calls: Approximately 200 NetBSD-compatible syscalls (e.g., for sockets and file operations), prefixed with rump_sys_ and implemented via wrappers that ensure ABI compatibility without relying on kernel traps.
  • PCI device drivers: Enabling hardware configuration, interrupt handling, and DMA access for a variety of peripherals.
  • TCP/IP networking stack: A full implementation supporting protocols like IPv4/IPv6, TCP, and UDP, with options for raw access or integration with host networking via virtual interfaces.
  • SCSI support: Protocol stacks for storage devices, integrated into the broader I/O driver layer.
Rump kernel components are categorized into driver types that support protocol translation and device access, allowing developers to assemble specialized environments. Device drivers, such as PCI and USB implementations, manage low-level hardware interactions like I/O and interrupts, forming the foundation for storage and peripheral access in custom configurations. File system drivers handle data organization and access, enabling features like mounting remote NFS shares or local FFS volumes without a traditional in-kernel mount. Networking drivers, exemplified by the TCP/IP stack, provide end-to-end communication capabilities, such as running a web server or integrating with high-performance frameworks for packet processing. This modularity supports the construction of bespoke stacks—for instance, a minimal networking-only kernel or a file-system-focused appliance—by linking only the required libraries, avoiding the bloat of unused OS elements.

Development History

Origins and Antti Kantee

The Rump kernel project originated as a research initiative led by Antti Kantee, a long-time NetBSD contributor since 1999, who developed it primarily to facilitate the execution of kernel drivers in user space for improved development and testing. Kantee's work addressed key limitations in traditional driver development, such as the challenges of testing on real hardware, which often required complex setups, and portability issues that demanded extensive porting efforts or conditional compilation hacks. These motivations stemmed from the need for a more efficient environment where unmodified kernel code could run reliably outside the kernel, minimizing overhead and enhancing driver reuse across platforms. Conceptualized within the NetBSD project, the Rump kernel extended the anykernel architecture—a theoretical foundation for flexible kernel designs that decouple drivers from specific kernel environments. Kantee's initial prototypes, developed starting in 2007, focused on enabling user-space execution of kernel components like file systems (e.g., FFS) and networking stacks, with the first functional version taking about two weeks to prototype but requiring around four years of refinement for stability. This approach allowed developers to debug and test drivers without the full overhead of a virtual machine or hardware reboot cycle, drawing an analogy in Kantee's work to how emulators simplify hardware development compared to bare-metal coding. The project's first integration into the NetBSD source tree occurred incrementally, with initial commits in August 2007 marking the entry of core support for user-space kernel code execution. By 2011, key milestones included a significant commit on March 31 at 23:59 UTC, solidifying rump kernel functionality for components like TCP/IP stacks and USB drivers, and experimental inclusion in NetBSD 5.0, leading to a stable version in NetBSD 6.0. Kantee's leadership culminated in his 2012 Ph.D. from Aalto University, where his dissertation, Flexible Operating System Internals: The Design and Implementation of the Anykernel and Rump Kernels, written under the supervision of Professor Heikki Saikkonen, provided the comprehensive theoretical and practical foundation for the project.

Milestones and Presentations

The Rump kernel achieved a significant milestone with its integration into NetBSD 6.1, released on May 18, 2013, which introduced stable support for running kernel drivers in userspace as part of the mainline distribution. This allowed developers to leverage Rump components for tasks like file system mounting via the new -o rump option in mount(8) and enhanced testing of drivers outside the monolithic kernel environment. In 2014, the release of Rumprun marked another key advancement, providing a unikernel framework built on Rump kernels to execute unmodified applications directly on hypervisors such as Xen and KVM without a traditional OS layer. This extension broadened the Rump kernel's utility beyond NetBSD, enabling lightweight, secure deployments in cloud and embedded contexts. Antti Kantee introduced the core concepts of the anykernel architecture and Rump kernels at FOSDEM 2013 in his presentation "The Anykernel and Rump Kernels," highlighting their potential for driver virtualization and cross-platform reuse. He followed up at FOSDEM 2014 with "Rump Kernels, Just Components," detailing progress in modularizing kernel drivers as interchangeable components for diverse hosting environments. These talks underscored the shift from experimental tools to practical infrastructure for operating system development. The project continued to evolve, gaining inclusion in NetBSD 10.0, released on March 28, 2024, where Rump kernels supported new userspace implementations like the WireGuard VPN implementation via wg-userspace(8), alongside ongoing maintenance for stability and portability. Rump kernel support was further integrated in NetBSD 10.1 (December 2024) and continued in the NetBSD 11.0 release, which entered its formal release process in August 2025. As of November 2025, the Rump kernel remains actively maintained within NetBSD, with core components integrated into the base system for production use, and supporting GitHub repositories hosting drivers, tools, and infrastructure for ongoing contributions and extensions.

Technical Design

Portability and Environments

The Rump kernel achieves high portability through its anykernel architecture, which allows unmodified NetBSD kernel drivers to operate across diverse host environments without requiring OS-specific adaptations. This is facilitated by a minimal hypercall interface called rumpuser, which provides essential resources like memory allocation, file descriptors, and I/O operations from the host system. As a result, Rump kernels can be compiled and linked as libraries or standalone components on platforms supporting C99, enabling seamless integration with approximately 1 million lines of battle-tested NetBSD driver code. For POSIX-compliant operating systems, Rump kernels offer robust userspace support on hosts including NetBSD itself (the original host), Linux, FreeBSD, DragonFly BSD, Solaris, and Cygwin. These environments leverage the POSIX system call compatibility of Rump components, allowing drivers for file systems, networking, and devices to run in userspace without modification. The integration typically requires only a few hundred lines of platform-specific code to implement the rumpuser interface, minimizing porting effort. In non-POSIX environments, Rump kernels extend to hypervisors such as Xen and KVM, microkernels like L4 (via the Genode OS framework), and bare metal hardware on architectures including x86 and ARM. On bare metal, bootstrap code handles initial setup, while hypervisors use Rump as a guest domain to provide device drivers without a full OS. This versatility stems from the componentized design, where drivers are extracted and linked independently, preserving their original semantics. Portability benefits are evident in practical scenarios, such as deploying file systems like FFS or NFS on non-NetBSD hosts for enhanced compatibility and isolation, or running TCP/IP stacks on bare metal systems without a traditional operating system. For instance, tools like fs-utils enable userspace mounting of NetBSD-formatted disks on Linux, reducing the need for cross-platform porting and improving security through sandboxed execution.
These capabilities have been demonstrated since the project's early years, supporting applications from unikernels to remote rump kernel servers.

Hypercall Mechanism

The hypercall mechanism in the Rump kernel establishes a stable, versioned application binary interface (ABI) known as rumpuser, which enables kernel components to request resources from the host environment, including CPU scheduling via host threads, memory allocation, and I/O operations. Defined in the header file sys/rump/include/rump/rumpuser.h, this ABI uses functions prefixed with rumpuser_ to maintain a closed namespace that avoids conflicts with host system symbols, a convention established in 2009 for compatibility with NetBSD's libc. All hypercalls conform to unambiguous types like int64_t for portability across hosts, with mandatory calls for core operations and optional ones for driver-specific needs. Unlike traditional kernel designs that rely on traps for privileged operations, the Rump kernel's hypercalls provide a paravirtualized interface—implemented as ordinary function calls locally, or over sockets for remote procedure calls (RPC) to rump kernel servers—allowing kernel code to execute in user space or as a lightweight guest without triggering host kernel context switches or interrupts. This replacement facilitates direct access to host services—such as memory via rumpuser_malloc—while the rump kernel manages its own virtual namespaces for processes, filesystems, and devices, effectively virtualizing the kernel environment atop the host. The interface supports unmodified kernel drivers by mapping their internal requests to these hypercalls, preserving ABI compatibility for system calls prefixed with rump_sys_.
Hypercalls are organized into key categories to cover essential interactions: system calls for initialization (rumpuser_init), error handling (rumpuser_seterrno), and clocks (rumpuser_clock_gettime); device I/O for file operations (rumpuser_open, rumpuser_close), block transfers (rumpuser_bio), and scatter-gather reads/writes (rumpuser_iovread, rumpuser_iovwrite); threading for creation (rumpuser_thread_create), joining (rumpuser_thread_join), and current identification (rumpuser_curlwp); and synchronization primitives including mutexes (rumpuser_mutex_*), read-write locks (rumpuser_rw_*), and condition variables (rumpuser_cv_*) that rely on host schedulers for blocking operations. These categories ensure comprehensive support for POSIX-style semantics while delegating resource-intensive tasks, like entropy collection (rumpuser_getrandom), to the host. This design yields several advantages, including strong isolation through separate address spaces and namespaces that prevent rump kernel interference with the host, even for untrusted components like file systems. Debugging is simplified by running in user space, allowing tools like GDB or Valgrind to inspect kernel code directly and using rumpuser_dprintf for precise logging without kernel panic handling. Furthermore, the stable ABI eliminates the need for recompiling kernel components when switching host environments, as host-specific implementations of librumpuser handle adaptations transparently. This mechanism underpins the Rump kernel's portability to various operating systems and hypervisors by abstracting platform differences into the hypercall layer.

Practical Uses

Rumprun Unikernel

Rumprun is a unikernel platform built using Rump kernel components to enable the creation of standalone, minimal executables that operate without a complete underlying operating system. It leverages modular Rump drivers to compile applications directly with only the essential kernel services, resulting in specialized, single-purpose OS images that boot directly into the target workload. Key features of Rumprun include compatibility with bare-metal execution, as well as support for hypervisors such as Xen and KVM, allowing deployment across diverse hardware environments like x86 and ARM systems. Users can construct custom images by integrating selected drivers—for example, combining the TCP/IP networking stack with a file system driver to support networked storage operations—while maintaining a POSIX-like interface for application compatibility. This approach supports multiple programming languages, including C, C++, Go, Python, and Rust, facilitating the transformation of unmodified applications into efficient unikernels. Rumprun finds application in resource-constrained scenarios such as embedded systems, cloud-based appliances, and secure environments, where its small footprint and fast boot times enhance performance and security. A representative use case involves deploying the NetBSD networking stack as a dedicated network appliance within a KVM instance on a Linux host, providing high-performance packet processing without the overhead of a full OS. The platform saw its initial release in 2014, with continued development through variants like the SMP-enabled rumprun-smp fork, which provides support for multicore environments. However, the main Rumprun repositories have not seen significant updates in years, and the project is considered experimental.

File System Utilities

The fs-utils subproject provides a suite of command-line utilities for manipulating file system images directly in user space, leveraging Rump kernel libraries to execute kernel-level code without requiring host kernel mounts or privileges. These tools enable safe access to disk images or block devices by opening them as regular files, thus avoiding any risk of corrupting the host and allowing operation on diverse host environments such as Linux and other POSIX systems. Key utilities include fsu_ls for listing directory contents, fsu_cp and fsu_mv for copying and moving files, fsu_cat for displaying file contents, fsu_rm for removal, fsu_chown and fsu_chmod for ownership and permission changes, as well as specialized tools like fsu_find, fsu_diff, fsu_ecp for cross-file-system copies, and fsu_write for direct writing. These commands mimic standard Unix utilities in syntax and behavior, facilitating familiar workflows for tasks such as inspecting or modifying images. Fs-utils supports a wide range of file systems through Rump's modular design, including block-device-based options like FFS (Fast File System), LFS (Log-structured File System), ext2, FAT, HFS, NTFS, and UDF; memory-based systems such as tmpfs; and network protocols like NFS. It also accommodates FUSE-based file systems, such as sshfs or ntfs-3g, and autodetects the file system type from the image contents via the UKFS (User-Kernel File System) library, which handles mounting and access in user space. This integration with Rump file system components allows seamless execution of verified kernel code, reducing the need for duplicated user-space implementations and enhancing portability across non-NetBSD hosts. The primary advantages of fs-utils stem from its user-space operation, which eliminates superuser requirements and host mounting risks, making it ideal for development, testing, and debugging of file systems or VFS code. For instance, developers can test new file system features on images without booting a full NetBSD system or risking host integrity.
The project has been publicly available on GitHub since 2013, with documentation and build instructions provided for integration.
