
Network block device

A network block device (NBD) is a client-server protocol that allows a system to access remote storage as a local block device over a TCP/IP network, enabling operations such as reading and writing blocks of data as if the device were attached directly to the client machine. This setup treats the remote server as a virtual disk, supporting filesystems, swap partitions, or other block-based uses without requiring specialized hardware. Developed by Pavel Machek, the NBD protocol originated in 1997 as part of Linux kernel development for version 2.1.55, initially to enable booting from or accessing remote storage in diskless environments. The protocol underwent revisions, with a formal specification documented in 2011, introducing structured handshakes and metadata querying; later revisions added optional features like TLS encryption to enhance security and flexibility. Today, NBD is maintained as a kernel module on the client side, while the server operates entirely in userspace, making it portable across operating systems including Windows. In operation, the client initiates a connection to the server (typically on TCP port 10809), negotiates capabilities during a handshake phase, and then sends I/O requests—such as reads, writes, or trims—in a transmission phase, with the server responding accordingly. Key features include support for multiple concurrent connections, configurable partition handling (up to 16 partitions per device by default), and up to 16 devices (configurable via parameters like nbds_max). Common implementations include the nbd-client and nbd-server tools from the official NBD project, which facilitate exporting files or disks for remote mounting, though NBD alone is generally not recommended for production use due to its limited fault tolerance and lack of built-in redundancy compared to distributed storage systems such as Ceph.

Introduction

Definition and Purpose

A network block device (NBD) is a client-server protocol and software mechanism in the Linux kernel that enables a client system to access a remote storage device over a TCP/IP network as if it were a local block device. This allows direct block-level read and write operations on the remote storage, treating it like a standard disk without the need for translation at the network layer. The primary purpose of NBD is to provide flexible remote storage solutions, such as diskless booting for thin clients, remote backups, and shared storage in clustered or virtualized environments. Unlike file-level protocols like NFS, which operate on files and directories with higher-level semantics, NBD works at the block level to emulate direct disk access, supporting any filesystem that the client can handle without restrictions imposed by the network protocol. Key benefits of NBD include low overhead for block I/O operations compared to file-sharing protocols, full compatibility with standard Linux block device interfaces such as /dev/nbdX, and the ability to resize the device and underlying file system without reformatting, provided the file system supports online resizing. These features make NBD suitable for efficient, transparent remote storage integration. NBD was initially introduced in the Linux kernel version 2.1.55 in April 1997 by Pavel Machek.

Historical Development

The network block device (NBD) originated in 1997 when Pavel Machek developed it as a patch for the Linux kernel version 2.1.55, aiming to provide access to remote block devices for experimental setups like diskless workstations. This initial implementation allowed a client machine to treat a remote server's block device as a local one over TCP/IP, marking an early effort to enable distributed storage in networked environments. The code was first compiled and tested that year, with Machek holding the copyright. NBD was integrated into the mainline kernel during the 2.1 development series, becoming available from kernel version 2.1.101 onward. Significant enhancements followed in the early 2000s, including contributions from Steven Whitehouse in 2001 for compatibility with the evolving block layer, such as updates to the request completion handling in nbd_end_request(). By the 2.6 kernel series (starting 2003), NBD saw further refinements to align with the new block I/O framework, improving reliability and integration, though it remained primarily single-threaded and single-connection at this stage. The user-space nbd-server tool, initially part of Machek's work, was packaged for Debian in 2001 by Wouter Verhelst, who later assumed upstream maintenance in the mid-2000s and introduced stability and configuration improvements over the following years. NBD has found use in high-availability storage projects, where it can be used to access remote block devices in clustered environments, similar to setups involving DRBD or MD RAID for replication. Over time, NBD evolved from its single-threaded origins to support more efficient I/O patterns. A key milestone was the addition of multi-connection support in Linux kernel 4.9 (2016), enabling concurrent reads and writes across multiple TCP connections to reduce contention and improve throughput in networked scenarios. This facilitated integration with virtualization frameworks like virtio, where NBD backends in tools such as QEMU expose remote block devices to virtual machines via the virtio-blk driver for paravirtualized performance. Further advancements in the kernel 5.x series (post-2019) enhanced asynchronous handling through the multi-connection model, allowing better scalability for I/O-intensive workloads. As of November 2025, NBD remains actively maintained in the Linux kernel, with the latest stable release being version 6.17, which includes continued optimizations for block devices. NBD has also been used in cloud-native applications, such as mapping Ceph RBD volumes in Kubernetes persistent volumes as local devices for containerized workloads. In 2024-2025, updates to the libnbd library addressed security issues in NBD+SSH handling, improving safety for networked block access.

Protocol Specifications

Client-Server Model

The Network Block Device (NBD) protocol employs a client-server architecture to enable remote access to block storage over a network. In this model, the server exports a block device—such as a file, disk partition, or virtual disk—making it available for remote connections, while the client connects to the server and presents the remote device as a local block device, typically under paths like /dev/nbd0 in Linux environments. The communication relies on TCP as the transport layer, utilizing the default port 10809, which is the IANA-assigned port for NBD. This setup allows clients, often in diskless or resource-constrained systems, to leverage remote storage as if it were locally attached. The connection process begins with the client establishing a TCP connection to the server. During the subsequent negotiation phase, the client sends option requests, including specifications for the export name, minimum and preferred block sizes (with a minimum of 512 bytes and 4 KiB commonly preferred), and flags indicating support for optional features such as trim operations. The server responds by confirming the selected options, providing the exported device's size in bytes, and advertising its capabilities, such as read-only mode or flush support, through protocol flags. This ensures compatibility and configures the session before transitioning to data transmission. Once connected, data flows bidirectionally over the established stream, with the client issuing commands and the server generating corresponding replies. The protocol operates on fixed-size blocks for reads, writes, and other operations, aligning with standard block device semantics to maintain compatibility with local filesystems. Error conditions, such as server-side failures, are propagated to the client via standardized error codes, including EIO for general I/O errors, enabling graceful handling of issues like device unavailability. The original NBD protocol, developed informally without a formal specification, follows a simple structure for basic command-response interactions. Subsequent evolutions introduced the "newstyle" handshake in nbd 2.9.17 and, starting around 2015, fixed newstyle extensions that support structured replies, allowing servers to send metadata alongside data for optimizations like sparse reads and status queries. These enhancements improve efficiency without altering the core client-server flow. Specific commands, such as read and write requests, are detailed in the transmission phase that follows negotiation.
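As an illustration of this negotiation from the client's point of view, the following commands (a sketch assuming a server is already exporting data on the default port of a host reachable as nbd.example.com, and that the nbd-client and libnbd tools are installed; the export name mydisk is a placeholder) query what the server advertises during the handshake:
nbd-client -l nbd.example.com              # ask the server to list its export names
nbdinfo --list nbd://nbd.example.com       # libnbd tool: enumerate exports with their metadata
nbdinfo nbd://nbd.example.com/mydisk       # show size, block sizes, and flags for one export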

Command Structure and Data Handling

The Network Block Device (NBD) protocol defines a set of commands that enable block-level operations over a network connection. The core commands include NBD_CMD_READ (type 0), which retrieves data from a specified offset and length; NBD_CMD_WRITE (type 1), which sends data to be written at the given offset; NBD_CMD_DISC (type 2), which the client sends to announce a disconnect without expecting a reply; and NBD_CMD_FLUSH (type 3), which ensures all prior writes are committed to stable storage before replying. An optional command, NBD_CMD_TRIM (type 4), allows discarding data in a range, supported only if the server advertises the NBD_FLAG_SEND_TRIM capability during negotiation. Additional extension commands, negotiated during the handshake, include NBD_CMD_CACHE (type 6) for pre-reading data into the server cache, NBD_CMD_WRITE_ZEROES (type 7) for efficiently zeroing blocks without transferring data, and NBD_CMD_BLOCK_STATUS (type 8) for querying block allocation or other metadata. Each command begins with a fixed header format transmitted in network byte order: a 32-bit magic number (0x25609513, known as NBD_REQUEST_MAGIC) to identify valid requests, followed by 16-bit command flags (typically 0 for basic operations, or NBD_CMD_FLAG_FUA for forced unit access on writes to ensure immediate persistence), a 16-bit type indicating the command, a 64-bit handle (a client-chosen identifier echoed in replies), a 64-bit offset (the starting byte on the export), and a 32-bit length (the number of bytes to transfer). For NBD_CMD_WRITE, the header is immediately followed by exactly length bytes of payload data to write; other commands have no immediate payload in the request. Replies to commands (except NBD_CMD_DISC) use a simple structure in the base protocol: a 32-bit magic number (0x67446698, NBD_SIMPLE_REPLY_MAGIC), a 32-bit error field (0 for success, or a POSIX-like errno such as 1 for EPERM), and the 64-bit handle matching the request. For NBD_CMD_READ, the reply header is followed by the requested length bytes of data; writes, flushes, and trims yield only the header if successful. In the structured-reply extension (negotiated during the handshake via the NBD_OPT_STRUCTURED_REPLY option), replies may use a structured format with magic 0x668e33ef (NBD_STRUCTURED_REPLY_MAGIC), including 16-bit reply flags (e.g., NBD_REPLY_FLAG_DONE to indicate completion), a 16-bit type identifying the chunk kind (e.g., data or error details), the 64-bit handle, a 32-bit length, and the payload itself, allowing segmented or metadata-rich responses for commands like block status queries. Data handling in NBD occurs as a byte stream over TCP, with clients and servers recommended to disable Nagle's algorithm (via TCP_NODELAY) for low-latency transfers. Servers must support concurrent requests, processing them asynchronously without assuming order, but ensuring that writes preceding a flush are durably stored before the flush reply; clients track requests via unique handles to match replies, which may arrive out of sequence. To prevent partial operations and ensure efficiency, offsets and lengths must be multiples of the negotiated minimum block size (default 1 byte, but often 512 or 4096 bytes based on server advertisement), with a preferred block size for optimal throughput and a maximum limit of 32 MiB per request to bound buffer usage.
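As a concrete, hedged illustration of this wire format, the following shell snippet (assuming a bash or GNU printf that understands \x escapes; the handle, offset, and length values are arbitrary examples) builds the 28-byte header of an NBD_CMD_READ request for 4096 bytes at offset 0 and dumps it in hex:
{
  printf '\x25\x60\x95\x13'                    # 32-bit NBD_REQUEST_MAGIC
  printf '\x00\x00'                            # 16-bit command flags (none)
  printf '\x00\x00'                            # 16-bit type = 0 (NBD_CMD_READ)
  printf '\x00\x00\x00\x00\x00\x00\x00\x01'    # 64-bit handle chosen by the client
  printf '\x00\x00\x00\x00\x00\x00\x00\x00'    # 64-bit offset = 0
  printf '\x00\x00\x10\x00'                    # 32-bit length = 4096
} > /tmp/nbd_read_request
xxd /tmp/nbd_read_request                      # inspect the 28 header bytes in network byte order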

Linux Implementation

Kernel Integration

The Network Block Device (NBD) is integrated into the Linux kernel through the nbd.ko module, which serves as a block device driver enabling clients to access remote storage as local devices. This module can be loaded dynamically using modprobe nbd with optional parameters such as max_part to specify the maximum number of partitions per device (e.g., max_part=8 for up to 8 partitions; default 16 per kernel source) and nbds_max to set the total number of available NBD devices (defaulting to 16 if unspecified). Upon loading, the driver registers block devices under /dev/nbdX (where X ranges from 0 to the configured maximum), utilizing the kernel's IDR allocator to manage device indices and a mutex for synchronization during allocation. The driver mechanics center on seamless integration with the kernel's block layer, where it operates as a request-based driver. Incoming I/O operations are received as bio (block I/O) structures from the upper layers, such as filesystems or applications, and enqueued in the device's request queue. The driver then processes these requests by serializing them into NBD commands, forwarding them over the configured sockets to the NBD server for execution, and awaiting replies to complete the bios accordingly. This forwarding occurs via dedicated send and receive functions (nbd_send_cmd and nbd_handle_reply) that handle request serialization, reply matching, and error propagation, ensuring compatibility with the block layer's submission and completion model without blocking the calling threads. Key features include support for multi-queue I/O through the blk-mq (block multi-queue) framework, introduced in kernel 4.9 via the addition of multi-connection capabilities, which allow multiple concurrent sockets per device to distribute load and reduce contention in multi-threaded workloads. Dynamic resizing is facilitated by ioctls such as NBD_SET_SIZE to update the device's size in bytes, followed by NBD_SET_SOCK to bind a new socket and NBD_DO_IT to initiate or resume the I/O loop, enabling on-the-fly adjustments without full reconnection in some cases. Error handling emphasizes robustness during failures, particularly disconnects: the driver marks affected sockets as dead (nbd_mark_nsock_dead), flushes the request queue by canceling inflight I/Os (nbd_clear_que), and invalidates the backing block device (invalidate_disk) to propagate errors upward while preventing hangs.
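For example, the module parameters described above can be set at load time and read back through sysfs (a sketch; the values shown are arbitrary):
modprobe nbd nbds_max=4 max_part=8          # register only /dev/nbd0-/dev/nbd3, up to 8 partitions each
ls /dev/nbd*                                # confirm the block device nodes created by the driver
cat /sys/module/nbd/parameters/nbds_max     # read back the active parameter values
cat /sys/module/nbd/parameters/max_part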

User-Space Components

The user-space components of NBD primarily consist of the server daemon and client utilities that facilitate the export of, and connection to, remote block devices over the network. These tools operate entirely outside the kernel, allowing for flexible deployment without requiring privileged kernel modifications. The primary daemon, nbd-server, is part of the NBD package and enables the export of local files or entire disks as block devices accessible via the NBD protocol. It listens on a specified port, typically 10809, and supports exporting multiple devices simultaneously. Configuration is managed through the /etc/nbd-server/config file, which defines exports using directives such as port for the listening port, exportname to specify the path of the exported file or device, and authentication options like authfile for access restrictions (specifying a file with allowed addresses or CIDR notations, e.g., containing 192.168.1.0/24) or TLS for encrypted connections via certfile, keyfile, and related options. For instance, a configuration might specify an export section like [mydisk] with exportname = /path/to/disk, allowlist = true, and authfile = /etc/nbd-server/allow. This daemon handles read/write requests from clients, translating them to local file operations while supporting features like copy-on-write for efficient snapshots using sparse files. Recent updates as of 2025 include security enhancements in libnbd for NBD+SSH URIs. On the client side, nbd-client provides the core utility for establishing connections to an NBD server, mapping the remote export to a local block device such as /dev/nbd0. A typical invocation is nbd-client <server-ip> <port> /dev/nbd0, which negotiates the connection and enables subsequent filesystem mounting or disk usage. This tool supports integration with image formats like QCOW2 by connecting to servers that export such files, allowing clients to treat virtual disk images as raw block devices without format-specific handling in the client itself. For more advanced programmatic access, the libnbd library offers a C-based API to interact with NBD servers, supporting operations like opening, reading, and writing to exports while handling protocol details such as structured replies and TLS. In Debian-based distributions like Ubuntu, the NBD tools are available via the nbd-client and nbd-server packages, installable with apt install nbd-client or apt install nbd-server. These packages include systemd service units for automatic startup and management of connections at boot, configurable via /etc/nbd-client/nbdtab for predefined mappings. For enhanced flexibility, nbdkit serves as an advanced, plugin-based NBD server that allows custom backends through a stable C API. It supports diverse sources such as in-memory disks via the memory plugin or logical volume manager (LVM) volumes, enabling tailored implementations like caching or filtering without recompiling the core server. Plugins are selected at invocation time, e.g., nbdkit file file=disk.img, and appropriate plugins or filters can serve formats such as QCOW2.
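A minimal user-space round trip, sketched under the assumption that nbdkit, nbd-client, and the nbd kernel module are all available on the same machine, might look like the following:
nbdkit memory size=1G                    # export a 1 GiB RAM-backed disk on the default port 10809
modprobe nbd                             # load the client-side kernel driver
nbd-client localhost 10809 /dev/nbd0     # attach the export as a local block device
mkfs.ext4 /dev/nbd0                      # put a filesystem on the remote (here: in-memory) disk
mount /dev/nbd0 /mnt                     # use it like any local disk
umount /mnt && nbd-client -d /dev/nbd0   # detach cleanly when finished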

Configuration and Usage

Server Setup

To set up an NBD server on a Linux system, begin by installing the necessary package, which provides the user-space daemon for exporting block devices over the network. On Debian-based distributions such as Ubuntu, this can be achieved using the package manager with the command apt install nbd-server, which installs the server tools packaged from the official NBD project. For systems without pre-built packages, compile from source by cloning the repository at https://github.com/NetworkBlockDevice/nbd, running ./autogen.sh (if building from Git), followed by ./configure, make, and make install, ensuring dependencies like docbook2man for SGML processing are available. The NBD kernel module is not required on the server side, as the server operates entirely in user space. Next, prepare the export by selecting a backing store, such as a regular file, a disk partition (e.g., /dev/sda1), or a loop device for testing purposes. For a file-based export, create an empty file of the desired size using dd if=/dev/zero of=/path/to/exportfile bs=1M count=1024 to allocate 1 GiB, then optionally format it with a filesystem like mkfs.ext4 /path/to/exportfile if needed for later client use. Ensure the export path has appropriate permissions, typically owned by the nbd user and group created during package installation, to allow the server process to access it without elevated privileges. Configuration is managed primarily through /etc/nbd-server/config, which defines global settings and individual exports in an INI-like format with sections in square brackets and options as key = value pairs. In the [generic] section, enable export listing for clients by setting allowlist = true, specify the user and group with user = nbd and group = nbd, and optionally set the listening port with port = 10809 (the default). For IP-based access control, use the authfile option in each export section. For each export, create a dedicated section named after the export (e.g., [mydisk]), including the mandatory exportname = /path/to/exportfile to point to the backing store, and reference an authentication file with authfile = /etc/nbd-server/allow containing permitted client IPs or networks in CIDR notation (e.g., 192.168.1.0/24 or 127.0.0.1). A sample configuration might appear as follows:
[generic]
allowlist = true
user = nbd
group = nbd
port = 10809

[mydisk]
exportname = /path/to/exportfile
authfile = /etc/nbd-server/allow
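The authentication file referenced above would then contain one permitted address or network per line; the entries shown here are examples:
# /etc/nbd-server/allow
127.0.0.1
192.168.1.0/24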
To support secure connections, enable TLS in the [generic] section with force_tls = true, requiring server certificates configured via additional options like certfile and keyfile, though this mandates prior setup of TLS infrastructure. Start the server daemon using systemd with systemctl start nbd-server after editing the config, or manually via nbd-server -C /etc/nbd-server/config for testing, adding -d for debug mode (which runs in the foreground without forking). The server binds to the specified port on all interfaces by default, listening for TCP connections from clients; to restrict it to a specific IP, use the command-line form [ip@]port, such as nbd-server 192.168.1.100@10809 -C /etc/nbd-server/config. For reconfiguration without a restart, send a SIGHUP signal to the process. Verify connectivity by testing the port with nc -zv server_ip 10809 from another host, which should report success if the server is listening and the firewall permits inbound traffic on that port. Common pitfalls include firewall restrictions blocking port 10809 (addressed by rules like ufw allow 10809/tcp on UFW-enabled systems or equivalent iptables/nftables rules), insufficient permissions on the export file leading to failed exports, and exceeding the default maximum number of connections (configurable with -M or in the config file) during high load. For large exports approaching system limits, monitor resource usage, as the server streams data directly from the backing store without built-in caching.
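The following commands sketch a typical start-and-verify sequence on a Debian-style system (the addresses and network shown are placeholders):
systemctl enable --now nbd-server                           # start the daemon and enable it at boot
ss -tlnp | grep 10809                                       # confirm it is listening on the NBD port
nc -zv 192.168.1.100 10809                                  # from a client, check TCP reachability
ufw allow from 192.168.1.0/24 to any port 10809 proto tcp   # optionally open the port to a trusted subnet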

Client Mounting and Management

On the client side, the network block device (NBD) is managed through the nbd kernel module and user-space tools, allowing remote block storage to appear as a local block device. To establish a connection, the nbd module must first be loaded using the command modprobe nbd, which initializes support for up to 16 NBD devices by default (configurable via the nbds_max module parameter). Once loaded, the nbd-client utility connects to the remote server, mapping the export to a local device node such as /dev/nbd0. The basic command is nbd-client <host> <port> /dev/nbd0, where the default port is 10809 if unspecified; the -persist option enables automatic reconnection on network interruptions, ensuring reliability for ongoing operations. After connecting, partitions on the device can be detected without rebooting by running partprobe /dev/nbd0, which informs the kernel of any partition table changes. For mounting, the connected NBD device functions like any local block device and supports standard filesystems such as ext4 or XFS. If the device is new or unformatted, create a filesystem with mkfs.ext4 /dev/nbd0 (or mkfs.xfs /dev/nbd0 for XFS), which formats the remote storage over the network. Subsequently, mount a partition—e.g., mount /dev/nbd0p1 /mnt—to access the filesystem locally; the _netdev option can be added for network-dependent mounts to delay mounting until network availability. Ongoing management includes disconnection via nbd-client -d /dev/nbd0, which cleanly severs the link and makes the device node available for reuse. To resize an NBD device after the server has updated the export size, write the new size in 512-byte sectors to the sysfs interface: echo <new_size_in_sectors> > /sys/block/nbd0/size, followed by partprobe to update partitions if applicable. Automation of boot-time operations often involves init scripts or entries in /etc/fstab, such as /dev/nbd0p1 /mnt ext4 defaults,_netdev 0 2, which ensures mounting occurs after network initialization; custom scripts can invoke nbd-client prior to mounting. Monitoring uses tools like blkid /dev/nbd0 to retrieve filesystem UUIDs or labels for persistent identification, while udev rules (e.g., in /etc/udev/rules.d/) can trigger actions like auto-mounting upon device detection via patterns matching KERNEL=="nbd*".
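Put together, a typical attach-use-detach cycle on the client might look as follows (the server address, device node, and mount point are examples):
modprobe nbd
nbd-client 192.168.1.100 10809 /dev/nbd0 -persist   # connect and keep reconnecting on interruptions
partprobe /dev/nbd0                                 # re-read the partition table, if any
mount /dev/nbd0p1 /mnt                              # mount an existing partition
# ... use the filesystem ...
umount /mnt
nbd-client -d /dev/nbd0                             # disconnect and free the device node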

Alternatives and Comparisons

Similar Network Storage Protocols

Several protocols offer network-based access to block storage, serving as alternatives to NBD by enabling remote disk-like access over networks. These include iSCSI, ATA over Ethernet (AoE), and NVMe-oF, each tailored to specific use cases such as enterprise storage, local area networks, or high-performance data centers. iSCSI, or Internet Small Computer Systems Interface, encapsulates SCSI commands within TCP/IP packets to provide block-level storage access over IP networks. Defined in RFC 3720, it operates in a client-server model with initiators (clients) sending SCSI commands to targets (servers), supporting features like session management and error recovery. Authentication is handled via CHAP (Challenge-Handshake Authentication Protocol), which verifies initiators and targets using shared secrets during login. AoE, or ATA over Ethernet, delivers block storage directly over Ethernet frames without relying on IP or TCP, reducing protocol overhead for low-latency access. Developed by Coraid and introduced as an open protocol around 2004, it addresses ATA commands to Ethernet-attached targets, making it suitable for local area networks where devices are confined to the same broadcast domain. AoE lacks built-in routing capabilities, limiting its scope to local environments. NVMe-oF, or NVMe over Fabrics, extends the NVMe command set—originally designed for local PCIe-attached SSDs—across network fabrics like RDMA (over Ethernet or InfiniBand), Fibre Channel, or TCP to enable remote, high-speed block access. Specified by the NVM Express consortium starting with version 1.0 in 2016, it supports low-latency operations in data centers by leveraging these fabrics for efficient queue management and data transfer, often achieving near-local performance.

Key Differences from NBD

The Network Block Device (NBD) protocol differs from iSCSI primarily in its simplicity and lack of integration with the SCSI command set. While iSCSI encapsulates SCSI commands over TCP/IP, enabling compatibility with enterprise storage area networks (SANs) and features like multipath I/O for redundancy and load balancing, NBD employs a lightweight, custom protocol with basic read, write, and trim operations directly over TCP, avoiding the overhead of SCSI emulation. This makes NBD easier to configure and deploy in resource-constrained environments, but it forgoes advanced capabilities such as CHAP authentication, session management, and native multipath support, which require additional tools like device-mapper-multipath for NBD equivalents. In a 2003 benchmark using early hardware, NBD showed lower CPU utilization than iSCSI, though iSCSI often delivered higher throughput in single-server setups; NBD could surpass this by distributing load across multiple servers. iSCSI's protocol layer introduces modest latency penalties but tolerates higher network delays in production deployments, suiting enterprise scenarios where robustness outweighs NBD's simplicity. In contrast to file-level protocols like NFS and CIFS (SMB), NBD operates at the block level, presenting remote storage as a raw block device for direct I/O access without an intervening filesystem layer on the client. This enables NBD to support any local filesystem (e.g., ext4) transparently, achieving faster mount times than NFS's file-oriented sharing, which relies on remote procedure calls over UDP or TCP (TCP only in NFSv4) and incurs overhead from pathname resolution and attribute caching. However, NBD lacks built-in mechanisms for multi-client concurrency, such as NFS's locking and lease-based coordination, making it unsuitable for shared writes across multiple clients without external filesystem-level locking (e.g., via a cluster-aware filesystem such as GFS2 on a shared block device), as simultaneous block modifications can lead to data corruption. CIFS adds Windows-specific semantics like opportunistic locking, further emphasizing file-level semantics over NBD's raw block exposure, which prioritizes single-client or mirrored setups like diskless workstations. Compared to NVMe-oF, NBD's TCP-based transport results in higher CPU overhead due to software-managed data transfers, lacking NVMe-oF's support for RDMA (e.g., over RoCE or InfiniBand) that bypasses the CPU for data movement and achieves sub-millisecond latencies in high-performance computing (HPC) environments. NVMe-oF leverages the NVMe command queue model for parallel I/O, delivering superior throughput and efficiency on modern SSD arrays, but it demands specialized NICs and fabrics, increasing complexity and cost. NBD, with its minimal command set, suits simpler TCP-only networks but cannot match NVMe-oF's scale for low-latency, high-IOPS workloads. These trade-offs position NBD as ideal for lightweight applications, such as embedded or Android devices where its low overhead aids storage management and power efficiency, or diskless clients borrowing remote disks without dedicated local storage. In contrast, iSCSI, NFS/CIFS, and NVMe-oF scale better for production storage arrays, supporting multipath redundancy, concurrent access, and HPC demands in enterprise or clustered environments.

Security Considerations

Vulnerabilities and Risks

The Network Block Device (NBD) protocol operates over unencrypted connections by default, exposing data transfers to eavesdropping and man-in-the-middle (MITM) attacks in which an attacker can intercept or alter I/O operations without detection. Although optional TLS support via the NBD_OPT_STARTTLS extension provides encryption and authentication using client/server certificates, the core protocol lacks built-in security mechanisms, requiring explicit configuration for secure channels. NBD servers are susceptible to denial-of-service (DoS) attacks, as the basic nbd-server implementation does not include rate limiting, allowing attackers to overwhelm the server with excessive connections, malformed commands, or oversized requests that consume resources or cause crashes. For instance, improper handling of name length fields in NBD requests can trigger server crashes, enabling a remote denial of service by a malicious client. On the client side, mounting an NBD export as /dev/nbdX grants root-level access to the remote block storage, effectively exposing the entire remote filesystem to local privileged users and increasing the risk of unauthorized data access or corruption if the client is compromised. Historical vulnerabilities, such as the buffer overflow in the userspace nbd-server.c (CVE-2011-0530) caused by improper bounds checking on overly long inputs, have allowed remote code execution on the server or crashes of the daemon. As of November 2025, ongoing vulnerabilities in NBD, including use-after-free errors in the nbd_genl_connect() error path (CVE-2025-38443) exploitable by local privileged users with access to NBD management interfaces, highlight persistent risks of memory corruption or system instability. Additionally, in October 2025, a kernel fix (CVE-2025-40080) restricted the socket types accepted by the NBD driver to mitigate abuse of unsupported types. Integration with container runtimes like Docker or Kubernetes can amplify these issues, as misconfigured NBD mounts within containers may enable lateral movement across hosts by providing direct block-level access to shared remote storage.

Mitigation Strategies

To mitigate risks associated with unencrypted data transmission in NBD setups, encryption can be implemented using Transport Layer Security (TLS). The NBD protocol includes support for upgrading connections to TLS via the NBD_OPT_STARTTLS option during negotiation, allowing both authentication and encryption of block device traffic over TCP. User-space NBD servers such as nbdkit, when compiled with GnuTLS support, enable TLS enforcement with options like --tls=require to reject non-TLS connections, supporting X.509 certificates for certificate-based authentication or Pre-Shared Keys (PSK) for simpler credential-based verification. On the client side, tools like nbd-client integrate TLS via GnuTLS, requiring specification of client certificates, private keys, and CA certificate files (e.g., via the -certfile, -keyfile, and -cacertfile options) to establish secure sessions, with default prioritization of TLS 1.2 or higher. For legacy or custom NBD servers lacking native TLS, external wrappers like stunnel or socat can encapsulate the NBD port (default 10809) in a TLS tunnel, providing encryption without modifying the core protocol. Authentication in NBD is not built into the core protocol, necessitating layered approaches to prevent unauthorized access. TLS-based methods, such as client certificate verification in nbdkit (--tls-verify-peer), ensure only trusted clients connect by validating certificates against a CA. Alternatively, SSH tunneling can secure NBD traffic by forwarding the port through an encrypted SSH connection (e.g., ssh -L 10809:localhost:10809 user@server), combining authentication via SSH keys or passwords with network isolation. Virtual Private Networks (VPNs), such as those using WireGuard or OpenVPN, encapsulate NBD traffic entirely within an authenticated tunnel, restricting access to VPN peers and adding encryption and peer authentication. Third-party proxies like those in OpenBMC's jsnbd project can extend authentication by integrating Pluggable Authentication Modules (PAM) for username/password checks before forwarding requests. Network-level controls further harden NBD deployments against unauthorized access and exploitation. Firewall rules, such as using Uncomplicated Firewall (ufw) to allow connections only from trusted IP addresses (e.g., ufw allow from 192.168.1.0/24 to any port 10809), limit exposure to specific clients or subnets. Running the NBD server in a chroot jail restricts its filesystem access, while containerization with tools like Docker or Podman isolates the process; additionally, AppArmor or SELinux profiles can enforce mandatory access controls, confining the server to minimal privileges (e.g., SELinux's nbd_t type for targeted policies). Best practices emphasize ongoing maintenance and isolation to address known vulnerabilities. Regular updates are critical, as patches for NBD-related issues—such as the 2023 fix for incomplete ioctl argument validation that could enable denial-of-service (DoS) attacks (CVE-2023-53513)—prevent crashes or resource exhaustion from malformed requests. Enabling audit logging with auditd captures NBD socket and device events for forensic analysis (e.g., via rules in /etc/audit/rules.d/ targeting /dev/nbd*), aiding in detection of suspicious activity. Deployments should avoid public internet exposure, instead confining NBD to private VLANs or isolated network segments to reduce the attack surface, ensuring only authenticated and firewalled internal traffic reaches the service.
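These measures can be combined; the following sketch (hostnames, networks, and the certificate directory are placeholders) layers a TLS-only server, an SSH-tunnel alternative, source filtering, and audit logging around an NBD service:
nbdkit --tls=require --tls-certificates=/etc/pki/nbd memory size=1G   # serve only TLS-protected connections
ssh -N -L 10809:localhost:10809 user@nbd.example.com                  # or tunnel the NBD port over SSH instead
ufw allow from 192.168.1.0/24 to any port 10809 proto tcp             # restrict which networks may reach the port
auditctl -w /dev/nbd0 -p rwa -k nbd_access                            # log access to the client device node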
