
DragonFly BSD

DragonFly BSD is a free and open-source operating system derived from the 4.4BSD-Lite codebase, forked from FreeBSD 4.8 in June 2003 by developer Matthew Dillon. It targets the AMD64 (x86_64) architecture, running on x86_64-compatible hardware from entry-level to high-end systems, and emphasizes high performance, stability, and scalability through innovations in multi-processor support and kernel design. The project diverges from other BSD variants such as FreeBSD, NetBSD, and OpenBSD by prioritizing pragmatic advancements, such as minimal lock contention in the kernel and efficient symmetric multiprocessing (SMP), while maintaining a lean and customizable base system suitable for servers, desktops, education, and research.

Originating from Dillon's frustrations with the delayed FreeBSD 5 series and its performance regressions, DragonFly BSD was announced in July 2003 with goals to preserve the reliability of the FreeBSD 4 branch while introducing modern features like lightweight kernel threads and a revised virtual file system (VFS) layer. Key architectural innovations include virtual kernels (vkernels), which allow lightweight, isolated instances of the kernel for resource partitioning and debugging without full virtualization overhead, and the NVMM module for native type-2 hypervisor support. The system employs a message-passing inter-process communication model inspired by Mach for improved modularity, though it retains a monolithic kernel structure overall. A defining feature is the HAMMER2 filesystem, the default in recent releases, which supports volumes up to 1 exbibyte, unlimited snapshots for backups and versioning, built-in compression, deduplication, and automatic integrity checks without requiring tools like fsck after crashes. Hardware compatibility has expanded to include the amdgpu driver for modern AMD GPUs, AHCI and NVMe storage controllers, and swapcache mechanisms to optimize SSD usage.

For package management, DragonFly uses DPorts, a ports collection adapted from FreeBSD's, with the latest synchronization based on the 2024Q3 branch and ongoing work toward 2025Q2 integration; binary packages are available via pkg(8). As of May 2025, the current stable release is version 6.4.2, which includes bug fixes for the installer, IPv6 handling, and userland utilities, alongside enhancements for hypervisor compatibility. The project remains actively developed by a community focused on long-term stability and innovative storage solutions, with ongoing efforts in areas like scalable temporary filesystems (TMPFS) and precise time synchronization via DNTPD.

Origins and History

Fork from FreeBSD

DragonFly BSD originated as a fork of FreeBSD 4.8, initiated by Matthew Dillon on June 16, 2003. Dillon, a longtime FreeBSD contributor since 1994, sought to preserve the stability and performance of the FreeBSD 4.x series amid growing dissatisfaction with the project's direction for FreeBSD 5, which involved significant architectural changes leading to release delays and performance regressions. By forking from the RELENG_4 branch, DragonFly aimed to evolve the 4.x codebase independently, unconstrained by FreeBSD's shift toward fine-grained kernel locking built on mutexes. The initial development goals emphasized innovative kernel redesigns to enhance scalability and reliability, including the introduction of lightweight kernel threads (LWKT) for improved symmetric multiprocessing (SMP) performance through lock-free synchronization and partitioning techniques. A key focus was exploring native clustering capabilities with cache coherency, enabling a single system image (SSI) via message-passing mechanisms that decoupled execution contexts from address spaces, targeting both high-performance servers and desktop environments. These objectives positioned DragonFly as an experimental platform for advanced BSD concepts while backporting select FreeBSD 5 features, such as device drivers, to maintain compatibility.

Initially, DragonFly continued using FreeBSD's Ports collection for package management, as it was forked from FreeBSD 4.8. Starting with version 1.4 in 2006, the project adopted NetBSD's pkgsrc to leverage a portable, cross-BSD framework for third-party software, which facilitated resource sharing and eased maintenance for a smaller development team. This choice supported the project's goal of rewriting the packaging infrastructure to better suit its evolving kernel and userland. Dillon publicly announced DragonFly BSD on July 16, 2003, via the FreeBSD-current mailing list, describing it as the "logical continuation of the FreeBSD 4.x series" and inviting contributions to advance its kernel-focused innovations. The announcement highlighted immediate priorities like SMP infrastructure and I/O pathway revisions, setting the stage for DragonFly's distinct trajectory within the BSD family.

Development Philosophy and Milestones

DragonFly BSD's development philosophy emphasizes minimal lock contention to achieve high concurrency, drawing from UNIX principles while prioritizing scalability, filesystem coherency, and system reliability. Initially focused on native clustering support with cache coherency mechanisms, the project shifted post-fork to enhance single-system performance through innovative locking strategies, such as token-based serialization, which allows efficient access to shared data without traditional mutex overhead. This approach promotes maintainability by reducing complexity in kernel subsystems and enables features like virtual kernels, which run full kernel instances as user processes to facilitate experimentation and resource isolation without compromising the host system. Under the leadership of founder Matthew Dillon, a veteran BSD contributor, DragonFly BSD operates as a community-driven project where code submissions are reviewed collaboratively, with Dillon holding final approval to ensure alignment with core goals. The philosophy underscores pragmatic innovation, favoring algorithmic simplicity and performance over legacy compatibility, which has guided ongoing efforts toward supporting exabyte-scale storage via filesystems like HAMMER and HAMMER2 and adapting to modern hardware, including multi-core processors and advanced graphics. Key milestones reflect this evolution: from 2003 to 2007, extensive kernel subsystem rewrites laid the foundation for clustering while improving overall architecture. In 2008, DragonFly 2.0 introduced the HAMMER filesystem, enhancing data integrity with features like snapshots and mirroring. By late 2011, fine-grain locking in the VM system significantly boosted multi-core efficiency. Subsequent achievements included porting the Linux DRM subsystem for accelerated graphics between 2012 and 2015, scaling the PID, PGRP, and session subsystems for SMP in 2013, and optimizing fork/exec/exit/wait mechanisms in 2014 to support higher concurrency. In 2017, version 5.0 introduced the HAMMER2 filesystem for advanced storage capabilities including unlimited snapshots and built-in compression. Version 6.0 followed in 2021 with a revamped VFS cache, and the 6.2 series in 2022 added the NVMM hypervisor module. As of May 2025, the current stable release is 6.4.2.

Kernel Architecture

Threading and Scheduling

DragonFly BSD employs a threading model that separates kernel and user space scheduling for enhanced isolation and performance. The kernel utilizes Lightweight Kernel Threads (LWKT), which are managed by the LWKT scheduler, a per-CPU fixed-priority mechanism designed for efficient execution of kernel-level tasks. In contrast, user threads are handled by a dedicated User Thread Scheduler, which selects and assigns one user thread per CPU before delegating to the LWKT scheduler, thereby preventing user-space activities from directly interfering with kernel operations. The LWKT system facilitates message passing between kernel threads via a lightweight port-based interface, allowing asynchronous or synchronous communication without necessitating full context switches in many scenarios, which supports high concurrency levels. This design enables the system to manage up to a million processes or threads, provided sufficient physical memory, by minimizing overhead in thread creation and inter-thread coordination. The scheduler is inherently SMP-aware, with each CPU maintaining an independent LWKT scheduler that assigns threads non-preemptively to specific processors, promoting scalability across multi-core systems. It supports up to 256 CPU threads while reducing lock contention through token-based serialization, where the LWKT scheduler integrates atomic operations to acquire tokens with minimal spinning before blocking, ensuring low-latency access in contended environments. Additionally, DragonFly BSD provides checkpointing capabilities, allowing processes to be suspended to disk for later resumption on the same or different machines, facilitating process migration and the preservation of long-running jobs; this feature integrates with virtual kernels to enable kernel-level mobility.

Locking and Shared Resources

DragonFly BSD employs a token-based locking system, known as LWKT serializing tokens, to protect shared kernel resources while enabling high concurrency and low contention. These tokens support both shared and exclusive modes, allowing multiple readers to access resources concurrently while writers acquire exclusive control, with atomic operations like atomic_cmpset*() ensuring consistency. Unlike traditional mutexes, tokens permit recursive acquisition and prioritize spinning over blocking to minimize overhead, releasing all held tokens if a thread blocks, which reduces contention in multiprocessor environments. This design facilitates efficient handling of shared data structures by optimizing for common read-heavy workloads, contributing to the kernel's ability to scale across many cores without excessive lock wars. Complementing this, DragonFly BSD features lockless memory allocators to avoid locking overhead in hot allocation paths. The kmalloc allocator is a per-CPU slab-based system that operates essentially without locks, distributing slabs across CPUs to enable parallel allocations and deallocations with minimal contention. For more specialized needs, objcache provides an object-oriented allocator tailored to frequent creation and destruction of specific object types, such as network buffers or filesystem inodes, also designed to be lockless and built atop kmalloc for efficiency. These allocators enhance overall performance by eliminating global locks in memory management, supporting high-throughput operations in multi-threaded contexts. Fine-grained locking permeates key subsystems, further bolstering scalability for shared resources. In the virtual memory (VM) system, locks are applied at the per-object level down to the physical map (pmap), a refinement completed in late 2011 that yielded substantial performance improvements on multi-core systems by reducing global contention. The network stack employs fine-grained locks alongside packet hashing to allow concurrent processing across CPUs, with major components such as IPFW operating with few locking collisions at multi-gigabit-per-second throughput. Similarly, disk I/O subsystems, including the AHCI and NVMe drivers, use per-queue locking and hardware parallelism to achieve contention-free operations, enabling high-bandwidth storage access without traditional monolithic locks. The process ID (PID), process group (PGRP), and session subsystems are engineered for extreme scaling without relying on traditional locks, accommodating up to one million processes through SMP-friendly algorithms. This design, implemented in 2013, leverages per-CPU structures and atomic operations to handle massive loads—tested successfully up to 900,000 processes—while maintaining stability and avoiding bottlenecks in process management. Such optimizations tie into the broader threading model but focus on resource protection to support dense workloads in virtualized or high-concurrency environments.
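
The allocator behavior described above can be observed from userland. A minimal sketch, assuming the -m and -s flags of vmstat(8) behave as documented on the BSDs:

    # List kernel malloc (kmalloc) zone statistics; each zone is a
    # slab-backed allocation type served from per-CPU slabs.
    vmstat -m | head -20

    # Summarize virtual memory counters, useful for correlating
    # allocator pressure with paging activity.
    vmstat -s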

Virtual Kernels

The virtual kernel (vkernel) feature in DragonFly BSD enables the execution of a complete DragonFly BSD kernel as a userland process within the host system, facilitating isolated kernel-level operations without impacting the host. Introduced in DragonFly BSD 1.8, released on January 30, 2007, vkernel was initially proposed by Matthew Dillon in September 2006 to address challenges in cache coherency and testing for clustering applications. This design allows developers to load and run kernel code in a contained environment, eliminating the need for system reboots during iterative testing and reducing boot sequence overhead. At its core, vkernel operates by treating the guest kernel as a single host process that can manage multiple virtual memory spaces (vmspaces) through system calls like vmspace_create() and vmspace_destroy(). This supports hosting multiple virtual kernels on a single host, each in isolated environments with options for shared or private memory allocation, such as specifying memory limits via command-line flags like -m 64m. Page faults are handled cooperatively between the host and guest kernels, with the host passing faults to the vkernel for resolution using a userspace pagetable (vpagetable) and mechanisms like vmspace_mmap(). Signal delivery employs efficient mailboxes via sigaction() with the SA_MAILBOX flag, allowing interruptions with EINTR to minimize overhead. Nested vkernels are possible due to recursive vmspace support, and the guest kernel links against libc for thread awareness, integrating with the host's scheduler for performance. This complements DragonFly's lightweight kernel threading model by enabling kernel code to execute in userland contexts. Key features include device passthrough and simulation, supporting up to 16 virtual disks (vkd), CD-ROMs (vcd), and network interfaces (vke) that map to host resources like tap(4) devices and bridges. For networking, vke interfaces connect via the host's bridge(4), such as bridging to a physical interface like re0, enabling simulated network environments without dedicated hardware. Device emulation relies on host primitives, including kqueue for timers and file I/O for disk access, ensuring efficient resource utilization. To enable vkernel, the host requires the sysctl vm.vkernel_enable=1, and guest kernels must be compiled with the VKERNEL or VKERNEL64 configuration option. Vkernel is primarily used for kernel debugging and testing, allowing safe experimentation with panics or faults—such as null-pointer dereferences—via tools like GDB and the kernel debugger (ddb), without risking the host system. Developers can attach GDB to the vkernel process using its process ID, perform backtraces with bt, and inspect memory via /proc/<pid>/mem, as demonstrated in tracing issues like bugs in sys_ktrace. Beyond debugging, it supports resource partitioning for testing and development of clustering features, such as single system image setups over networks, by providing isolated environments for validation without hardware dependencies.
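
A condensed setup sketch follows, assuming the vkernel(7) workflow; the -m flag is noted above, while the image path, size, and -r root-image flag are taken from vkernel(7) as we understand it and should be verified there:

    # On the host: permit virtual kernels to run.
    sysctl vm.vkernel_enable=1

    # Build a kernel with the VKERNEL64 configuration.
    cd /usr/src && make buildkernel KERNCONF=VKERNEL64

    # Create a sparse 2 GB image to back the guest's vkd root disk.
    dd if=/dev/zero of=/var/vkernel/root.img bs=1m seek=2048 count=0

    # Launch the guest with 64 MB of memory and the image as root.
    /var/vkernel/boot/kernel/kernel -m 64m -r /var/vkernel/root.img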

Device Management

DragonFly BSD employs a dynamic device filesystem known as DEVFS to manage device nodes within the /dev directory. DEVFS automatically populates and removes nodes as hardware is detected or detached, providing a scalable and efficient interface for accessing devices without manual intervention. The implementation supports cloning on open, which facilitates the creation of multiple instances of device drivers, enhancing flexibility for applications requiring isolated access. A key feature of DEVFS is its integration with block device serial numbers, enabling persistent identification of storage devices such as SATA disks, NVMe drives, optical media, and USB mass storage units. Serial numbers are probed and recorded in /dev/serno/, allowing systems to reference disks by unique identifiers rather than volatile names like /dev/ad0, which supports seamless disk migration across controllers and machines without reconfiguration. For example, a disk with serial number 9VMBWDM1 can be consistently accessed via /dev/serno/9VMBWDM1, ensuring stability in environments with frequent hardware changes. DragonFly BSD includes robust drivers for modern storage interfaces, including AHCI for Serial ATA controllers and NVMe for PCIe-based non-volatile memory controllers. The AHCI driver supports hotplug operations, particularly on AMD chipsets, allowing dynamic attachment and detachment of devices with minimal disruption. Similarly, the NVMe driver, implemented from scratch, handles multiple namespaces and queues per controller, enabling efficient multi-device configurations and high-performance I/O in enterprise storage setups. These drivers complement the system's device mapper (dm) framework, which facilitates layered device transformations. For secure storage, DragonFly BSD provides transparent disk encryption through the dm_target_crypt module within that framework, which is compatible with Linux's dm-crypt and supports LUKS volumes via cryptsetup. Additionally, tcplay serves as a BSD-licensed, drop-in compatible tool for creating and mapping TrueCrypt volumes, leveraging dm_target_crypt to unlock and map encrypted containers without proprietary dependencies. This encryption capability integrates with the broader storage stack, allowing encrypted devices to be treated as standard block devices for higher-level operations. The device I/O subsystem in DragonFly BSD is designed for low-contention access, minimizing locks to support high-throughput operations across multiple cores. This enables scalable handling of large-scale swap configurations, including up to four swap devices with a total capacity of 55 terabytes, where I/O is interleaved for optimal performance and requires only 1 MB of RAM per 1 GB of swap space. Such design choices ensure efficient resource utilization in demanding environments, with virtually no in-kernel bottlenecks impeding concurrent device access.
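
The following hypothetical session illustrates how serial-number device paths and dm_target_crypt combine; the serial number matches the example above, while the slice suffix, mapping name, and filesystem choice are illustrative:

    # Identify the disk by its persistent serial-number node.
    ls /dev/serno/

    # Initialize a LUKS container on a slice of that disk
    # (dm_target_crypt is compatible with Linux dm-crypt/LUKS).
    cryptsetup luksFormat /dev/serno/9VMBWDM1.s1

    # Unlock it; the cleartext device appears under /dev/mapper/.
    cryptsetup luksOpen /dev/serno/9VMBWDM1.s1 secure0

    # Treat the mapped device as an ordinary block device.
    newfs_hammer2 -L SECURE /dev/mapper/secure0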

Filesystems and Storage

HAMMER Filesystem

The HAMMER filesystem, developed by Matthew Dillon for DragonFly BSD, was introduced in version 2.0 on July 20, 2008. Designed from the ground up to address limitations in traditional BSD filesystems like UFS, it targets exabyte-scale storage capacities—up to 1 exabyte per filesystem—while incorporating master-slave replication for high-availability mirroring across networked nodes. This replication model allows a master pseudo-filesystem to propagate changes to one or more read-only slave instances, ensuring data consistency without multi-master complexity. HAMMER's core features emphasize reliability and temporal access. Instant crash recovery is achieved via intent logging, which records metadata operations as UNDO/REDO pairs in a dedicated log; upon remount after a crash, the filesystem replays these in seconds without requiring a full fsck scan. It supports 60-day rolling snapshots by default, automatically generated daily via cron jobs with no runtime performance overhead, as snapshots are lightweight references to prior transaction states rather than full copies. Historical file versions are tracked indefinitely until pruned, enabling users to access any past state of a file or directory using 64-bit transaction IDs (e.g., via the @@0x<transaction_id> syntax in paths), which facilitates fine-grained recovery from accidental deletions or modifications. The on-disk layout employs a single, global B-Tree to index all filesystem elements, including inodes, directories, and indirect blocks, providing logarithmic-time operations scalable to massive datasets. To enable isolation and replication, HAMMER introduces pseudo-filesystems (PFS), which function like independent mount points but share the underlying volume; the root PFS (ID 0) serves as the master, while additional PFSs can be created for slaves or segmented workloads. A single filesystem supports up to 65,536 PFSs across its structure, with volumes (up to 256 per filesystem, each up to 4 petabytes) providing the physical backing via disk slices or raw devices. All data and metadata are protected by 32-bit checksums to detect corruption. Performance is optimized through several mechanisms tailored for modern workloads. Background filesystem checks run asynchronously to verify integrity without blocking access, while the design decouples front-end user operations from back-end disk I/O, allowing bulk writes of sequential data streams to proceed without stalling foreground activity. Early architectural choices laid precursors for compression (planned as per-volume filters) and deduplication, where duplicate 64 KB blocks are identified and stored only once during reblocking operations, reducing storage redundancy in repetitive datasets. These elements ensure HAMMER remains efficient for both small-file metadata-heavy tasks and large-scale archival use. HAMMER's design has influenced subsequent developments, including its successor HAMMER2, which refines replication and adds native compression while retaining core principles.
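
A brief sketch of the history mechanism, assuming the hammer(8) history and snapshot directives; the file path and transaction ID are hypothetical:

    # List recorded transaction IDs for a file on a HAMMER volume.
    hammer history /home/notes.txt

    # Read the file as it existed at one of those transaction IDs,
    # using the @@0x<transaction_id> path syntax described above.
    cat /home/notes.txt@@0x00000001061a8ba0

    # Take a manual snapshot of the filesystem into a snapshot dir.
    hammer snapshot /home /home/snaps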

HAMMER2 Filesystem

HAMMER2 is a modern filesystem developed for DragonFly BSD as the successor to the original HAMMER filesystem, first shipped with DragonFly BSD 5.0 in October 2017 and becoming the default option in subsequent releases. It employs a block-level copy-on-write (COW) design, which enhances data integrity by avoiding in-place modifications and supports efficient crash recovery through consistent on-disk snapshots of the filesystem topology. This COW mechanism also improves overall efficiency by reducing fragmentation and enabling features like instantaneous writable snapshots, which are created by simply copying a 1KB pseudo-filesystem (PFS) root inode. Key enhancements in HAMMER2 include built-in compression using algorithms such as LZ4 or zlib, which can be configured per directory or file and applies to blocks up to 64KB, with achieved compression ratios varying widely with data patterns. The filesystem supports automatic deduplication, with live deduplication occurring during operations like file copying to share identical data blocks and minimize physical writes. Additionally, batch deduplication tools allow scanning for redundancies post-creation, and remote mounting is possible over NFS-like protocols via its clustering capabilities. HAMMER2 also provides directory-level quotas for space and inode usage tracking, along with support for multi-volume setups through device ganging, enabling distributed storage across independent devices. In terms of performance, HAMMER2 incorporates tiered storage arrangements via clustering, allowing nodes with varying hardware configurations, and fast cloning through its snapshot mechanism, which is nearly instantaneous and avoids full data duplication. The 6.4 series, starting from December 2022, introduced experimental remote mounting of HAMMER2 volumes, enhancing distributed access. Subsequent updates up to 6.4.2 in May 2025 addressed large-scale operation issues, such as fixing runaway kernel memory use during bulkfree scans on deep directory trees and improving lookup performance by reducing unnecessary retries on locked elements. While HAMMER2 maintains compatibility with legacy HAMMER volumes through continued support for mounting them, its focus remains on advancing modern storage paradigms.
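
The per-directory compression and snapshot behavior can be sketched with the hammer2(8) utility; directive spellings follow hammer2(8) as we understand it, and the paths are illustrative:

    # Compress new blocks written under /build with LZ4.
    hammer2 setcomp lz4 /build

    # Take a nearly instantaneous snapshot of the mounted PFS.
    hammer2 snapshot /build build-20250509

    # Scan for freeable blocks; the same pass underpins batch
    # deduplication.
    hammer2 bulkfree /build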

Auxiliary Filesystems

DragonFly BSD provides several auxiliary filesystems and mechanisms that support specialized storage needs, complementing its primary filesystems by enabling efficient mounting, temporary storage, caching, and dynamic linking without relying on core persistent storage details. These features enhance flexibility for system administration, testing, and performance optimization in diverse environments. NULLFS serves as a loopback filesystem layer, allowing a directory or filesystem to be mounted multiple times with varying options, such as read-only access or stacking, which facilitates isolated environments like jails and simplifies administrative tasks without data duplication. This mechanism, inherited and refined from BSD traditions, ensures low-overhead access to underlying data structures, making it ideal for scenarios requiring multiple views of the same content. TMPFS implements a memory-based temporary filesystem that stores both metadata and file data in RAM, backed by swap space only under memory pressure to minimize I/O latency and contention. Integrated closely with DragonFly's virtual memory subsystem, it supports scalable operations for runtime data like logs or session files, with recent enhancements clustering writes to reduce paging overhead by up to four times in low-memory conditions. This design prioritizes speed for short-lived data, automatically mounting points like /var/run via an rc.conf setting for immediate efficiency. SWAPCACHE extends swap space functionality by designating an SSD to cache clean filesystem data and metadata, accelerating I/O on hybrid storage setups where traditional disks handle bulk storage. Configured via simple partitioning and activation commands, it transparently boosts read performance for frequently accessed blocks, yielding substantial gains in read-heavy workloads with minimal hardware additions. This caching layer operates alongside primary filesystems, providing a non-intrusive performance uplift without altering underlying storage layouts. Variant symlinks introduce context-sensitive symbolic links that resolve dynamically based on process or user attributes, using embedded variables like ${USER} or ${VARIANT} to point to environment-specific targets. Managed through the varsym(2) interface and system-wide configurations in varsym.conf(5), they enable applications and administrators to create adaptive paths, such as user-specific binaries or architecture-dependent libraries, reducing manual configuration overhead. Enabled via the sysctl vfs.varsym_enable, this feature has been a core tool in DragonFly since its early development, offering precise control over symlink behavior without runtime scripting.
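
A small sketch of variant symlinks under the sysctl named above; the variable syntax follows ln(1), while the paths and the varsym(1) invocation are illustrative:

    # Allow variant symlink resolution system-wide.
    sysctl vfs.varsym_enable=1

    # Create a symlink whose target depends on the USER variable
    # (single quotes keep the shell from expanding it).
    ln -s 'profiles/${USER}.conf' /etc/app.conf

    # Set the variable for this context, then resolve the link.
    varsym USER=alice
    cat /etc/app.conf    # now reads /etc/profiles/alice.conf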

Networking and Services

CARP Implementation

DragonFly BSD includes native support for the Common Address Redundancy Protocol (CARP), a protocol that enables multiple hosts on the same local network to share IPv4 and IPv6 addresses for high-availability networking. CARP's primary function is to provide failover by allowing a backup host to assume the shared IP addresses if the master host becomes unavailable, ensuring continuous service availability. It also supports load balancing across hosts by distributing traffic based on configured parameters. The protocol operates in either preemptive or non-preemptive mode, determined by the net.inet.carp.preempt sysctl value (default 1, enabling preemption). In preemptive mode, a backup with higher priority (lower advertisement skew) automatically assumes the master role upon master failure, while non-preemptive mode requires manual intervention or the loss of the current master to change roles. CARP uses virtual host IDs (VHIDs) ranging from 1 to 255 and a shared password for SHA1-HMAC authentication to secure group membership and prevent unauthorized takeovers. Configuration of carp(4) interfaces occurs primarily through the ifconfig utility at runtime or persistently via /etc/rc.conf by adding the interface to cloned_interfaces. Key parameters include advbase (base advertisement interval in seconds, 1-255), advskew (skew value 0-254 to influence master election, where lower values prioritize mastery), vhid, pass (the shared password), and the parent physical interface via carpdev. For example, a basic setup might use ifconfig carp0 create vhid 1 pass secret advskew 0 192.0.2.1/24 on the master host. CARP requires enabling via the sysctl net.inet.carp.allow=1 (enabled by default). Failure detection and graceful degradation are handled through demotion counters, which track interface or service readiness and adjust advertisement skew dynamically to lower a host's priority if issues arise, such as a downed physical link or unsynchronized state. Counters can be viewed with ifconfig -g groupname and incremented manually (e.g., via ifconfig carp0 -demote) to simulate failures or prevent preemption during maintenance. This mechanism ensures reliable failover without unnecessary role switches. For secure redundancy, CARP integrates with DragonFly BSD's firewall frameworks: PF requires explicit rules to pass CARP protocol (IP protocol 112) packets, such as pass out on $ext_if proto carp keep state, while IPFW needs corresponding rule allowances to avoid blocking advertisements. Both firewalls can filter IPv4 and IPv6 CARP traffic, allowing filtered failover in production environments like clustered firewalls or gateways. In clustered services, CARP complements filesystem replication (e.g., HAMMER mirroring) by providing network-layer redundancy without overlapping storage concerns.
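
Building on the ifconfig example above, a persistent two-host failover pair might be configured as follows; addresses and the password are illustrative:

    # /etc/rc.conf on the master (advskew 0 wins the election)
    cloned_interfaces="carp0"
    ifconfig_carp0="vhid 1 pass secret advskew 0 192.0.2.1/24"

    # /etc/rc.conf on the backup (higher advskew, lower priority)
    cloned_interfaces="carp0"
    ifconfig_carp0="vhid 1 pass secret advskew 100 192.0.2.1/24"

    # pf.conf on both hosts: let CARP advertisements through.
    pass out on $ext_if proto carp keep state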

Time Synchronization

DragonFly BSD employs DNTPD, a custom lightweight implementation of a Network Time Protocol (NTP) client daemon, designed specifically to synchronize the system clock with external time sources while minimizing resource usage. Unlike traditional NTP daemons, DNTPD leverages staggered linear regression and correlation analysis to achieve stratum-1 level accuracy without requiring a local GPS receiver, enabling precise time and frequency corrections even in challenging network conditions. This approach accumulates regressions at the nominal polling rate—defaulting to 300 seconds—and requires a high correlation threshold (≥0.99 for 8 samples or ≥0.96 for 16 samples) before applying adjustments, allowing for errors as low as 20 milliseconds, or 1 millisecond with low-latency sources. DNTPD supports pool configurations by allowing multiple server targets in its setup, facilitating redundancy and resilience against individual source failures, such as network outages or 1-second offsets. It integrates with the kernel via the adjtime(2) system call for gradual clock adjustments, avoiding abrupt jumps that could disrupt ongoing operations; coarse offsets exceeding 2 minutes are corrected initially if needed, while finer sliding offsets and frequency drifts within 2 parts per million (ppm) are handled on an ongoing basis. Configuration occurs through the /etc/dntpd.conf file, which lists servers in a simple "server <hostname>" format—the default pulls from the ntp.org pool for the DragonFly zone, ensuring at least three sources for robust majority-based decisions. DNTPD excels in low-bandwidth environments compared to standard NTP implementations, as its streamlined design reduces overhead while maintaining high accuracy through regression-line intercept methods and standard deviation summation. In server environments, DNTPD provides long-term drift correction by continuously monitoring and adjusting the clock frequency based on regression slopes, ensuring sustained accuracy over extended periods. It handles leap seconds through standard NTP protocol mechanisms, inserting or deleting them as announced by upstream sources to maintain alignment with Coordinated Universal Time (UTC). Overall, these features make DNTPD particularly suitable for resource-constrained systems requiring reliable time management without the complexity of full NTP server functionality.
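
A minimal configuration following the "server <hostname>" format above; the pool hostnames are examples, and the rc.conf variable is assumed to follow the usual rc(8) naming convention:

    # /etc/dntpd.conf: at least three sources for majority decisions
    server 0.pool.ntp.org
    server 1.pool.ntp.org
    server 2.pool.ntp.org

    # /etc/rc.conf: start the daemon at boot
    dntpd_enable="YES"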

Asynchronous Operations

DragonFly BSD enhances system performance through asynchronous input/output (I/O) mechanisms in its networking and filesystem layers, allowing non-blocking operations to handle high loads efficiently without stalling critical paths. The NFSv3 implementation features full RPC asynchronization, replacing traditional userland nfsiod(8) threads with just two dedicated kernel threads that manage all client-side I/O and non-blocking file operations over the network. This approach prevents bottlenecks from misordered read-ahead requests, improving reliability and throughput for distributed file access. The network stack supports high-concurrency I/O via lightweight kernel threading (LWKT) messaging, where most hot-path operations are asynchronous and thread-serialized for scalability. To optimize for symmetric multiprocessing (SMP) environments, it employs serializing token locking, which serializes broad code sections with minimal contention, allowing recursive acquisition and automatic release on blocking to boost parallel I/O performance. DragonFly BSD also ships the DragonFly Mail Agent (DMA), a lightweight mail transport agent designed for efficient local delivery and remote transfers, leveraging the kernel's asynchronous networking primitives for non-blocking mail handling in resource-constrained setups. Starting in the 6.x release series, optimizations enable experimental remote mounting of HAMMER2 volumes directly over the network, reducing latency for distributed storage while building on the asynchronous NFSv3 work for seamless integration with remote filesystem features.
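
From the client side this asynchronous machinery is transparent; mounting an NFSv3 export uses the standard mount_nfs(8) interface (host and paths illustrative, the -3 flag per mount_nfs(8)):

    # Mount an NFSv3 export; read-ahead is serviced by the two
    # in-kernel client threads rather than userland nfsiod(8).
    mount_nfs -3 fileserver:/export/data /mnt/data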

Software Management and Distribution

Package Management

DragonFly BSD employs DPorts as its primary package management system for installing and maintaining third-party software. DPorts is a ports collection derived from FreeBSD's Ports Collection, adapted with minimal modifications to ensure compatibility while incorporating DragonFly-specific ports. This system allows users to build applications from source or install pre-compiled packages, emphasizing a stable and familiar environment for administrators. Historically, DragonFly BSD relied on NetBSD's pkgsrc for package management up through version 3.4, which provided source-based compilation and binary support across multiple BSD variants. The project transitioned to DPorts starting with the 3.6 release in 2013 to enhance compatibility with DragonFly's evolving kernel and userland, as well as to streamline maintenance by leveraging FreeBSD's extensive porting efforts. This shift more than doubled the available software options and aligned DragonFly closer to FreeBSD's software ecosystem without adopting its full base system. DPorts undergoes quarterly merges from FreeBSD's stable ports branches to prioritize reliability over the latest upstream changes, with the 2024Q3 branch fully integrated and the 2025Q2 branch under development as of November 2025. These merges ensure a curated set of ports that compile and run effectively on DragonFly, supporting thousands of applications ranging from desktop environments to servers. Binary packages are available primarily for the x86_64 architecture, DragonFly's sole supported platform since version 4.0. Software installation and updates are handled via the pkg(8) tool, a lightweight binary package manager that supports commands like pkg install <package> for adding applications, pkg upgrade for system-wide updates, and pkg delete for removal. For source builds, users employ the standard make utility within the DPorts tree, fetched via git clone from the official repository. The system focuses on conflict-free upgrades and stability, with features like package auditing (pkg audit) to detect vulnerabilities, making it suitable for production environments rather than rapid development cycles.
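
Typical usage, with package names as examples and the GitHub mirror of DPorts standing in for the official repository:

    # Install, upgrade, audit, and remove binary packages.
    pkg install vim
    pkg upgrade
    pkg audit -F        # fetch the vulnerability database and check
    pkg delete vim

    # For source builds, fetch the DPorts tree and build one port.
    git clone https://github.com/DragonFlyBSD/DPorts.git /usr/dports
    cd /usr/dports/editors/vim && make install clean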

Application-Level Features

DragonFly BSD provides several application-level features that enhance runtime management and flexibility, leveraging its unique kernel and filesystem designs for seamless operation without interrupting ongoing processes. One key feature is process checkpointing, which allows applications to be suspended and saved to disk at any time, enabling later resumption on the same system or migration to another compatible system. This is achieved through the sys_checkpoint(2) system call, which serializes the process state—including multi-threaded processes—into a file, and the checkpt(1) utility for restoration. The checkpoint image is stored on a HAMMER or HAMMER2 filesystem, integrating it with the filesystem's snapshot capabilities to version the process alongside directory contents, thus providing per-process versioning without downtime. For example, applications can handle the SIGCKPT signal to perform cleanup before checkpointing, and upon resume, they receive a positive return value from the system call to detect the event. Limitations include incomplete support for devices, sockets, and pipes, making it suitable primarily for simple or compute-bound applications rather than those reliant on network connections. Complementing this, DragonFly BSD supports directory-level snapshots via the HAMMER and HAMMER2 filesystems, which applications can leverage for versioning data without service interruption. HAMMER2 enables instant, writable snapshots by copying the volume header's root block table, allowing mounted snapshots for ongoing access and modification. Automatic snapshotting is configurable through /etc/periodic.conf, retaining up to 60 days of daily snapshots and finer-grained 30-second intervals for recent history, facilitating recovery operations with the undo(1) tool. These filesystem snapshots provide applications with robust, non-disruptive backup and recovery at the directory level, directly supporting the versioning of checkpointed files. Variant symlinks offer dynamic linking for applications, resolving based on runtime context such as user, group, UID, jail, or architecture via embedded variables like ${USER} or ${ARCH}. Implemented through varsym(2), these symlinks allow application authors to create configuration paths that adapt automatically—for instance, pointing to user-specific libraries or architecture-appropriate binaries—enhancing portability and management without hardcoded paths. System-wide variables are managed via varsym.conf(5), enabling administrators to control resolutions globally or per-context. These features integrate with DragonFly BSD's DPorts system, which builds and deploys applications from ports, allowing developers to incorporate checkpointing and symlink support natively during compilation for optimized runtime behavior.
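
A sketch of one checkpoint/restore cycle, assuming the SIGCKPT-triggered flow of sys_checkpoint(2); the program name and the checkpoint filename are illustrative:

    # Start a long-running, compute-bound job.
    ./simulate &

    # Ask the kernel to checkpoint it; the process state is
    # serialized to a .ckpt file (see sys_checkpoint(2)).
    kill -CKPT $!

    # Later, possibly on another machine with the same binary and
    # libraries, resume from the checkpoint file.
    checkpt simulate.ckpt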

Release History and Media

DragonFly BSD's first stable release, version 1.0, was made available on July 12, 2004. Subsequent major releases have followed a series-based structure, with version 5.0 released on October 16, 2017, introducing bootable support for the HAMMER2 filesystem as an experimental option alongside the established HAMMER1. Version 6.0 arrived on May 10, 2021, featuring revamped VFS caching and enhancements to HAMMER2, including multi-volume support. The 6.2 series began with version 6.2.1 on January 9, 2022, incorporating the NVMM type-2 hypervisor for hardware-accelerated virtualization and initial amdgpu driver support matching Linux 4.19. The project maintains a release cadence centered on stable branches, where major versions introduce significant features and point releases address security vulnerabilities, bugs, and stability improvements without altering core functionality. For instance, the 6.4 series started with version 6.4.0 on December 30, 2022, and progressed through 6.4.1 on April 30, 2025, to 6.4.2 on May 9, 2025, the latter including fixes for IPv6-related panics, installer issues with disk sizing, and crashes in userland programs generating many subprocesses. Distribution media for DragonFly BSD targets x86_64 architectures and includes live ISO images that boot directly for installation or testing, encompassing the base system and DPorts package management tools in a compact, DVD-sized format of approximately 700 MB uncompressed. USB installers are provided as raw disk images suitable for writing to flash drives via tools like dd, enabling portable installations; a sketch follows the table below. Netboot options are supported through PXE-compatible images and daily snapshots available on official mirrors, facilitating network-based deployments. Recent updates in the 6.4 series build on prior work by stabilizing the NVMM module for type-2 hypervisors and advancing experimental remote mounting capabilities for HAMMER2 volumes, allowing networked access to filesystem resources.
Release Series | Initial Release Date | Key Point Releases | Notable Features
5.0 | October 16, 2017 | 5.0.1 (Nov 6, 2017), 5.0.2 (Dec 4, 2017) | HAMMER2 bootable support (experimental)
6.0 | May 10, 2021 | 6.0.1 (Oct 12, 2021) | VFS caching revamp, HAMMER2 multi-volume
6.2 | January 9, 2022 (6.2.1) | 6.2.2 (Jun 9, 2022) | NVMM hypervisor, amdgpu driver
6.4 | December 30, 2022 (6.4.0) | 6.4.1 (Apr 30, 2025), 6.4.2 (May 9, 2025) | IPv6 and installer fixes, remote HAMMER2 experiments
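
As referenced above, writing the USB installer image with dd might look like this; the release filename follows the project's naming convention, and the target device node is illustrative (verify it before writing):

    # Write the raw installer image to a USB flash drive.
    dd if=dfly-x86_64-6.4.2_REL.img of=/dev/da8 bs=1m
    sync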
