runit
Runit is a cross-platform Unix init scheme with service supervision, serving as a replacement for SysV init and other init systems.[1] It manages the booting, running, and shutdown of Unix-like systems while providing reliable oversight of individual services to ensure they start, run, and restart as needed.[1]
Developed by Gerrit Pape, runit emphasizes simplicity, reliability, and minimal size, with its core binaries optimized for efficiency, such as an 8.5k runit binary compiled with dietlibc.[1] First released in the early 2000s, it has been packaged in Debian GNU/Linux since 2002 and continues to be adopted in modern systems including Artix Linux, Devuan, Gentoo, and Void Linux for its lightweight approach to process management.[1][2]
The system comprises three primary components: runit, which runs as process ID 1 (PID 1) to orchestrate system stages like boot, runlevels, and shutdown; runsv, a per-service supervisor that monitors and restarts processes as configured; and sv, a command-line tool for controlling services (e.g., starting, stopping, or checking status).[1] These elements enable automatic handling of service dependencies and runlevels without complex scripting, making runit suitable for embedded systems, servers, and desktops across platforms like GNU/Linux, *BSD, macOS, and Solaris.[1] Its design draws from principles of daemontools by Daniel J. Bernstein, prioritizing supervision to enhance system uptime and fault tolerance.[1]
History and Development
Origins and Inspiration
runit was created by Gerrit Pape in 2001 as a lightweight alternative to SysV init and other traditional init schemes, aiming to provide a more reliable and compact system initialization process.[3][1]
The project drew significant inspiration from Daniel J. Bernstein's daemontools, a toolkit released in 1997 that emphasized process supervision for Unix services, which runit reimplemented and extended to include full init functionality.[3][4]
Initial development focused on goals of simplicity, reliability, and portability, enabling runit to operate across various Unix-like systems including GNU/Linux, BSD variants, and macOS without requiring extensive modifications.[1]
Early development utilized CVS for version control, with the project later migrating to Git, as evidenced by its current repository on GitHub, and maintenance has continued actively.[5]
Key Releases and Maintenance
runit reached version 1.0 in 2004, the first stable release by its author Gerrit Pape of the lightweight init and service supervision system. This release established the core framework, drawing on Daniel J. Bernstein's daemontools for its process supervision concepts.[6]
Subsequent releases evolved incrementally, focusing on enhancements rather than overhauls. Key updates included the introduction of the sv command in version 1.3.x for unified service control, replacing earlier tools like runsvctrl and runsvstat, and improvements to svlogd for timestamp handling and UDP logging in versions 1.5.x through 1.7.x.[7] Further refinements in versions 1.8.0 and later added support for platforms like AIX and Upstart, along with better zombie reaping in runit to enhance supervision reliability.[7] Cross-platform compatibility expanded with instructions for Mac OS X integration in version 1.3.0 and beyond.[7]
The latest stable release is version 2.3.0, announced on November 9, 2025, which includes ports to C23 (with GCC 15 as default), fixes to the CONT signal handler, automatic restarts of stage 2 on uncaught signals, patches to utmpset and runsvchdir, updated documentation, and incorporates patches from the supervision mailing list for sv and svlogd.[8] Previous releases, such as version 2.2.0 from September 29, 2024, introduced options like -C for chpst to set working directories and optional synchronization control in runit via /etc/runit/nosync.[9] Minor bug fixes and patches, such as those for sv command behavior in LSB mode, have continued to emphasize stability without major architectural changes since the project's inception.
Ongoing maintenance is handled by Gerrit Pape through the official site at smarden.org, where development remains active but conservative, reflecting runit's mature and reliable design.[1] Contributions are accepted via GitHub, with discussions on the supervision mailing list.[1] runit is distributed under a three-clause BSD-like permissive license, allowing broad reuse and modification.[10]
Design Principles
Core Philosophy
runit embodies a minimalist philosophy in its design as a UNIX init scheme and service supervisor, prioritizing reliability, small size, and simplicity over feature bloat. Optimized for minimal code footprint, runit avoids unnecessary dependencies and dynamic memory allocation, with its core binaries comprising just a few hundred lines of code each—such as runit.c at 330 lines and runsv.c at 509 lines—allowing for a process 1 binary as small as 8.5 KB when compiled with dietlibc. This approach ensures that the init process remains lightweight and focused solely on essential functions like booting, supervision, and shutdown, replacing traditional pid-guessing tools with a robust supervision interface.
At the heart of runit's architecture is its supervision model, which provides continuous monitoring and automatic restarting of services to enhance system reliability. Each service is overseen by a dedicated runsv process that watches for failures and restarts the service upon exit, enforcing a 1-second delay to prevent rapid restart loops while maintaining uptime. This model isolates services into individual processes, eliminating the complexity of multi-process daemons and enabling precise control without interference from pid files or unreliable kill commands. By default, services are restarted indefinitely unless configured otherwise, ensuring fault tolerance in production environments.[11][12]
runit structures system initialization into three distinct stages to separate boot phases without relying on traditional runlevels, promoting clarity and modularity. Stage 1 executes one-time initialization tasks via /etc/runit/1, such as mounting filesystems; Stage 2 launches the runsvdir supervisor for ongoing service management; and Stage 3 handles shutdown or reboot procedures through /etc/runit/3. This staged approach allows for parallel service startup and shutdown, contributing to fast boot times and orderly halts.[11]
Emphasizing portability and efficiency, runit is compatible with various UNIX kernels including GNU/Linux, BSD, Solaris, and macOS, without requiring kernel modules or platform-specific code. Its separation philosophy further isolates services by running them independently with distinct process states and user IDs, while logging is handled separately by svlogd to avoid interference and enable focused error capture. Drawing on the design of daemontools, runit refines these concepts into a cohesive, high-performance init system.[11]
Boot Process
runit, functioning as the init process (PID 1), orchestrates the system's boot, runtime operation, and shutdown through a structured three-stage mechanism, emphasizing simplicity and reliability without reliance on traditional runlevels.[13] This approach allows for a straightforward initialization sequence where each stage executes specific scripts in /etc/runit/, enabling system administrators to customize early boot tasks while runit manages process supervision.[14]
In Stage 1, runit executes the script /etc/runit/1 to perform essential one-time setup tasks, such as mounting necessary filesystems, configuring the hostname, and preparing the environment by setting the PATH variable and running initialization scripts like /etc/init.d/rcS.[15] No services are started at this phase; the focus remains on bringing the kernel into a basic usable state. runit waits for this script to complete normally, but if /etc/runit/1 exits with code 100 or crashes repeatedly, runit may skip to shutdown procedures or provide an emergency shell on /dev/console for recovery.[14] This stage ensures foundational system readiness without introducing supervisable processes.
Stage 2 follows upon successful completion of Stage 1, where runit launches /etc/runit/2 to handle core system initialization. This script typically sets up console access by starting getty processes on virtual terminals and initiates service supervision by invoking runsvdir to scan and monitor directories in /etc/sv (or /etc/service in some configurations), launching and overseeing daemons as defined therein.[16] The script is designed not to terminate under normal operation, maintaining the system's multi-user state; if it exits with code 111 or crashes, runit automatically restarts it to ensure continuity. This stage represents the ongoing runtime phase, where supervised services, including those for logging and restarts of failed processes, operate under runit's oversight.[17]
For shutdown, runit receives a signal (such as CONT when /etc/runit/stopit is present) or detects Stage 2 termination, promptly ending Stage 2 and executing /etc/runit/3 to gracefully signal all supervised services to finish via commands like sv force-stop on the /etc/sv directories, allowing up to a timeout for clean termination.[18] Following this, /etc/runit/3 invokes shutdown scripts such as /etc/init.d/rc 0 for halt or equivalent, performing cleanup tasks like unmounting filesystems. After /etc/runit/3 completes, if /etc/runit/reboot exists and is executable, runit reboots the system; otherwise, it halts the system, optionally skipping sync() if /etc/runit/nosync is present.[17]
runit eschews traditional runlevels, including single-user mode, in favor of flexible service management; recovery or minimal operation is achieved by configuring a limited set of services in a separate runsvdir directory (e.g., via kernel boot parameters) or relying on the emergency shell in Stage 1 for troubleshooting without full initialization.[14]
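The three stages described above can be sketched as minimal script skeletons. These are illustrative only; each would be a separate executable file beginning with its shebang, and real distributions ship considerably more elaborate versions of these scripts. The hostname and mount commands are placeholder examples of typical Stage 1 work.

```shell
#!/bin/sh
# /etc/runit/1 -- stage 1: one-time initialization, no services yet.
mount -t proc proc /proc
mount -t sysfs sysfs /sys
hostname myhost            # hypothetical hostname

#!/bin/sh
# /etc/runit/2 -- stage 2: launch supervision; should not exit normally.
PATH=/usr/sbin:/usr/bin:/sbin:/bin
exec runsvdir -P /etc/service

#!/bin/sh
# /etc/runit/3 -- stage 3: shutdown; stop all supervised services.
sv force-stop /etc/service/*
# ...followed by distribution-specific cleanup such as unmounting filesystems.
```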
Components
Main Tools
The runit init system comprises a suite of lightweight, interconnected executables designed to manage system initialization and service supervision on Unix-like operating systems. These tools emphasize simplicity, reliability, and minimal resource usage, with runit serving as the central init process and others handling per-service oversight and utilities.[1]
runit acts as the overall init program, running as Unix process ID 1 (PID 1) to orchestrate system booting, operation, and shutdown. It executes three sequential stages: stage 1 runs one-time boot tasks from /etc/runit/1, stage 2 typically launches service supervision from /etc/runit/2 (often invoking runsvdir), and stage 3 handles shutdown via /etc/runit/3, either rebooting or halting the system. If stage 1 or 2 fails under specific exit codes, runit skips to stage 3 for graceful termination. This structure replaces traditional SysV init while providing dependency-free service management.[19]
runsv functions as the per-service supervisor, monitoring and maintaining individual services to ensure continuous operation. It changes to a service's directory, executes the ./run script to start the service, and automatically restarts it if it terminates, after executing the ./finish script (if present) for cleanup. runsv also supports optional logging by piping service output to a log subprocess in service/log, and it maintains service status in files like supervise/status for runtime tracking. Multiple runsv instances can run concurrently, each dedicated to one service. Automatic restarts can be prevented by the presence of a ./down file or via control commands.[20]
runsvdir serves as the directory scanner, overseeing a collection of services by launching and monitoring runsv processes for each subdirectory in a specified service directory, such as /service. It rescans the directory every 5 seconds for changes, limits supervision to up to 1000 subdirectories, and restarts any terminated runsv instance to maintain supervision. This tool enables runit to handle dynamic service sets without manual intervention, supporting runlevels through directory symlinks. Service subdirectories must contain the required ./run script for runsv to initialize them.[21]
sv provides a command-line interface for controlling and querying services supervised by runsv. It allows operators to start, stop, restart, or check the status of services by interacting with control pipes in the service's supervise directory, supporting signals like TERM or KILL. sv operates on a default service directory like /service, with options for verbose output or custom timeouts, and can emulate LSB init scripts via symlinks.[22]
Auxiliary tools extend runit's functionality for process management and logging. chpst modifies the process state before executing a program, enabling changes to user/group IDs, environment variables, working directories, or resource limits—commonly used in ./run scripts to run services under restricted privileges. svlogd acts as the logging daemon, reading service output from standard input, filtering messages if needed, and writing to rotated log files in a specified directory, with defaults for 1,000,000-byte files and up to 10 archives. runsvchdir facilitates switching service directories for runsvdir by updating symlinks like /service to point to new sets, typically from /etc/runit/runsvdir/current, allowing seamless transitions between runlevels.[23][24][25]
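As an illustration of chpst in a ./run script, the following sketch drops privileges and caps memory before exec'ing a daemon; the daemon name, user, and limit are hypothetical, and the ./env directory is an optional envdir read by chpst's -e option.

```shell
#!/bin/sh
# Hypothetical ./run script: start "mydaemon" as user "svcuser",
# with a 100 MB memory limit and environment taken from ./env.
exec 2>&1
exec chpst -u svcuser -m 104857600 -e ./env /usr/bin/mydaemon --foreground
```

Because chpst exec's the target program, the daemon runs directly under runsv with the adjusted process state, rather than behind an extra shell process.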
Service Directory Structure
In runit, each service is managed through a dedicated directory, typically located under /etc/sv/<service-name>, where <service-name> is the identifier for the specific service such as sshd or httpd. This directory serves as the central configuration point and must contain at least one required file: an executable script named run. The run script is responsible for launching the service process in the foreground, often using a shebang like #!/bin/sh followed by an exec command to replace the shell with the service daemon, ensuring seamless supervision by the runsv process.[26]
Optionally, the service directory may include a finish script, which is an executable that runit invokes automatically after the run script terminates, passing the exit code and status as arguments to handle any necessary cleanup tasks, such as resource deallocation or state reset. Another optional file is down, a non-executable empty file whose mere presence instructs the supervisor to keep the service in a stopped state, preventing the run script from starting until the file is removed or the service is explicitly commanded otherwise.[26]
For logging, the service directory can contain a subdirectory named log, which itself follows a similar structure with its own run script dedicated to invoking a logging daemon like svlogd. This setup pipes the output from the main service's run script (including stdout and stderr) to the logger, allowing logs to be handled separately and rotated independently, often directing output to timestamped files in a specified directory. The log subdirectory may also include a finish script for post-termination logging cleanup if needed.[26]
To enable a service for supervision, runit supports creating symbolic links from a runtime scan directory (commonly /var/service for system-wide services or ~/service for user services) to the corresponding service directory in /etc/sv. The runsvdir process continuously monitors this scan directory, spawning a runsv instance for each linked subdirectory to initiate and oversee the service without requiring manual intervention.[27]
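The layout described above can be built by hand. The following sketch assembles a hypothetical service directory, using /tmp/sv-demo in place of /etc/sv and a placeholder my-daemon binary; the final symlink step is shown as a comment because it would normally require root.

```shell
# Build a hypothetical runit service directory under /tmp/sv-demo.
mkdir -p /tmp/sv-demo/mydaemon/log

# Required: the run script starts the service in the foreground.
cat > /tmp/sv-demo/mydaemon/run <<'EOF'
#!/bin/sh
exec 2>&1
exec my-daemon --foreground
EOF

# Optional: log/run pipes the service's output to svlogd.
cat > /tmp/sv-demo/mydaemon/log/run <<'EOF'
#!/bin/sh
exec svlogd -tt /var/log/mydaemon
EOF

chmod +x /tmp/sv-demo/mydaemon/run /tmp/sv-demo/mydaemon/log/run

# Enabling would normally be a symlink into the scan directory, e.g.:
# ln -s /etc/sv/mydaemon /var/service/mydaemon
```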
Usage
Configuring Services
To configure a service in runit, create a dedicated directory for it, typically under a path like /etc/sv/<service-name>, containing an executable run script that starts the service process.[1] The run script must begin with a shebang (e.g., #!/bin/sh) and use exec to replace the script process with the service binary, ensuring the supervisor can monitor it directly.[20] For environment control, wrap the exec command with chpst to set user ownership, resource limits, or other parameters; for instance, exec chpst -u <user> <command> runs the service as a specific user without requiring root privileges throughout.
Logging is configured by adding a log subdirectory within the service directory, containing its own run script that pipes output to svlogd for rotation and storage. The main run script should redirect stdout and stderr (e.g., via exec 2>&1) to enable this piping. A typical log/run script is #!/bin/sh followed by exec svlogd /var/log/<service-name>, with svlogd rotating files in the log directory based on size or age via an optional config file (e.g., the line s1000000 to rotate at 1,000,000 bytes).
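A rotation policy of this kind might be written as follows; /tmp/log-demo here stands in for a real /var/log/<service-name> directory, since writing under /var/log would require root. The directives shown follow the svlogd config file format, where each line begins with a single-letter key.

```shell
# Write an svlogd config file (illustrative path).
mkdir -p /tmp/log-demo
cat > /tmp/log-demo/config <<'EOF'
s1000000
n10
EOF
# s1000000: rotate "current" once it reaches 1,000,000 bytes
# n10:      retain at most 10 rotated log files
```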
To enable the service, create a symlink from the service directory to the supervision scan path, such as /var/service/<service-name> -> /etc/sv/<service-name>, allowing runsvdir to automatically detect and supervise it during boot or runtime.[1] Runit lacks a built-in dependency resolver, so dependencies are handled manually in scripts—e.g., by invoking sv start <dependency> in the run script or using wait loops like while ! nc -z localhost <port>; do sleep 1; done to poll for readiness.[28]
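Putting those manual dependency techniques together, a run script for a service that needs another service first might look like this sketch; the service names, user, and port are hypothetical, and nc is assumed to be available.

```shell
#!/bin/sh
# Hypothetical ./run for a web app that needs a local database first.
exec 2>&1

# Ask the dependency's supervisor to start it (fails if unsupervised).
sv start postgresql || exit 1

# Poll until the database accepts connections on its port.
while ! nc -z localhost 5432; do
    sleep 1
done

exec chpst -u webapp /usr/bin/webapp --foreground
```

If the dependency is not yet ready, the script exits or keeps polling, and runsv's restart-with-delay behavior retries it until the dependency comes up.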
For a simple daemon like cron, the service directory /etc/sv/cron might contain a run script such as:
#!/bin/sh
exec 2>&1
exec chpst -u root /usr/sbin/crond -f
This runs cron in the foreground as root, with output logged via the accompanying log setup.[29] For a network service like SSH, /etc/sv/sshd/run could be:
#!/bin/sh
exec 2>&1
exec chpst -u sshd /usr/sbin/sshd -D
This runs sshd in the foreground under the sshd user (the -D flag prevents it from detaching), with supervision restarting it if it exits.[29] In both cases, the log/run script pipes to svlogd for persistent logging, and enabling involves symlinking to /var/service.
Controlling and Monitoring Services
runit provides tools for runtime control and monitoring of services through the sv command, which interacts with the runsv supervisor process for each service directory.[22] The sv up <service> command starts a service if it is not running or restarts it if it has stopped, sending necessary signals to initiate the process defined in the service's ./run script.[22] Conversely, sv down <service> stops the service by sending a TERM signal followed by CONT, preventing automatic restarts until explicitly started again.[22] The -v option for these commands waits up to 7 seconds and reports the resulting status, aiding in verification during operations.[22]
For synchronization and readiness checks, sv wait <service> pauses until the service reaches the up or down state, timing out after 7 seconds if unsuccessful, while sv check <service> verifies the up state and executes the optional ./check script in the service directory to confirm readiness, also with a 7-second timeout.[22] Monitoring is facilitated by sv status <service>, which displays the process ID (PID), uptime, and state of the supervised process, along with log service details if applicable.[22] These commands enable precise management of individual services without disrupting the overall supervision tree.
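A typical interaction with these commands might look like the following session; the service name and the exact status output are illustrative, since they depend on the local configuration.

```shell
# Start the service and wait up to 7 seconds for confirmation.
sv -v up sshd

# Query the supervisor's view of the service (and its logger, if any).
sv status sshd
# e.g.: run: sshd: (pid 1234) 56s; run: log: (pid 1230) 56s

# Run the optional ./check script to confirm readiness.
sv check sshd

# Stop it and suppress automatic restarts until 'sv up' is issued again.
sv down sshd
```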
Logging in runit is handled by svlogd, which captures output from services and stores it in /var/log/<service>/current for real-time inspection.[24] Administrators can tail logs using tail -f /var/log/<service>/current to monitor activity, while automatic rotation occurs when the log exceeds 1,000,000 bytes or after a configurable interval, archiving files as @<timestamp>.s and retaining up to 10 by default.[24]
Global control across multiple services is achieved by passing several service directories to a single command, such as sv exit /etc/service/* during system shutdown to terminate services and exit their supervisors; earlier releases shipped a separate runsvctrl tool for this purpose before its functionality was folded into sv.[30]
runit's monitoring emphasizes reliability through runsv, which automatically restarts a service upon exit of its ./run script, enforcing a 1-second delay to prevent rapid cycling in failure loops.[20] This simple backoff mechanism ensures continuous supervision without exponential delays, configurable via control interfaces for one-time runs or pauses if needed.[20]
Adoption
Use in Linux Distributions
Void Linux employs runit as its default init and service supervision system, leveraging the suite for both process initialization and ongoing daemon management.[31] The distribution integrates runit-rc, a set of scripts that handle boot-time tasks such as mounting filesystems and loading modules, providing a complete replacement for traditional SysV init scripting.[32] Services are managed through symlinks in /etc/runit/runsvdir/default, with distribution-specific configurations in /etc/sv ensuring compatibility with Void's XBPS package manager and musl or glibc libc variants.[33]
Artix Linux, a derivative of Arch Linux designed to avoid systemd, positions runit as a primary init alternative alongside options like OpenRC and s6.[34] It customizes runit with Arch-compatible service scripts, placing them in /etc/runit/sv for supervision, and integrates the init with Artix's pacman-based package system and existing bootloaders like GRUB.[35] This setup allows users to select runit during installation, enabling a lightweight, dependency-free boot process tailored to Artix's rolling-release model.[36]
In Devuan, a Debian fork emphasizing init freedom, runit is available as an optional init system package, installable via apt for users seeking alternatives to the default SysV init or OpenRC.[37] Community-maintained ports provide service definitions in /etc/sv, integrated with Devuan's deb packaging.[38] Slackware similarly offers runit as an optional installation through community-maintained SlackBuild scripts, allowing replacement of its SysV init without official inclusion in the core distribution.[39] Users configure services in /etc/sv and adapt boot scripts to align with Slackware's minimalistic rc structure.
Gentoo Linux provides runit as an optional init system via its official package repository, though it is not fully supported and requires additional configuration or third-party ebuilds for complete integration.[2] It can be used as PID 1 with kernel parameters and works alongside OpenRC for service management, appealing to users seeking lightweight alternatives in Gentoo's flexible environment.
Beyond full distributions, runit finds application in embedded systems and containerized environments due to its compact footprint (such as an 8.5k binary when compiled with dietlibc) and reliable handling of PID 1.[1] In containers, it serves as a lightweight supervisor for multi-process applications, often running as PID 1 in Docker images to manage daemons without the overhead of heavier inits.[27] These uses typically involve custom /etc/sv directories for environment-specific services and integration with host bootloaders or container orchestration tools. Runit's minimal design enables such straightforward adoption across these contexts.[1]
Comparisons with Other Init Systems
Runit differs from the traditional SysV init by incorporating built-in service supervision through its runsv process, which automatically restarts failed services, a feature absent in SysV's sequential script-based approach.[40] While SysV init relies on runlevels for managing system states and executes services in a linear fashion, runit employs a three-stage boot process (stage 1 for one-time system initialization, stage 2 for launching and supervising services, stage 3 for shutdown) that supports parallel service startup, leading to faster boot times without the need for complex runlevel transitions.[41] This supervision enhances reliability in handling failures, in contrast to SysV's manual intervention requirements, though runit forgoes SysV's native runlevel support in favor of customizable stages.[40]
Compared to systemd, runit prioritizes simplicity and minimalism, consisting of a small set of binaries (e.g., runit at around 330 lines of code) without dependencies on features like socket activation or cgroups integration that systemd provides for advanced resource management and on-demand service starting.[40] Systemd enables parallel service startup through dependency graphs and unit files, offering tighter integration with modern Linux kernels, whereas runit's dependency handling is manual and script-driven, reducing overhead but requiring explicit configuration for service ordering.[41] Runit's lightweight design avoids systemd's broader scope, which includes logging, device management, and network configuration, making it more portable across platforms but less feature-rich for complex environments.[32]
In relation to OpenRC, both systems are script-oriented and compatible with traditional init scripts, but runit places greater emphasis on process supervision via dedicated utilities like sv, while OpenRC focuses on dependency resolution through its service manager and runlevels.[41] OpenRC supports more granular control over service dependencies without built-in supervision, often pairing with external supervisors, whereas runit's integrated approach ensures clean process states and reliable restarts with minimal additional tools.[40] This makes runit suitable for environments valuing supervision over OpenRC's flexibility in dependency graphs.[41]
Runit's key advantages include rapid parallel booting and shutdown, with services starting concurrently for enhanced speed and reliability in failure recovery, as well as a small footprint that aids auditing and security.[40] However, its manual dependency management can complicate setups in systems requiring automated ordering, and it lacks the parallelism optimizations or ecosystem integration found in alternatives like systemd.[41] These traits position runit favorably in minimalistic setups or environments avoiding systemd's complexity, such as those prioritizing service supervision in resource-constrained or portable systems.[32]