Runlevel
A runlevel is an operating mode of a Unix-like operating system, particularly one employing the traditional SysV init system, that specifies the state of the system by controlling which services and daemons are started or stopped to achieve a particular operational environment.[1] In this context, runlevels enable administrators to transition the system between different states, such as single-user maintenance mode or full multi-user production mode, by executing predefined scripts that manage processes.[2] The SysV init system, originating in UNIX System V, defines runlevels through the /etc/inittab configuration file, which specifies the default runlevel and links to initialization scripts in directories like /etc/rc.d/rc[0-6].d/.[3] These scripts, prefixed with 'S' for starting services or 'K' for killing them, are executed in numerical order during transitions to ensure orderly service management.[2] Standard runlevels conventionally range from 0 to 6, with an additional single-user mode denoted as 'S', though exact behaviors vary by implementation.
Runlevels can be queried using the runlevel command, which reports the previous and current states, and changed via init or telinit commands, though caution is advised to prevent system instability.[4] In modern Linux distributions, the systemd init system has largely superseded SysV runlevels, replacing them with targets for parallelized service management while maintaining backward compatibility through symbolic links (e.g., multi-user.target for runlevel 3).[5] This evolution allows for more efficient booting and service control, with commands like systemctl recommended over legacy runlevel tools.[5]
Core Concepts
Definition
A runlevel is one of several distinct, predefined operating modes of a Unix-like operating system, particularly one using System V-style initialization, specifying the configuration of system services, processes, and resources active during boot or runtime changes.[6] This mode determines the overall operational state of the system by selectively enabling or disabling components to achieve specific functionality levels.[7] Key characteristics of runlevels include their control over essential system elements such as daemons for background tasks, mounted file systems for data access, and network services for connectivity, thereby defining states from basic maintenance to fully operational multi-user environments.[1] In this framework, runlevels represent a structured progression of system readiness, ensuring predictable behavior without requiring manual intervention for common operational shifts.[7] Within SysV-style init systems, runlevels are designated by numeric identifiers conventionally ranging from 0 to 6, each associated with directories of scripts that the init process executes to manage service states upon entering or exiting the mode.[1]
Purpose
Runlevels serve as predefined operating modes in Unix-like systems, enabling the init process to transition the system into specific states by starting or stopping designated services. This mechanism supports controlled boot sequences, where services are activated in a structured order to ensure system stability, as well as maintenance modes for targeted repairs and graceful shutdowns that halt operations systematically to prevent data loss. By allowing administrators to activate or deactivate groups of services without requiring a full system reboot, runlevels facilitate efficient resource management during runtime adjustments.[8]
The primary advantages of runlevels lie in their ability to simplify system administration by isolating service dependencies into discrete operational levels, permitting quick switches between configurations without manual intervention for each service. This approach enhances fault isolation during troubleshooting, as lower operational modes limit the active components, helping to pinpoint issues without interference from unnecessary processes. Runlevels also promote security by disabling nonessential services in restricted modes, reducing potential vulnerabilities during sensitive tasks like file system repairs. For instance, entering a minimal mode allows exclusive root access for diagnostics or upgrades, while standard modes enable full multi-user access with the necessary network capabilities.[9][8]
Common use cases include routine maintenance, such as switching to single-user mode for system repairs or backups, and transitioning to multi-user modes for everyday operations like server hosting or desktop usage. These transitions ensure orderly service management, supporting tasks from emergency recovery to optimized performance in production environments.
However, runlevels exhibit limitations in handling complex service dependencies, as they rely on static assignments to levels without fine-grained dynamic ordering, which can complicate management in intricate setups.[6][5]
History and Development
Origins in Unix System V
The concept of runlevels originated within the development of Unix System V at AT&T Bell Laboratories, where structured system initialization emerged to address the increasing complexity of Unix environments during the early 1980s. In System V Release 1 (SVR1), released in 1983, the init process was introduced as a central dispatcher for managing system startup and process control, utilizing the /etc/inittab file to define states and process behaviors. Runlevels were formalized at this stage as a numeric system (0 through 6), with the rstate field in /etc/inittab tying processes to specific runlevels and the init command supporting transitions via arguments such as init 2 for multi-user mode. This approach marked a departure from prior Unix versions, providing a modular framework for handling system resources amid growing demands for reliability in commercial deployments.[10] By System V Release 3 (SVR3) in 1987, runlevels were refined with improved integration for advanced features such as remote file sharing, where init 3 enabled multi-user mode with networking. AT&T Bell Labs engineers documented these runlevels in the SVR3 manuals, emphasizing their role in standardizing boot sequences and service management. The /etc/inittab file supported special actions for power failures and boot-time tasks, reflecting contributions from Bell Labs' systems programming team who integrated feedback from internal testing to enhance the init framework.[11] The primary motivation for introducing runlevels stemmed from the need to impose order on increasingly intricate Unix boot processes, as systems expanded to support networked and multi-user commercial applications. Unlike BSD's simpler /etc/rc scripts, which relied on sequential execution without discrete levels, System V adopted numeric runlevels to enable granular control over service activation, facilitating easier maintenance, troubleshooting, and scalability in diverse hardware environments. 
This design choice allowed administrators to switch states predictably, reducing downtime and errors in production settings. Early adoption of runlevels gained traction in commercial Unix variants by the late 1980s, with implementations appearing in SVR4 (1988), where they became integral to the init system's operation across AT&T-licensed platforms. Vendors such as Sun Microsystems and Hewlett-Packard incorporated SysV init features into their offerings, promoting widespread use in enterprise systems for consistent state management. By this period, runlevels had established themselves as a cornerstone of System V-style initialization, influencing subsequent Unix derivatives and solidifying their role in operational standardization.[12]
Standardization Efforts
Following the initial development of runlevels in Unix System V, standardization efforts sought to promote consistency across Unix-like systems by addressing init process behaviors, though runlevels themselves were not fully integrated into broader specifications. The POSIX.1 standard, published in 1988 by the IEEE, established foundational interfaces for process creation, management, and termination, designating the init process as process ID 1 and the ancestor of all other processes, but it did not define or require runlevel mechanisms, leaving them as a System V-specific feature.[13] Similarly, the Single UNIX Specification (SUS), starting with Version 1 in 1994 and evolving through subsequent editions, extended POSIX with additional utilities and real-time extensions while specifying init-related behaviors such as signal handling and process groups, yet explicitly omitted runlevels to maintain focus on portable application interfaces rather than system initialization modes. A key milestone in runlevel standardization came with the Linux Standard Base (LSB), an effort by the Free Standards Group (which later merged into the Linux Foundation) whose first specification was released in 2001 to enhance interoperability among Linux distributions.
The LSB specified runlevel semantics in its core specifications, defining runlevels 0 through 6 for use in init scripts, including conventions for default start and stop actions to ensure consistent service management across conforming systems.[14] This effort produced multiple versions, from LSB 1.0 in June 2001, which introduced binary compatibility guidelines including runlevel handling, to LSB 5.0 in 2015, which refined desktop and multimedia integrations while maintaining runlevel definitions for backward compatibility.[15] Other standards, such as the Unix 98 brand (aligned with SUS Version 2 in 1997), indirectly supported runlevel concepts by certifying systems for POSIX compliance and adding utilities like those for process control, but they did not mandate runlevels, allowing vendor-specific implementations to persist. Challenges to cross-platform consistency arose from proprietary extensions in commercial Unix variants, such as those from Sun Microsystems or HP, which modified init behaviors without aligning on runlevel numbering or transitions. These efforts achieved partial success within the Linux ecosystem, where LSB compliance facilitated software portability and reduced fragmentation among distributions, but adoption remained limited in non-Linux Unix-like systems, where alternative initialization frameworks and proprietary modifications hindered uniform runlevel usage.
Standard Runlevels
Descriptions of Runlevels 0-6
Runlevels 0 through 6 represent standardized operating states in Linux systems using Unix System V-style initialization, each configuring a specific set of services and processes to manage system behavior from shutdown to full operation. These runlevels are defined in the Linux Standard Base (LSB) as a means to ensure compatibility across compliant Linux distributions, drawing from traditional SysV init practices, though non-Linux SysV derivatives may vary in implementation.[16] The init process transitions between them by executing or terminating scripts in designated directories, such as /etc/rcX.d, where X is the runlevel number.[1] Runlevel 0 initiates a system halt, terminating all processes and powering off the machine after completing shutdown procedures. This state ensures a safe stop without rebooting, often used for maintenance or power conservation.[16][17] Runlevel 1, also denoted symbolically as 'S' or 's', enters single-user mode, providing root access with minimal services running, such as basic file system mounts but no networking or multi-user capabilities. This mode is ideal for administrative tasks like filesystem repairs, as it limits interference from other users or daemons.[16][18] In /etc/inittab configurations, the 'S' symbol triggers initialization scripts like /sbin/rcS for essential boot steps before full single-user entry.[18] Runlevel 2 supports multi-user mode without exporting network services, enabling local text-based logins and basic system operations but restricting remote access. This configuration runs multi-user terminal processes and daemons locally, suitable for environments where networking is unnecessary or disabled for security.[16][1] Runlevel 3 provides full multi-user mode with networking enabled, starting console operations, server daemons, and network interfaces for standard system use. 
This is the typical default for server environments, allowing multiple users, remote logins, and full resource availability over the network.[16][17] Runlevel 4 is user-definable and often left unused or configured for specialized setups, such as additional services beyond runlevel 3 without altering the standard multi-user state. By default, it mirrors runlevel 3 in many implementations, serving as a flexible extension for custom environments.[16][1] Runlevel 5 extends multi-user mode to include a graphical interface, typically via the X Window System or an equivalent display manager like xdm, common in desktop-oriented systems. This state activates graphical login screens and related services while maintaining full networking and multi-user support.[16][1] Runlevel 6 triggers a system reboot, shutting down processes and restarting the operating system after performing necessary cleanup. This state returns the system to its initial boot sequence, after which it enters the default runlevel defined by the initdefault entry in /etc/inittab.[16][17] Additionally, symbolic notations like 'N' indicate no previous runlevel, often displayed at initial boot when querying the current state via tools like runlevel(8). This helps distinguish first-time initialization from transitions.[19]
Special and Non-Standard Runlevels
In Unix-like systems, special runlevels extend beyond the standard numeric levels 0 through 6 to provide targeted environments for maintenance and recovery. One such runlevel is 'S' (or equivalently 's'), which represents single-user mode and is primarily used during the boot process or for transitioning to a minimal administrative state. This mode limits the system to essential services, mounting the root filesystem and providing a root shell without requiring a password in many cases, facilitating basic repairs or configuration changes.[20] SysV init also supports the on-demand runlevels 'a', 'b', and 'c'. Requesting one of these via telinit does not change the current runlevel; instead, init runs only those /etc/inittab entries whose runlevel field contains the corresponding letter, allowing selected processes to be launched on request without a full state transition. Additionally, 'Q' or 'q' signals init to re-examine and reload the /etc/inittab file without changing the current runlevel, useful for applying configuration changes dynamically.[20] Beyond these, non-standard runlevels such as 7, 8, and 9 are supported by the init process in SysV-style systems but lack formal documentation or universal definition, allowing vendors or administrators to define custom behaviors.
For instance, these levels may be employed in specialized setups for high-availability clustering or additional graphical interfaces, where runlevel 7 could enable specific failover services.[20] Vendor-specific deviations further adapt runlevel meanings; while many adhere to conventions like runlevel 3 for multi-user text mode and 5 for graphical, some systems repurpose level 4 for alternative console configurations or debugging.[21] These special and non-standard runlevels are essential in disaster recovery scenarios, enabling administrators to intervene when conventional operating states fail, such as after hardware issues or misconfigurations that halt normal booting. By providing isolated, controlled access, they minimize risks while maximizing repair flexibility.[5]
Operational Mechanisms
Init Process and Runlevel Management
The init daemon serves as the initial process (PID 1) on Unix-like systems implementing the System V (SysV) initialization scheme, acting as the parent of all other processes and responsible for bootstrapping the system by spawning and managing essential services based on the current runlevel.[20] Upon system startup, init reads the configuration file /etc/inittab to determine the default runlevel and initiate the appropriate processes, ensuring controlled transitions between operational states.[20][18] The /etc/inittab file defines the system's initialization behavior through entries in the syntax id:runlevels:action:process, where id is a unique identifier of up to two characters, runlevels specifies the applicable runlevels (e.g., 123 for levels 1 through 3), action dictates the execution behavior (such as initdefault to set the default runlevel, sysinit for boot-time scripts, wait to run once and wait for completion, or respawn to automatically restart the process if it terminates), and process is the command or script to execute.[20] For instance, an entry like si::sysinit:/sbin/autopush -f /etc/iu.ap runs STREAMS initialization during system startup in Solaris, while id:3:initdefault: establishes runlevel 3 as the default multi-user mode.[18] This structure allows init to handle both one-time boot tasks and ongoing process supervision, re-examining the file as needed for dynamic changes.[20]
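The entry format can be illustrated with a short, hypothetical /etc/inittab fragment (identifiers and script paths are examples and vary by system; lines beginning with # are comments):

```
# Set the default runlevel to 3 (multi-user with networking).
id:3:initdefault:
# Run system initialization scripts once at boot, before any runlevel.
si::sysinit:/etc/rc.d/rc.sysinit
# On entering runlevel 3, run its rc script and wait for it to finish.
l3:3:wait:/etc/rc.d/rc 3
# Keep a login getty running on tty1 in runlevels 2 through 5.
c1:2345:respawn:/sbin/getty 38400 tty1
```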
Runlevel management relies on a hierarchy of script directories: the actual service scripts reside in /etc/init.d/ (or equivalently /etc/rc.d/init.d/ in some implementations), containing executable shell scripts that define start, stop, and status actions for daemons and services.[1] For each runlevel N (0 through 6), the directory /etc/rcN.d/ holds symbolic links to these scripts, prefixed with S for start (e.g., S10network) to invoke the start action or K for kill (e.g., K20network) to stop the service, with the two-digit suffix (xx) determining execution order to respect dependencies—lower numbers run first.[1] When entering a new runlevel, init executes the relevant scripts in numerical sequence: first stopping services not needed in the target runlevel (via Kxx links from the previous level), then starting those required (via Sxx links in the new level).[1]
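The ordering convention can be sketched with a self-contained shell script that mimics how an rc script processes a runlevel directory (the directory, link names, and services here are hypothetical; a real /etc/rc also passes the previous runlevel and handles many more cases):

```shell
#!/bin/sh
# Build a throwaway "rcN.d" directory populated with fake K/S links.
RCDIR=$(mktemp -d)
for link in K20network S10syslog S20network; do
    # Each fake service script just appends its own name and argument to a log.
    printf '#!/bin/sh\necho "%s $1" >> "%s/order.log"\n' "$link" "$RCDIR" \
        > "$RCDIR/$link"
    chmod +x "$RCDIR/$link"
done

# Kill scripts run first, in numeric order, each invoked with "stop"...
for f in "$RCDIR"/K*; do [ -x "$f" ] && "$f" stop; done
# ...then start scripts, likewise in order, each invoked with "start".
for f in "$RCDIR"/S*; do [ -x "$f" ] && "$f" start; done

cat "$RCDIR/order.log"
```

Because glob expansion sorts lexicographically, S10syslog runs before S20network, which is exactly how the two-digit prefixes enforce dependency order.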
During the boot sequence, init first processes runlevel S (single-user mode) to perform essential tasks like mounting filesystems and running sysinit scripts, transitioning from this minimal state to the default runlevel—typically 3 for multi-user text mode or 5 for graphical mode—by sequentially executing the corresponding rcN.d scripts to activate networking, daemons, and other services in dependency order.[20][1] This ordered progression ensures system stability, with init monitoring and respawning critical processes as defined in inittab throughout operation.[18]
Changing Runlevels
Changing runlevels in System V Unix-like systems is typically accomplished using the init command, which signals the init process (PID 1) to transition the system to a specified runlevel by executing the appropriate rc scripts.[17] The syntax is init N, where N is a digit from 0 to 6 representing the target runlevel; for example, init 3 switches to multi-user mode with networking enabled.[17] This command requires root privileges, as only the superuser can alter the system's operational state.[21]
An alternative to init is the telinit command, which serves as a lightweight wrapper that sends signals directly to the running init process without forking a new one, making it more efficient for runtime changes.[22] Like init, it uses the syntax telinit N for numeric runlevels, but it also supports symbolic arguments such as S or s for single-user mode, allowing skips in certain process termination steps if needed.[22] In practice, telinit is often preferred over init for non-boot-time transitions due to its focused role in communicating runlevel requests.[21]
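Typical invocations look like the following (root privileges and a SysV-style init are assumed; the leading # denotes a root shell prompt, and these commands are shown for illustration only):

```
# init 3       (switch to full multi-user mode with networking)
# telinit s    (drop to single-user mode)
# telinit q    (reread /etc/inittab without changing the runlevel)
```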
During a runlevel change, the init process rereads the /etc/inittab configuration file to identify differences between the current and target runlevels, terminating services associated only with the current level by sending SIGTERM (followed by SIGKILL after a configurable grace period, defaulting to 3 seconds as of sysvinit 2.92) while preserving those common to both.[21] It then starts services specific to the new runlevel according to their defined actions (e.g., respawn for persistent daemons or once for one-time execution), potentially resulting in partial transitions for special cases like init 0 (halt), which focuses on shutdown without full multi-user startup.[21] The process updates the system's utmp and wtmp logs to record the previous and new runlevels.[21]
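The two-step termination sequence can be sketched in plain shell. This is an illustration of the signalling pattern, not init's actual code: the grace period is shortened to one second, and the child deliberately ignores SIGTERM so the SIGKILL fallback is exercised.

```shell
#!/bin/sh
# Spawn a stubborn child process that ignores the polite SIGTERM.
sh -c 'trap "" TERM; while :; do sleep 1; done' &
pid=$!

kill -TERM "$pid"          # polite request, as init sends first
sleep 1                    # grace period (init's real default is longer)
if kill -0 "$pid" 2>/dev/null; then
    kill -KILL "$pid"      # still alive: force termination
fi
wait "$pid" 2>/dev/null    # reap the child

if kill -0 "$pid" 2>/dev/null; then alive=yes; else alive=no; fi
echo "child alive: $alive"
```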
Safety considerations are critical when changing runlevels, particularly for abrupt shifts like 0 (halt) or 6 (reboot), which can lead to data loss if unsaved work or disk syncs are incomplete; administrators are advised to use the sync command beforehand and notify users via broadcasts.[23] For such changes, init automatically issues a warning message to all logged-in users using the wall mechanism, informing them of the impending shutdown or reboot to allow graceful logout.[23] Partial or emergency transitions, such as to single-user mode, should be tested in non-production environments to avoid service disruptions.[1]
To monitor the current runlevel, the runlevel command can be used, which outputs the previous and current runlevels separated by a space (e.g., "N 3" if the previous is unknown), drawing from the utmp file.[24] Alternatively, who -r displays the current runlevel along with the time it was entered, providing a quick query of the operational state without requiring additional tools.
Implementations in Linux
Linux Standard Base (LSB)
The Linux Standard Base (LSB) provides a set of specifications aimed at ensuring consistency in system initialization and runlevel management across Linux distributions, promoting portability of applications and scripts. The core specifications began with LSB 1.0 in June 2001, which introduced standards for runlevels and init scripts, and evolved through versions 1.1 to 1.3 (2002), 2.0 to 2.1 (2004–2005), 3.0 to 3.2 (2005–2008), 4.0 to 4.1 (2009–2011), culminating in LSB 5.0 released on June 3, 2015, as the final modular iteration.[15] These versions define runlevel behaviors by standardizing the states of the system (runlevels 0 through 6) and the mechanisms for transitioning between them, allowing compiled applications and installation scripts to operate reliably without distribution-specific adaptations.[25] LSB requirements for runlevels focus on defining expected system states and enforcing conventions for service management through init scripts. For instance, runlevel 2 supports multiuser mode without exported network services, while runlevel 3 enables full multiuser operation with networking, and runlevel 5 adds a display manager; services like cron are typically configured to start in runlevels 2–5, and networking facilities in 3–5, using standardized dependencies in script headers.[25] Init scripts must adhere to comment conventions, including keywords such as Default-Start and Default-Stop to specify runlevels (e.g., Default-Start: 2 3 4 5 for a service active in multiuser modes), along with Required-Start for dependencies like $network or $local_fs, ensuring ordered execution during boot or shutdown.[26] These standards apply to start/stop actions, with optional support for reload and status, promoting reliable service lifecycle management.
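A minimal LSB-style init script for a hypothetical daemon named exampled might look as follows. The INIT INFO block follows the comment conventions described above; the body is a bare sketch that defaults to the start action so the fragment can run standalone, whereas a real script would launch and stop the actual daemon.

```shell
#!/bin/sh
### BEGIN INIT INFO
# Provides:          exampled
# Required-Start:    $local_fs $network
# Required-Stop:     $local_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Hypothetical example daemon
### END INIT INFO

ACTION="${1:-start}"   # real scripts receive the action as their first argument
case "$ACTION" in
    start) MSG="Starting exampled" ;;
    stop)  MSG="Stopping exampled" ;;
    *)     MSG="Usage: $0 {start|stop}" ;;
esac
echo "$MSG"
```

Tools such as /usr/lib/lsb/install_initd read the Default-Start and Required-Start keywords to place correctly ordered S/K links in the rcN.d directories.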
Conformance to LSB runlevel specifications requires implementations to support the /sbin/init process, which reads /etc/inittab to control system states and runlevel changes, and to maintain directory structures for script execution. Init scripts are placed in /etc/init.d, with activation via tools like /usr/lib/lsb/install_initd that create symbolic links in /etc/rcN.d directories (where N denotes the runlevel, e.g., /etc/rc3.d for runlevel 3), using Snn (start) and Knn (kill) prefixes for ordering.[27] Distributions historically achieved certification through the Linux Foundation's testing infrastructure, including the LSB-Core test suite, which validated init script handling, runlevel transitions, and facility dependencies to ensure interoperability.[28] As of 2025, LSB and traditional runlevels are largely legacy, with most distributions using systemd for init management while providing backward compatibility with SysV init scripts.[29]
The LSB's runlevel standards significantly enhanced binary compatibility and script portability across Linux distributions by providing a common framework for system initialization, reducing vendor-specific variations in service startup.[30] However, by the 2020s, as modern init systems like systemd gained dominance, explicit runlevel support was deprecated in favor of target units, though compatibility layers maintain LSB init script execution for legacy purposes.[29]
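systemd's compatibility layer exposes the old numbering as alias targets; the conventional correspondence, as documented for the runlevelN.target units, is:

```
runlevel0.target -> poweroff.target
runlevel1.target -> rescue.target
runlevel2.target -> multi-user.target
runlevel3.target -> multi-user.target
runlevel4.target -> multi-user.target
runlevel5.target -> graphical.target
runlevel6.target -> reboot.target
```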
Variations Across Distributions
Slackware employs a straightforward configuration for runlevels through its /etc/inittab file, which defines the system's initial state and transitions.[31] The default runlevel is 3, providing multi-user mode with a text-based console login and full network support, while runlevel 2 offers multi-user functionality with networking services, similar to runlevel 3.[31] For graphical environments in runlevel 4, users must manually configure the X Window System, often by editing /etc/inittab to set id:4:initdefault: and ensuring a display manager is installed, as no automatic graphical boot is provided by default.[31] Slackware does not achieve full compliance with the Linux Standard Base (LSB) for runlevel management, prioritizing simplicity over standardized scripting conventions.
In Gentoo, runlevels are highly customizable through the /etc/runlevels/ directory structure, where subdirectories like default, boot, and sysinit hold symbolic links to init scripts for specific operational states.[32] The OpenRC init system, Gentoo's default, enables profile-based definitions of runlevels, allowing users to tailor service dependencies and boot sequences during installation via profiles that influence which services link to runlevels.[33] This approach supports hybrid setups combining traditional SysV-style runlevels with optional systemd integration, offering flexibility for embedded or specialized systems without strict adherence to numeric runlevel norms.[33]
Debian traditionally relies on sysvinit for runlevel handling in pre-systemd installations, with runlevel 2 serving as the default for multi-user mode, enabling console logins, networking, and most services but excluding graphical interfaces.[34] Graphical desktops, if installed, start via services enabled in this runlevel rather than a dedicated one. The file-rc package provides an alternative to traditional rc scripts, managing runlevel transitions through a dependency-based /etc/runlevel.conf file that links services to runlevels without numeric script directories.[35] Debian maintains partial LSB compliance for init scripts but deviates in default runlevel usage to consolidate multi-user operations under 2.[36]
Distributions like Red Hat Enterprise Linux (RHEL), prior to adopting systemd, designated runlevel 5 for graphical multi-user mode, automatically launching a display manager such as GDM for X sessions upon boot.[37] Service management in these runlevels utilized the chkconfig tool to create symbolic links in /etc/rc.d/rc[runlevel].d/ directories, enabling or disabling scripts across levels like 3 (text multi-user) and 5. RHEL achieved LSB certification for core runlevel features in versions up to 7, though later releases phased out full compliance.[38]
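On these systems, per-runlevel service assignments were typically managed with chkconfig; for example (the leading # denotes a root shell prompt, and httpd is used as an illustrative service):

```
# chkconfig --level 35 httpd on    (enable httpd in runlevels 3 and 5)
# chkconfig --list httpd           (show its per-runlevel on/off state)
```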
Common patterns across distributions include tailoring defaults for server versus desktop use; for instance, Ubuntu, as a Debian derivative, defaulted to runlevel 2 pre-systemd for both server (text-only multi-user) and desktop variants, with graphical sessions initiated through services like the display manager rather than a separate runlevel.[39] These variations, while diverging from LSB ideals of uniform runlevel 3 for text multi-user and 5 for graphical, reflect optimizations for specific use cases like resource efficiency in servers.[40]
Implementations in Other Unix-like Systems
System V Derivatives
In UNIX System V Release 3 (SVR3), released in 1987, runlevels defined distinct operating states of the system, managed through the /etc/inittab configuration file.[41] This file specified the initial default runlevel and controlled process spawning based on a colon-separated format of identifier, runlevel(s), action, and process command, enabling the init process (PID 1) to initialize the system and transition between states.[41] The basic runlevels 0 through 6 were defined with a focus on console operations: runlevel 0 for halting the system, 1 (or s/S) for single-user mode suitable for administrative tasks with limited file systems mounted, 2 for multiuser mode without remote file sharing, 3 for multiuser mode with remote file sharing enabled, 4 as a user-definable state, 5 for shutdown or power-off, and 6 for rebooting.[41] Process spawning relied on actions like respawn to automatically restart essential services such as getty for console logins, ensuring continuous availability on the system console.[41]
In UNIX System V Release 4 (SVR4), released in 1988, runlevel management was enhanced with a more structured scripting approach using directories like /etc/rcN.d (where N denotes the runlevel), containing symbolic links to initialization scripts in /etc/init.d prefixed with S (start) or K (kill) for ordered execution during transitions.[42] These scripts allowed for finer control over service startup and shutdown, improving upon SVR3's reliance on direct inittab entries by enabling lexicographical ordering of operations.[42] SVR4 also introduced support for the X Window System as an optional component.[42] Additionally, SVR4 standardized shutdown procedures through the shutdown command and associated K-prefixed scripts, ensuring orderly termination of services before halting or rebooting the system.[42]
General features of runlevels in System V derivatives centered on the init process's reliance on /etc/inittab for spawning and respawning processes tailored to the current runlevel, with transitions initiated via the init command (e.g., init 2 to enter multiuser mode).[41] This allowed seamless state changes, such as mounting file systems or starting network services, while maintaining console-centric operations for essential tasks.[41]
By the 2010s, pure System V runlevel implementations had largely been phased out in favor of vendor-specific operating systems and modern init systems, though they remained the basis for compatibility modes in distributions supporting legacy SysVinit scripts.[43]
Solaris and illumos
In earlier versions of Solaris, including up to Solaris 10 released in 2005, the operating system employed the traditional System V (SysV) init mechanism to manage runlevels 0 through 6. These runlevels were controlled via scripts located in directories such as /etc/rc0.d through /etc/rc6.d, where files prefixed with S initiated services and those prefixed with K terminated them during state transitions.[44][45]
Solaris 10 introduced the Service Management Facility (SMF), a dependency-based framework that largely supplanted traditional runlevels with milestones to provide more granular and reliable service management. Milestones represent system states analogous to runlevels, such as milestone/single-user (corresponding to runlevel S for single-user mode), milestone/multi-user (runlevel 2 for multiuser without networking resources), and milestone/multi-user-server (runlevel 3 for full multiuser with shared networking resources). Transitions between these states are handled by the svcadm command, though the init command remains available for compatibility, indirectly affecting milestones.[46][47]
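Milestone transitions are performed with svcadm; for example (the leading # denotes a root shell prompt, and these commands are shown for illustration only):

```
# svcadm milestone milestone/single-user   (drop to the single-user milestone)
# svcadm milestone all                     (restore all enabled services)
# svcs -d milestone/multi-user             (list services the milestone depends on)
```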
Illumos, an open-source fork of OpenSolaris initiated after Oracle's acquisition in 2010, inherits and preserves the SMF architecture from Solaris 10 and later. It emulates traditional runlevels as SMF states while providing a compatibility layer for legacy SysV scripts, allowing them to operate as non-SMF-managed "legacy-run" instances without full dependency resolution. This setup ensures backward compatibility for older applications while encouraging migration to native SMF services.[48][49]
A distinctive feature in Solaris and illumos SPARC systems is the handling of runlevel 0, which halts the operating system and transfers control to the OpenBoot PROM (OBP) firmware, displaying the ok prompt for low-level diagnostics and boot configuration. Additionally, runlevel 3 supports graphical desktop environments, such as the Common Desktop Environment (CDE) or GNOME, when relevant services are enabled, differing from conventions in other Unix-like systems where a separate runlevel is typically designated for graphical modes.[50][51]
HP-UX and AIX
HP-UX implements runlevels based on the System V (SysV) initialization model, where the /sbin/init process reads the /etc/inittab file to determine the default runlevel and manage transitions between system states.[52] The standard runlevels include 0 for system halt, s or S for single-user mode with the root filesystem mounted, 1 for single-user mode with all filesystems mounted and syncer running, 2 for basic multiuser mode allowing multiple logins, and 3 for multiuser mode with NFS exports and web-based administration tools like the HP System Management Homepage (HP SMH).[52] Runlevel changes invoke /sbin/rc, which executes start (S-prefixed) or kill (K-prefixed) scripts in directories like /sbin/rc2.d or /sbin/rc3.d, enabling controlled activation or deactivation of services.[52] The System Administration Manager (SAM), a graphical interface for system configuration, supports tasks such as kernel tuning and peripheral management but does not directly alter runlevel definitions; in HP-UX 11i v3 and later, SAM has been largely supplanted by the web-based HP SMH for routine administration.[52]
HP-UX supports trusted mode, a C2-level security configuration that enforces stricter access controls, stores authentication data in /tcb/files/auth instead of /etc/passwd, and includes features such as compartments, role-based access control (RBAC), and auditing. This mode is enabled system-wide via tools like SAM or command-line utilities and operates across runlevels rather than being tied to a specific one such as 4; runlevel 4 remains available for user-defined multiuser extensions with features such as pseudo-TTYs and volume management.[52] In HP-UX 11i releases from the 2000s, enhancements to scripting in /etc/inittab and the rc directories improved flexibility for enterprise environments, allowing better integration with virtual partitions (vPars) for dynamic resource allocation without full runlevel shifts.[53]
AIX employs a SysV-compatible runlevel system defined in /etc/inittab, where the init process spawns subsystems and executes rc scripts in /etc/rc.d/rcX.d directories (X being the runlevel) to transition states, with the default often set to 2 via the initdefault entry.[54] Runlevel 2 provides base multiuser mode with networking enabled but without advanced services like NFS, while runlevel 3 extends this to include networking plus additional daemons for file sharing and administrative tasks; higher levels (4-9) are user-definable, and special modes like a, b, or c start processes without terminating existing ones.[54] The System Resource Controller (SRC), integrated alongside init, manages subsystem lifecycle through commands like startsrc, stopsrc, and lssrc, enabling granular control over daemons without requiring full runlevel changes.[55]
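The SRC commands mentioned above follow a consistent pattern; the session below is illustrative, with sshd standing in for any SRC-managed subsystem:

```
lssrc -s sshd      # show the status of one subsystem
stopsrc -s sshd    # stop it without changing the runlevel
startsrc -s sshd   # start it again
lssrc -a           # list all subsystems and their states
```

Because SRC addresses individual subsystems (and groups of them via -g), administrators can cycle a single daemon where a pure SysV system would otherwise rely on rc scripts or a runlevel transition.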
Both HP-UX and AIX prioritize enterprise reliability through SysV foundations, supporting dynamic reconfiguration—such as AIX's concurrent kernel updates via Dynamic Logical Partitioning (DLPAR) or HP-UX's vPars for resource shifts—while minimizing downtime compared to rigid runlevel reboots.[56] In AIX 7 (released 2010), legacy runlevel support persists via /etc/inittab and SRC, with integrations like PowerSC for security compliance ensuring hardened configurations across states without altering core mechanisms.[55]
BSD Systems
In BSD variants such as FreeBSD, NetBSD, OpenBSD, and DragonFly BSD, there are no native numeric runlevels analogous to those in System V Unix systems; instead, system initialization and service management rely on a hierarchical set of shell scripts executed by the rc(8) utility.[57][58] The primary configuration file, /etc/rc.conf, along with defaults in /etc/defaults/rc.conf, determines which services start during boot, with overrides for local customization.[59] This approach emphasizes simplicity, using dependency declarations within scripts to order execution via tools like rcorder(8) in NetBSD.[58]

Boot modes in BSD are controlled through kernel flags or commands rather than persistent runlevel states. For instance, single-user mode, which provides a root shell for maintenance tasks without full service startup, is entered by passing the -s flag at the boot loader or issuing shutdown now from multi-user mode.[57][60] Multi-user mode, the default operational state, mounts filesystems, configures networking, and launches daemons via scripts in the /etc/rc.d directory, which support standard actions like start, stop, and restart.[61] These scripts are invoked sequentially during startup, with no concept of discrete levels like SysV's 3 (multi-user text) or 5 (multi-user graphical).[62]

To accommodate System V compatibility, FreeBSD and NetBSD include emulation for tools like the runlevel command, which simulates output by mapping multi-user mode to a conventional value such as 3, though this does not alter the underlying BSD init behavior.[63] Runlevel-like functionality can be approximated through rc.conf variables that conditionally enable services, such as setting graphical boot options or limiting the system to console mode, but these are boot-time settings rather than switchable states.[61][58] Modern BSD implementations preserve this script-driven model without adopting formal standards like the Linux Standard Base.
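For example, a FreeBSD /etc/rc.conf enables services with simple variable assignments; the services shown here are illustrative:

```
# /etc/rc.conf -- services enabled at boot
sshd_enable="YES"
ntpd_enable="YES"

# Individual rc.d scripts can then be driven manually, e.g.:
#   service sshd restart
```

Each variable corresponds to an rc.d script that checks it at boot, so "enabling a service" is a persistent configuration edit rather than a change of runlevel.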
OpenBSD uses rcctl(8) for service control alongside /etc/rc.d scripts, maintaining a minimalistic structure focused on security and reliability.[62] Similarly, DragonFly BSD employs /etc/rc scripts for filesystem mounting and daemon startup, emphasizing efficiency in its rc.conf-based configuration without runlevel abstractions.[60] This contrasts with SysV's structured levels by prioritizing flexible, dependency-resolved scripting over predefined modes.[57]
Transition to Modern Init Systems
Deprecation of Runlevels
Traditional runlevels reached their peak usage in the 1990s and 2000s, forming the cornerstone of system initialization in most Unix-like operating systems, including Linux distributions and System V derivatives.[64] Their decline accelerated after 2010 with the emergence of modern init systems that offered more flexible service management, culminating in explicit declarations of obsolescence by 2014 and complete removal of support in systemd version 258 by 2025.[65][66] By 2025, only a small number of Linux servers rely on pure SysV init with traditional runlevels, reflecting the widespread shift to alternatives amid the dominance of cloud-native architectures.[67]

The primary reasons for this deprecation include the inherent rigidity of runlevels, which enforce coarse-grained, sequential service activation without robust dependency resolution, leading to brittle configurations in complex environments.[68] This limitation hampers scalability in cloud and containerized setups, where parallel bootstrapping and on-demand service orchestration are essential for efficient resource utilization and rapid deployment.[69] Furthermore, legacy SysV init scripts introduce security vulnerabilities through outdated scripting practices prone to injection flaws, privilege escalations, and unpatched exploits if not rigorously audited.[70][71]

Despite the broad transition, runlevels persist in niche applications such as legacy enterprise systems, embedded devices with resource constraints, and minimal installations prioritizing simplicity over advanced features.[67] For instance, Slackware continues to default to a SysV-style init system, appealing to users seeking a lightweight, traditional boot process.[31] In non-Linux Unix-like systems, vendors like IBM maintain runlevel support in AIX for backward compatibility with long-standing enterprise workloads.
The vast majority of Linux distributions have adopted systemd or similar modern init systems, underscoring the marginal role of pure runlevel-based setups in contemporary deployments. This shift ensures better alignment with current demands for reliability, performance, and security in diverse computing paradigms.
Systemd Targets and Compatibility
Systemd introduces targets as the modern successor to traditional runlevels, providing a more flexible and dependency-aware mechanism for managing system states. Key targets include multi-user.target, which corresponds to runlevel 3 and enables a non-graphical multi-user environment with network services active, and graphical.target, which maps to runlevel 5 and adds a graphical login interface on top of multi-user.target.[72] These targets are defined as special unit files, typically located in /usr/lib/systemd/system/ or /lib/systemd/system/, depending on the distribution, and serve as synchronization points for pulling in related services during boot.
To ensure backward compatibility with SysV-style systems, systemd creates symbolic links such as runlevel3.target pointing to multi-user.target and runlevel5.target pointing to graphical.target, allowing legacy scripts and tools to function without modification.[73] Administrators can emulate traditional runlevel changes using commands like systemctl isolate runlevel3.target to switch to a multi-user mode, mirroring the behavior of the old init 3 command, while kernel command-line parameters (e.g., 3 or 5) automatically map to the appropriate targets.[69]
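In practice, the systemd equivalents of common runlevel operations look like the following illustrative session:

```
# Show the default target (analogous to the initdefault entry in /etc/inittab):
systemctl get-default

# Switch the running system to non-graphical multi-user mode (old init 3):
systemctl isolate multi-user.target

# Make the graphical target the default for future boots (old runlevel 5):
systemctl set-default graphical.target

# The legacy query tool still works through the compatibility layer:
runlevel
```

The isolate verb stops every unit not required by the named target before starting the rest, which is the closest systemd analogue to a SysV runlevel transition.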
Systemd became the default init system in Fedora starting with version 15 in May 2011, and in Ubuntu with version 15.04 in April 2015, marking a shift in major distributions.[74][75] By 2020, nearly all prominent Linux distributions, including Debian, Red Hat Enterprise Linux, and openSUSE, had fully transitioned to systemd as the standard, driven by its integration and performance improvements.[76]
Compared to runlevels, systemd targets offer advantages through parallel service activation, where independent units start concurrently to reduce boot times, and sophisticated dependency graphs that resolve service relationships automatically for ordered execution.[77] Additionally, socket activation enables on-demand service startup by listening on sockets and launching daemons only when connections arrive, optimizing resource usage over the always-on model of traditional runlevels.[78]
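A socket-activated service is declared with a pair of unit files; the sketch below is hypothetical (the unit name demo, the port, and the daemon path are placeholders), showing only the essential directives:

```
# demo.socket -- systemd listens on the port and starts the service on demand
[Unit]
Description=Demo socket

[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target
```

```
# demo.service -- started only when a connection arrives on demo.socket
[Unit]
Description=Demo service

[Service]
ExecStart=/usr/local/bin/demo-daemon
```

Until the first connection arrives, only the listening socket consumes resources; this on-demand model has no counterpart in runlevel-based initialization, where every service of the active level runs continuously.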
As transitional alternatives, Upstart served as an event-based init system in distributions like Ubuntu before being superseded by systemd around 2015.[79] OpenRC, used primarily in Gentoo, functions as a dependency-based hybrid that maintains compatibility with SysVinit while supporting parallel startups.[33]