init
In Unix-like operating systems, init (short for "initialization") is the first userspace process launched by the kernel during system boot, assigned process ID (PID) 1, and serving as the ancestor of all subsequent processes.[1] It reads configuration from files such as /etc/inittab to spawn essential services, manage system runlevels, and handle events like power failures or process terminations.[1]
Originating in early Unix implementations from Bell Labs in the 1970s, init evolved into the standardized System V (SysV) init in AT&T UNIX during the 1980s, which introduced runlevels—a mechanism to transition the system through operational states from single-user mode (runlevel 1) to full multi-user graphical environments (typically runlevels 2–5).[2] This SysV model, relying on sequential shell scripts in /etc/init.d/ for service management, became the foundation for Linux distributions but faced limitations in handling parallel startups, dependency resolution, and dynamic hardware like USB devices.[2]
By the mid-2000s, alternatives emerged to address these shortcomings: Upstart (introduced around 2006 by Canonical) adopted an event-driven approach for better parallelism, powering Ubuntu from version 6.10 and Fedora 9–14.[2] In 2010, systemd was developed by Lennart Poettering and Kay Sievers at Red Hat, emphasizing socket activation, on-demand service loading, and integrated logging via Journald; it gained widespread adoption starting with Red Hat Enterprise Linux 7 (2014) and Debian 8 (2015), and by 2025 serves as the default init system in most major Linux distributions due to its efficiency in modern, containerized, and cloud environments.[2][3] Other variants, such as OpenRC and runit, persist in lightweight or embedded systems for their simplicity and modularity.[2]
Key functions of init across implementations include re-executing itself in place so the binary can be upgraded without a reboot (requested via telinit u in SysV), supervising child processes to prevent boot hangs, and facilitating shutdown or reboot by sending termination signals (SIGTERM followed by SIGKILL) to processes not defined for the target runlevel during runlevel changes.[1] In contemporary systemd, these are extended with cgroups for resource control, D-Bus integration for inter-process communication, and targets replacing traditional runlevels for more flexible state management.[2]
Fundamentals
Role and Responsibilities
In Unix-like operating systems, the init process is the first user-space program executed by the kernel after the boot sequence completes, assigned process ID 1 (PID 1) to mark its foundational status.[4] This initiation occurs when the kernel invokes an executable such as /sbin/init via the execve system call, establishing the bridge between kernel-mode operations and user-space execution.[4] As PID 1, init assumes the role of the ultimate ancestor for all subsequent processes, directly or indirectly forking them into existence and serving as their adoptive parent when original parents terminate.[5] This hierarchical structure ensures that every daemon, shell, and user session traces its lineage back to init, maintaining process tree integrity across the system.[5]
The primary responsibilities of init encompass critical system lifecycle management. It reaps orphaned child processes (those left in a zombie state after their parent exits without invoking wait) by calling waitpid to collect their exit statuses and free their process-table entries, preventing resource leaks and process table exhaustion.[4] Additionally, init oversees the startup of essential system services, such as mounting filesystems and launching background daemons, while facilitating the transition from single-user kernel-controlled mode to a fully operational multi-user environment.[5] During shutdown or reboot, init coordinates the graceful termination of services, unmounting filesystems, and signaling the kernel to halt or restart hardware, ensuring data integrity and orderly power-off.[5]
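The reaping mechanism amounts to a small loop around waitpid. The following C fragment is a minimal sketch of the idea, not the code of any actual init implementation:

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <stdio.h>

/* Collect the exit status of every terminated child, including
 * orphans the kernel has reparented to PID 1, so that their
 * process-table entries can be released. */
static void reap_children(void) {
    int status;
    pid_t pid;

    /* WNOHANG keeps the call non-blocking, letting init interleave
     * reaping with its other duties. */
    while ((pid = waitpid(-1, &status, WNOHANG)) > 0) {
        if (WIFEXITED(status))
            printf("reaped pid %ld (exit %d)\n", (long)pid, WEXITSTATUS(status));
        else if (WIFSIGNALED(status))
            printf("reaped pid %ld (signal %d)\n", (long)pid, WTERMSIG(status));
    }
}
```

In a real init, a loop like this typically runs whenever SIGCHLD is delivered or on each pass through the main supervision loop.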
Failure of the init process carries severe implications, as it is irreplaceable in the process hierarchy. If init cannot be launched—due to a missing or corrupted binary specified by kernel parameters like init=—the boot process stalls, often resulting in a kernel panic with messages indicating no working init was found, leading to an unrecoverable system hang.[6] Similarly, if the running init (PID 1) terminates unexpectedly, the kernel detects the attempt to kill it and triggers a panic, syncing filesystems if possible before halting, as no alternative process can adopt orphans or manage services; recovery requires manual hardware intervention, such as resetting the system.[7] This design underscores init's indispensable nature, where its absence equates to total system failure without built-in failover mechanisms.[7]
Boot Integration
The boot process of a Unix-like system culminates in the kernel handing off control to the init process after completing its initialization tasks. Following hardware detection, memory setup, and mounting the root filesystem—often facilitated by an initial RAM filesystem (initramfs)—the kernel executes the program specified by the init= kernel command-line parameter, typically /sbin/init. This execution creates the first user-space process with process ID 1 (PID 1), marking the transition from kernel mode to user mode and establishing the foundation for all subsequent user-space activities.
Upon startup, init begins in a minimal environment: on Linux, kernel command-line parameters of the form key=value that the kernel does not consume itself are passed to init as environment variables, and anything after the -- delimiter is passed to it as arguments. Its initial actions include parsing configuration files to determine system setup, forking and executing essential background processes (daemons), mounting additional filesystems beyond the root, and configuring system-wide environment variables to prepare the runtime context. These steps ensure the operating system progresses from a bare kernel state to a functional user environment, with init overseeing the launch of core services like those for logging and networking, though detailed service management occurs later in the boot sequence.[8]
A key role of init in process lifecycle management is serving as the adoptive parent for orphaned processes—those whose original parent has terminated without reaping them. When a process becomes orphaned, the kernel reassigns its parent to PID 1, and init periodically invokes the wait() system call to detect and reap any resulting zombie (defunct) children, thereby freeing their process table entries and preventing resource leaks. This mechanism maintains system hygiene by automatically cleaning up terminated processes that would otherwise accumulate.
Unlike typical user processes, init operates under special protections to safeguard system stability: it ignores the SIGKILL signal, rendering attempts to terminate it via kill -9 ineffective, as the kernel ensures PID 1 only responds to signals for which it has installed explicit handlers. Additionally, init launches with root privileges but without a controlling terminal, running in a non-interactive context that emphasizes its role as the root ancestor of all processes rather than an interactive application.[9]
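This behavior follows from how a PID 1 program is written: it installs handlers only for the signals it wants to act on, and the kernel suppresses the default, process-killing disposition of everything else (SIGKILL can never be caught at all, so kill -9 1 is simply discarded). A minimal sketch in C, assuming a hypothetical init that opts in to SIGTERM only:

```c
#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t shutdown_requested = 0;

/* Handler for SIGTERM: merely records that a shutdown was requested.
 * Signals with no installed handler have no effect on PID 1, because
 * the kernel never applies their fatal default action to init. */
static void on_term(int sig) {
    (void)sig;
    shutdown_requested = 1;
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_term;
    sigaction(SIGTERM, &sa, NULL);   /* explicitly opt in to SIGTERM */

    while (!shutdown_requested)
        pause();   /* a real init would also reap children and
                      supervise services in this loop */

    /* ...an orderly shutdown would proceed here... */
    return 0;
}
```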
BSD-Style Init
Origins and Evolution
The BSD-style init system traces its roots to the Research Unix operating system developed at AT&T Bell Labs during the 1970s, where the init process emerged as the first user-space program (PID 1) responsible for launching essential shell scripts to initialize the system environment.[10] This foundational design first appeared in Version 6 Unix, released in 1975, as a straightforward shell script launcher that forked processes like getty for terminal management and executed basic startup routines without complex state tracking. The approach emphasized minimalism, allowing init to respawn failed processes and transition the system from kernel boot to a functional multi-user state through simple scripting.[11]
Key evolutionary milestones refined this model within the Berkeley Software Distribution (BSD) lineage. In 4.1BSD, released in June 1981, the /etc/rc script was formalized as a central, monolithic shell script executed by init to handle system configuration tasks, such as mounting file systems from /etc/fstab, configuring terminals via /etc/ttys, and activating swap space with swapon.[12] This structure provided a reliable, sequential boot sequence that prioritized deterministic execution over parallelization, enabling site-specific customizations through an accompanying /etc/rc.local file. By 4.3BSD in 1986, enhancements to init supported emerging windowing systems, permitting the spawning of arbitrary programs beyond traditional getty processes and introducing a dedicated window field in configuration files to initialize graphical terminals and display managers.[13]
A significant advancement occurred in NetBSD 1.5, released in December 2000, which replaced the monolithic /etc/rc with a modular /etc/rc.d directory containing individual scripts for each service, each annotated with dependency keywords like PROVIDE, REQUIRE, and BEFORE.[14] The rcorder utility then dynamically ordered and executed these scripts during startup, introducing dependency-aware sequencing while preserving the core sequential philosophy and avoiding the runlevel abstractions seen in System V init.[15]
At its core, the BSD-style init embodies a design philosophy of sequential, non-state-based startup that favors reliability and simplicity, eschewing intricate state machines or parallel processing in favor of predictable, linear execution to minimize failure points and ensure robust system bootstrapping.[16] This emphasis on straightforward scripting and fault-tolerant respawning has sustained its adoption as the default init system in contemporary BSD derivatives, including FreeBSD, NetBSD, and OpenBSD, where it continues to drive boot processes with minimal overhead. Its influence extends to select Linux distributions, notably Slackware, which employs a BSD-style initialization layout with /etc/rc.d scripts for enhanced maintainability.[17]
Boot Sequence and Configuration
In the BSD-style init system, the boot sequence begins after the kernel loads the root filesystem and executes init(8) as the first user process with process ID 1.[18] Init then invokes the /etc/rc script serially to perform system initialization, sourcing configuration from /etc/rc.conf to set parameters such as the local hostname, network interface details, and flags for enabling services like daemons and networking.[19] The /etc/rc script uses the rcorder(8) utility to determine the execution order of modular scripts in /etc/rc.d based on their declared dependencies, ensuring sequential startup of essential components such as network configuration and daemon processes.[20]
Key configuration files guide this process, with /etc/ttys defining terminal lines for login sessions by specifying devices and getty(8) types.[21] During multi-user boot, init forks instances of getty based on active entries in /etc/ttys, enabling user logins on physical serial ports and virtual consoles (e.g., ttyv0 through ttyv9 for console switching via Ctrl+Alt+F1–F10). Meanwhile, /etc/rc.d houses independent scripts for services, each supporting standard actions like start, stop, and restart via a unified interface, along with dependency keywords such as REQUIRE: networking or PROVIDE: foo to enforce ordering; for instance, a script might require networking to be available before attempting to bind a network daemon.[20]
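Because the shared start/stop logic lives in rc.subr, an individual rc.d script is typically just an rcorder header plus a few variable assignments. The sketch below is illustrative only: the daemon name food, its installation path, and the food_enable variable are hypothetical.

```sh
#!/bin/sh
# Hypothetical /etc/rc.d/food script (daemon name and paths are placeholders).

# PROVIDE: food
# REQUIRE: NETWORKING
# KEYWORD: shutdown

. /etc/rc.subr            # shared helper functions for rc.d scripts

name="food"
rcvar="food_enable"       # honored only if food_enable="YES" in /etc/rc.conf
command="/usr/local/sbin/${name}"

load_rc_config $name      # pull settings from the rc.conf files
run_rc_command "$1"       # dispatch start/stop/restart/status
```

With food_enable="YES" in /etc/rc.conf, such a script participates in the rcorder(8) sequence at boot and can be driven manually with service food start or service food stop.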
The shutdown process mirrors this structure for orderly termination. When shutdown(8) or reboot(8) is invoked, init executes /etc/rc.shutdown, which sources /etc/rc.subr for utility functions and runs the /etc/rc.d scripts in reverse dependency order using rcorder(8), allowing services to stop gracefully (e.g., closing network connections and saving state) before unmounting filesystems and halting the system.[22]
System V Init
Core Components
The System V init system originated in AT&T's UNIX System III release in 1981, where the /etc/inittab file was introduced to configure the init process, and it was further standardized in System V Release 3 in 1987 with /sbin/init serving as the core executable responsible for system initialization and process management.[23][24] As the first user-space process started by the kernel (process ID 1), /sbin/init reads the /etc/inittab configuration file upon startup to determine and spawn essential system processes.[1] The /etc/inittab file is structured as a series of colon-separated entries in the format id:runlevels:action:process, where each line defines a process to manage.[25] The id field provides a unique 1-4 character identifier for the entry; the runlevels field specifies the system states (numeric or letter codes) in which the process should run; the action field dictates how init handles the process (e.g., respawn to automatically restart upon termination, or wait to execute once and monitor completion); and the process field contains the command or script to execute.[25] This tabular configuration allows init to systematically control daemon and service lifecycles without embedding logic directly in the init binary.
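A few representative entries show the format in practice; the identifiers, runlevels, and paths below are illustrative, since real inittab contents vary widely between systems.

```
# Illustrative /etc/inittab excerpt

# Default runlevel entered on boot:
id:3:initdefault:

# Execute the runlevel 3 startup scripts and wait for them to finish:
l3:3:wait:/etc/rc.d/rc 3

# Keep a getty on the first virtual terminal in runlevels 2-5,
# restarting it whenever it exits:
1:2345:respawn:/sbin/getty 38400 tty1
```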
Upon reading /etc/inittab, init forks child processes for each applicable entry, such as multiple instances of the getty program for virtual terminals, ensuring they run in the appropriate contexts.[1] If a respawn-actioned process terminates unexpectedly—due to a crash or signal—init detects the exit via the wait status and immediately forks a replacement, providing automatic recovery for critical services like login daemons.[25] This forking mechanism, combined with signal handling, enables init to maintain system stability by treating itself as the ultimate parent for orphaned processes.
System V init supports various action types to handle diverse events beyond standard spawning, including boot for processes executed early in initialization (ignoring runlevels), bootwait for boot-time commands where init waits for completion, off to disable an entry, once for one-time execution upon entering a runlevel, and powerfail for signaling power-related events without waiting.[25] These actions facilitate event-driven responses, such as invoking scripts during power failures detected via hardware signals. In contrast to the BSD-style init's reliance on sequential rc scripts for process orchestration, System V init's inittab-driven approach emphasizes declarative configuration and automatic respawning for resilient process management.[25]
Runlevels
In System V init, runlevels define distinct operational modes that determine the set of services and processes running on the system, enabling controlled transitions between states such as maintenance, multi-user operation, or shutdown. These modes, numbered from 0 to 6 with an additional special runlevel S, provide a standardized way to manage system behavior without requiring a full reboot, allowing administrators to tailor the environment to specific needs like diagnostics or resource optimization.[26] The standard runlevels, as specified in the Linux Standard Base (LSB), are as follows:

| Runlevel | Purpose |
|---|---|
| 0 | Halt the system |
| 1 | Single-user mode |
| 2 | Multi-user mode without network services exported |
| 3 | Full multi-user mode |
| 4 | Reserved for local use; defaults to full multi-user mode |
| 5 | Multi-user mode with display manager or equivalent |
| 6 | Reboot the system |
| S | Single-user maintenance mode (equivalent to runlevel 1 but used during boot for initial setup) |
Transitions are initiated with the telinit command, which sends a signal to the init process to switch states by terminating processes associated with the current runlevel and launching those defined for the target runlevel, typically via scripts in /etc/rcN.d/ directories (where N is the runlevel number). This process ensures orderly changes, preserving system stability during mode shifts. The /etc/inittab file integrates with runlevels by specifying the default level and respawning critical processes as needed within each mode.[28][29]
The runlevel system offers advantages in flexibility and safety, permitting seamless switches—for instance, from full multi-user mode (runlevel 3) to single-user mode (runlevel 1) for recovery tasks—while minimizing downtime and manual intervention across diverse administrative scenarios.[29]
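In practice, these transitions are driven from a root shell with the standard SysV utilities; a brief illustration:

```sh
runlevel     # print previous and current runlevel, e.g. "N 3"
telinit 1    # drop to single-user mode for maintenance work
telinit 3    # return to full multi-user mode when finished
telinit q    # re-read /etc/inittab without changing runlevel
```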
Scripts and Defaults
In System V init, service startup and shutdown are managed through shell scripts stored in the /etc/init.d/ directory. These scripts are not executed directly; instead, the init process uses symbolic links in runlevel-specific subdirectories, such as /etc/rc3.d/ for runlevel 3, to invoke them in the appropriate order during runlevel transitions. The links follow a standardized naming convention: files prefixed with S## (where ## is a two-digit number) indicate startup actions and are called with the start argument, while those prefixed with K## denote shutdown actions and receive the stop argument; the numeric suffix determines the sequence, with lower values executing first to respect dependencies.[30][31]
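The convention is easiest to see in the directory layout itself. In the sketch below, the service name food is hypothetical, and the sequence numbers are arbitrary choices that place it after its dependencies:

```sh
# Typical shape of a runlevel directory (entries are illustrative):
ls /etc/rc3.d/
# K20nfs  S10network  S55sshd  S80food

# Integrating a hypothetical /etc/init.d/food script into the scheme:
ln -s ../init.d/food /etc/rc3.d/S80food   # invoked with "start", late in runlevel 3
ln -s ../init.d/food /etc/rc0.d/K20food   # invoked with "stop" early during halt
```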
Default runlevels, which define the initial system state after boot, vary across Unix-like operating systems implementing System V init. Many Linux distributions default to runlevel 3 for multiuser mode with console access or runlevel 5 for graphical environments, while others prioritize non-graphical setups. The following table provides representative examples:
| Operating System/Distribution | Default Runlevel | Description |
|---|---|---|
| Red Hat Enterprise Linux | 5 | Multiuser mode with graphical login enabled[31] |
| Solaris | 3 | Multiuser mode with networking and NFS support[32] |
| AIX | 2 | Multiuser mode without additional customization[33] |
| Gentoo | 3 | Standard multiuser mode in OpenRC-based setups[34] |
| Ubuntu (pre-systemd releases) | 2 | Multiuser mode without display manager[35] |
The default runlevel is set in the /etc/inittab file via the id:runlevel:initdefault: entry, which specifies the target state on boot. Administrators can also extend functionality by placing custom scripts in /etc/init.d/ (ensuring they support standard arguments like start, stop, restart, and status) and creating symlinks in the desired /etc/rcX.d/ directories to integrate them into specific runlevels.[31][30]
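A skeleton for such a custom script might look like the following. The daemon name food and its paths are placeholders, and a production script would add locking, privilege checks, and better error handling.

```sh
#!/bin/sh
# Skeleton /etc/init.d/food script for a hypothetical daemon.

DAEMON=/usr/local/sbin/food
PIDFILE=/var/run/food.pid

case "$1" in
  start)
    echo "Starting food"
    "$DAEMON" &                 # launch the daemon in the background
    echo $! > "$PIDFILE"        # record its PID for stop/status
    ;;
  stop)
    echo "Stopping food"
    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  status)
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
      echo "food is running"
    else
      echo "food is not running"
    fi
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|status}" >&2
    exit 1
    ;;
esac
```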
A key limitation of this script-based system is its reliance on serial execution, where services are started or stopped sequentially according to symlink order, often resulting in extended boot times as each script completes before the next begins, without inherent support for parallelization.[36]
Modern Init Systems
systemd
systemd is a system and service manager for Linux operating systems, serving as PID 1 and responsible for initializing the system and managing services. Developed by Lennart Poettering and Kay Sievers at Red Hat, it was first introduced in 2010 to address limitations in traditional init systems like SysVinit, such as serial service startup and lack of standardization.[37] The project aimed to leverage modern Linux kernel features for improved efficiency and portability across distributions.[38] Fedora 15, released in May 2011, became the first major distribution to adopt systemd as the default init system.[39] Adoption accelerated in the mid-2010s, with Debian switching to systemd in version 8 (Jessie) in April 2015.[40] Ubuntu followed suit in Ubuntu 15.04 (Vivid Vervet), released in April 2015, replacing its previous Upstart init.[41] By 2025, systemd has become the standard init system in nearly all major Linux distributions, including openSUSE, Arch Linux, and Red Hat Enterprise Linux, replacing SysVinit and Upstart in server and desktop environments.[42]
Key innovations in systemd include parallel service startup, which allows independent services to activate concurrently rather than sequentially, leading to significantly faster boot times compared to SysVinit's serial approach.[42] It employs dependency-based unit activation, where services start only after their prerequisites are met, managed through a transaction system that resolves conflicts before execution.[43] Socket and D-Bus activation enable on-demand service launching: services remain inactive until a socket connection or D-Bus message triggers them, reducing resource usage and improving responsiveness.[44]
Systemd organizes resources into units, configurable via declarative files typically located in /lib/systemd/system/ or /etc/systemd/system/. Service units (.service files) define how daemons like web servers or databases are executed, including options for restarts, environment variables, and resource limits.[45] Targets, analogous to SysV runlevels, group units for synchronization; for example, multi-user.target enables a non-graphical multi-user environment, while graphical.target adds a display manager.[46] The systemctl command provides a unified interface for managing units, such as starting, stopping, enabling, or querying status.
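As a concrete illustration, a minimal service unit for a hypothetical daemon might look like the following; the unit name food.service and the binary path are placeholders, not part of any distribution.

```ini
# /etc/systemd/system/food.service (hypothetical example)
[Unit]
Description=Example food daemon
After=network.target

[Service]
ExecStart=/usr/local/sbin/food
Restart=on-failure
PrivateTmp=yes

[Install]
WantedBy=multi-user.target
```

systemctl enable --now food.service would then hook the unit into multi-user.target and start it immediately, and systemctl status food.service reports its state, control group, and recent log lines.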
Advantages of systemd include integrated logging via journald, which maintains a binary journal capturing structured logs with timestamps, priorities, and metadata, viewable through journalctl for efficient querying and rotation. Security features leverage Linux namespaces and cgroups for sandboxing: directives like PrivateTmp=yes isolate temporary files, ProtectSystem=strict mounts most of the filesystem hierarchy read-only, and NoNewPrivileges=yes prevents privilege escalation, enhancing service isolation.[47] These capabilities have facilitated better container integration, allowing systemd to manage Docker or Podman containers as native units, with socket activation for seamless networking.[48]
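A few representative journalctl invocations show how the journal is queried (the unit name continues the hypothetical example above):

```sh
journalctl -u food.service   # all entries logged by one unit
journalctl -p err -b         # priority "err" and worse since the current boot
journalctl -f                # follow new entries as they are appended
```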
Despite its widespread use, systemd has faced criticisms for its complexity and scope creep, as it encompasses not only init but also logging, networking, and device management, leading to a monolithic design that some argue complicates debugging and increases the attack surface.[49] Detractors contend that this breadth violates Unix principles of modularity, making it harder for non-Linux ports and contributing to heated debates within the community since its inception.[49]