
Linux namespaces

Linux namespaces are a feature of the Linux kernel that wraps global system resources in an abstraction, providing processes within a namespace with their own isolated instance of those resources, such that changes made inside the namespace are visible only to processes in that namespace. This isolation mechanism enables lightweight virtualization, particularly for container technologies, by partitioning resources like process IDs, network stacks, and mount points without requiring a separate kernel or hypervisor. Introduced incrementally starting with mount namespaces in Linux kernel 2.4.19 in 2002, namespaces were further developed through contributions from Eric W. Biederman, whose 2006 Linux Symposium paper proposed extending them to multiple global resources for improved server efficiency, application migration, and security isolation. The primary purpose of Linux namespaces is to facilitate containerization, allowing multiple independent environments to run on the same host while sharing the underlying kernel, which enhances resource utilization and security in scenarios like multi-tenant hosting and server consolidation. Namespaces are created using system calls such as clone(2), unshare(2), or setns(2) with specific CLONE_NEW* flags, and they can be managed via files in /proc/[pid]/ns/, which serve as handles for joining or querying namespaces. Most namespace types require the CAP_SYS_ADMIN capability, though user namespaces have permitted unprivileged creation since Linux 3.8. A namespace persists until its last process exits or it is explicitly unpinned, such as by closing associated file descriptors. There are eight main types of Linux namespaces, each isolating a distinct set of kernel resources: mount, PID, network, IPC, UTS, user, cgroup, and time. These namespaces often work in conjunction with control groups (cgroups) to limit resource usage, forming the foundation for tools like Docker and LXC that implement operating-system-level virtualization.
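The /proc/[pid]/ns/ handles mentioned above can be inspected directly from user space. The following sketch (the function name list_namespaces is ours, not a standard API) reads each entry for the current process and reports its type and inode identifier:

```python
import os

def list_namespaces(pid="self"):
    """Map each namespace type of a process to its identifier string.

    Every entry under /proc/<pid>/ns/ is a symbolic link whose target
    looks like 'net:[4026531992]'; the bracketed number is the inode
    that uniquely identifies the namespace instance.
    """
    ns_dir = f"/proc/{pid}/ns"
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

if __name__ == "__main__":
    for ns_type, ident in list_namespaces().items():
        print(f"{ns_type:18s} {ident}")
```

On a recent kernel this lists all eight types, plus the *_for_children variants that exist for the PID and time namespaces.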

Introduction

Definition and Core Concepts

Linux namespaces are a kernel feature that enables the partitioning of kernel resources, allowing processes in different namespaces to perceive isolated views of global system resources such as processes, network interfaces, mount points, and user identifiers. This abstraction wraps a global system resource in a way that makes it appear to processes within the namespace as if they possess their own private instance, thereby confining changes and interactions to within that namespace. By creating multiple instances of these resources, namespaces facilitate resource separation without duplicating the underlying kernel structures. At their core, Linux namespaces operate by associating processes with specific namespace instances, where each type of namespace targets isolation of a particular resource category. Processes can create new namespaces or join existing ones, typically through kernel interfaces that allow for flexible management of these isolated environments. Namespaces exhibit a hierarchical structure in certain cases, where child namespaces inherit properties from parent namespaces unless explicitly configured otherwise, enabling nested or layered isolation schemes. For instance, in a containerization scenario, a process launched within its own dedicated PID namespace might perceive itself as process ID 1, viewing only co-namespaced processes and remaining unaware of others on the host, which demonstrates the effectiveness of resource view separation. This capability delivers key benefits, including lightweight virtualization that supports secure multi-tenancy and application containment with minimal overhead compared to full virtual machines, enhancing security by limiting the scope of untrusted code.

Role in Isolation and Virtualization

Linux namespaces play a pivotal role in process isolation by wrapping global system resources in per-process abstractions, allowing processes to perceive private instances of these resources while remaining invisible to those outside the namespace. For example, a process in a dedicated mount namespace sees only its own filesystem hierarchy, preventing interference with the host's mounts, and similarly, network namespaces isolate interfaces and routing tables to avoid cross-process network conflicts. This mechanism ensures that modifications within one namespace do not propagate globally, enhancing security and resource segregation in multi-tenant environments. In virtualization, namespaces enable OS-level virtualization, a lightweight alternative to full-system emulation, by partitioning kernel resources without the need for a separate guest kernel. Paired with control groups (cgroups) for resource limiting, namespaces underpin container technologies, permitting multiple isolated user-space instances to run efficiently on a single host kernel, which reduces overhead compared to hypervisor-based systems. This combination supports scalable deployments, as seen in container orchestration platforms where namespaces delineate boundaries for applications. Unlike traditional virtualization via hypervisors, which simulates hardware and incurs significant performance costs from running independent OS instances, namespaces achieve isolation at the operating-system level, sharing the host OS for greater efficiency and density. Virtual machines provide stronger hardware-level separation but at the expense of resource duplication, whereas namespace-based approaches excel in scenarios requiring rapid provisioning and minimal footprint. Container runtimes typically invoke namespaces during process creation to establish isolated views; for instance, spawning a process with PID and network namespaces simulates a standalone system, where the process tree appears rooted at PID 1 and network traffic is confined to virtual interfaces, all while leveraging the host kernel for execution.

History

Early Development (2000s)

The early development of Linux namespaces during the 2000s focused on enhancing isolation within the Linux kernel to support lightweight virtualization, addressing the shortcomings of earlier mechanisms like chroot, which offered limited filesystem isolation and was prone to escapes. This work was motivated by the growing demand for running multiple isolated environments on a single physical machine, enabling efficient resource sharing in server consolidation, hosting, and secure application deployment without the overhead of hypervisors. Influences included FreeBSD's jails for per-process resource views and Sun Microsystems' Solaris Zones for OS-level partitioning, which demonstrated the benefits of namespace-like separation for security and manageability. Eric W. Biederman played a pivotal role in conceptualizing namespaces as multiple instances of global resources, proposing this framework in his 2006 paper "Multiple Instances of the Global Linux Namespaces" presented at the Ottawa Linux Symposium. The proposal aimed to create distinct views of kernel objects—such as process IDs, network stacks, and user IDs—for groups of processes, facilitating container technologies like those in OpenVZ. Biederman's efforts, beginning at Linux Networx, emphasized unprivileged isolation to mitigate privilege escalation risks, laying the groundwork for broader adoption. OpenVZ, originating from Virtuozzo's commercial patches since 2005, contributed early implementations of isolation features akin to namespaces, with developers like Pavel Emelyanov pushing for mainline integration to enable virtual private servers (VPS) with shared resources. The initial practical implementations emerged with the mount namespace, introduced by Al Viro in Linux kernel 2.4.19 on August 3, 2002, via the CLONE_NEWNS flag in clone(2). This allowed processes to maintain independent mount tables, isolating filesystem views and enabling chroot-like environments with greater flexibility, inspired by Plan 9's per-process namespaces.
Building on this, the PID namespace was added in Linux kernel 2.6.24, released on January 24, 2008, providing separate process ID numbering to prevent PID conflicts across isolated groups and support nested process trees. Biederman's work on user namespaces began around 2005–2006, with prototype patches discussed in kernel mailing lists by 2008, focusing on mapping user and group IDs to enable unprivileged container roots, though full mainline merging occurred later. These foundational additions, often developed through out-of-tree patches from projects like OpenVZ, gradually converged into the upstream kernel, establishing namespaces as a core isolation primitive.

Major Additions and Kernel Integrations

The development of namespaces accelerated in the late 2000s and early 2010s, with several key types integrated into the mainline kernel to support advanced isolation features for containers. The UTS namespace, which isolates hostname and domain name views, was merged in kernel version 2.6.19 in December 2006, though its practical maturity evolved with subsequent refinements in later releases. Similarly, the IPC namespace, providing isolation for resources such as System V IPC objects and message queues, was also introduced in Linux 2.6.19. The network namespace followed in kernel 2.6.24 in January 2008, enabling separate network stacks, interfaces, and routing tables per namespace, with fuller integration and usability enhancements completed in Linux 3.8 in February 2013. A pivotal addition was the user namespace, developed starting around 2007 by Eric Biederman to enable unprivileged namespace creation by mapping user and group IDs across namespaces. Despite ongoing security debates on the kernel mailing lists regarding potential privilege escalation risks and the complexity of capability mappings, it was merged into Linux 3.8 in 2013 after extensive review and patching. This integration marked a significant milestone, allowing non-root users to create namespaces without full system privileges, thereby facilitating safer container deployments. Subsequent expansions included the cgroup namespace in Linux 4.6 in May 2016, which virtualizes the view of control groups to prevent container processes from accessing host cgroup hierarchies. The time namespace arrived in Linux 5.6 in March 2020, offering isolated monotonic and boottime clocks with adjustable offsets, primarily to support checkpoint/restore functionality in containers.
These additions were largely driven by the rise of container technologies, including the Linux Containers (LXC) project launched in 2008, which relied on namespaces for OS-level virtualization, and Docker's debut in 2013, which popularized namespaces through its lightweight container runtime and influenced kernel discussions on completeness and security. Kernel mailing list threads, particularly around user namespaces, highlighted tensions between isolation benefits and attack surface concerns, leading to iterative security improvements. As of 2025, no major new namespace types have been merged since the time namespace in 2020, with development efforts focusing on refinements such as enhanced user mappings and security mitigations in kernels 5.10 and later. Ongoing proposals, like a syslog namespace for isolating logging resources, remain in discussion without upstream integration.

Namespace Types

Mount (mnt)

The mount namespace, also known as the mnt namespace, provides isolation of the filesystem mount table and tree, ensuring that processes in different mount namespaces perceive distinct sets of mounted filesystems. Each process views only the mounts established within its own namespace, and operations such as mounting or unmounting filesystems affect solely that namespace without impacting others. This isolation allows for independent filesystem hierarchies, where the root directory (/) can vary per namespace, enabling processes to operate within customized environments. Key features of the mount namespace include support for bind mounts, which permit remounting a directory subtree at another location within the same or a different namespace, facilitating flexible filesystem reconfiguration. It also integrates seamlessly with advanced filesystems like OverlayFS, which leverages mount namespaces to create layered, union-mounted filesystems for read-write overlays on read-only bases, commonly used in container images. Mount propagation mechanisms further enhance control: mounts can be configured as private (MS_PRIVATE, the default), shared (MS_SHARED, for bidirectional propagation), or slave (MS_SLAVE, for unidirectional reception from a master), allowing selective sharing of mount events across related namespaces while maintaining isolation where needed. A new mount namespace is created by invoking clone(2) or unshare(2) with the CLONE_NEWNS flag, requiring the CAP_SYS_ADMIN capability in the caller's namespace (except when nested under a user namespace with mapped privileges). To join an existing mount namespace, setns(2) is used with a file descriptor obtained from /proc/[pid]/ns/mnt, which serves as the namespace's identifier and can be bind-mounted to persist it beyond process lifetime. The kernel enforces a per-user-namespace limit on mount namespaces via /proc/sys/user/max_mnt_namespaces, with creation failing with ENOSPC if exceeded.
In practice, mount namespaces are essential for container runtimes, such as providing a container with its own /proc and /sys mounts populated from the host but isolated to prevent interference, allowing safe introspection of container-specific kernel interfaces without exposing or altering the host's view. However, limitations arise with mount propagation: if a mount is not explicitly set to private (MS_PRIVATE), shared or slave configurations can cause unintended visibility of mounts across namespaces, potentially leaking filesystem changes unless propagation types are carefully managed at namespace creation or via mount(2) flags. Within a user namespace, this isolation extends to unprivileged mounting, where root privileges map to the caller's user ID in the parent namespace, enabling non-root users to establish private mount trees.
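The propagation caveat above can be handled at namespace creation time. A minimal sketch, using ctypes to call unshare(2) and mount(2) from libc (the constants are copied from <sched.h> and <sys/mount.h>; the function name is ours); it needs CAP_SYS_ADMIN, so it reports failure rather than crashing when run unprivileged:

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)

CLONE_NEWNS = 0x00020000   # from <sched.h>
MS_REC      = 0x4000       # from <sys/mount.h>
MS_PRIVATE  = 0x40000      # from <sys/mount.h>

def private_mount_namespace():
    """Detach into a new mount namespace and mark the whole tree private,
    so mount and unmount events stop propagating to the host namespace.
    Returns True on success, or False when the caller lacks CAP_SYS_ADMIN."""
    if libc.unshare(CLONE_NEWNS) != 0:
        if ctypes.get_errno() == 1:          # EPERM: unprivileged caller
            return False
        raise OSError(ctypes.get_errno(), os.strerror(ctypes.get_errno()))
    # Equivalent of `mount --make-rprivate /`
    if libc.mount(b"none", b"/", None, MS_REC | MS_PRIVATE, None) != 0:
        raise OSError(ctypes.get_errno(), os.strerror(ctypes.get_errno()))
    return True
```

Container runtimes perform the same make-private step so that mounts created for a container never leak back into the host's mount table.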

Process ID (pid) Namespace

The PID namespace provides isolation of the process ID number space, enabling processes in different namespaces to have identical PIDs without interference. In this namespace, each instance maintains its own view of the process hierarchy, where the init process (PID 1) serves as the root of the process tree, and processes outside the namespace are invisible to those within it. This isolation ensures that process identifiers, such as those used in system calls like kill(2) or wait(2), are confined to the local namespace, preventing cross-namespace process management. Key features include support for nested hierarchies, allowing up to 32 levels of PID namespaces since Linux 3.7, where processes in a parent namespace can see and signal those in child namespaces, while each level maintains separate PID assignments. The /proc filesystem reflects this isolation, displaying only processes local to the viewing namespace, while signals sent to PIDs are confined within the namespace unless explicitly handled across boundaries. Additionally, orphaned processes in a namespace are reparented to the namespace's init process rather than the global init. PID namespaces are created using the CLONE_NEWPID flag in clone(2) or unshare(2) system calls, which places new child processes into a fresh PID space starting from 1. Since Linux 5.3, pidfd_open(2) provides a file descriptor-based reference to a process, facilitating namespace-aware management, such as joining via setns(2) or signaling with pidfd_send_signal(2), without relying on /proc paths. The /proc/sys/kernel/ns_last_pid file tracks the last allocated PID in the current namespace and can be adjusted with appropriate capabilities (CAP_SYS_ADMIN or CAP_CHECKPOINT_RESTORE since Linux 5.9). A representative example is in containerization, where the container's init process runs as PID 1 within its PID namespace, unaware of host processes, allowing the container to manage its own process lifecycle independently while the host views the container processes under higher PIDs.
Limitations include the inability to namespace the host's global init process (PID 1 in the root namespace), which remains visible and cannot be isolated, potentially requiring careful signal handling to avoid unintended propagation. Furthermore, joining a PID namespace via setns(2) affects only future children of the calling process, not the caller itself, necessitating forking for full immersion. PID namespaces are often combined with control groups (cgroups) to enforce resource limits on isolated process sets.
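The "forking for full immersion" point can be demonstrated directly: unshare(CLONE_NEWPID) affects only subsequently created children, so observing the new numbering requires a fork. A minimal sketch (helper names are ours; the unshare happens in a throwaway child so the calling process's namespace membership is untouched, and it returns None without privileges):

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)
CLONE_NEWPID = 0x20000000  # from <sched.h>

def pid_in_new_namespace():
    """Report the PID that the first process in a fresh PID namespace
    sees for itself (expected: 1).  Returns None when unprivileged."""
    r, w = os.pipe()
    helper = os.fork()
    if helper == 0:
        os.close(r)
        # unshare() here, in a throwaway child, so the caller's own
        # PID namespace membership is left untouched.
        if libc.unshare(CLONE_NEWPID) != 0:
            os._exit(0)                     # typically EPERM; report nothing
        grandchild = os.fork()              # first child => PID 1 inside
        if grandchild == 0:
            os.write(w, str(os.getpid()).encode())
            os._exit(0)
        os.waitpid(grandchild, 0)
        os._exit(0)
    os.close(w)
    data = os.read(r, 16)
    os.waitpid(helper, 0)
    os.close(r)
    return int(data) if data else None
```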

Network (net) Namespace

The network namespace in Linux provides isolation for networking resources, allowing processes within a namespace to have their own set of network devices, IPv4 and IPv6 protocol stacks, routing tables, firewall rules (such as those managed by iptables or nftables), and other related configurations. This separation ensures that network operations in one namespace do not interfere with those in another, enabling independent network environments on the same host. Key isolated views include the /proc/net and /sys/class/net directories, which reflect only the resources visible to the namespace, as well as port numbers (to prevent conflicts) and UNIX domain sockets bound to namespace-local paths. Physical network devices are bound to a single namespace at a time; when a device is moved to a new namespace, it becomes invisible in the original one, and freed devices revert to the initial (root) namespace. Virtual devices, particularly virtual Ethernet (veth) pairs, facilitate communication between namespaces by acting as tunnels: one end resides in one namespace and the other in another, with packets transmitted on one end immediately received by its peer. These veth pairs support integration with bridges and VLANs, configurable via tools like ip(8) and brctl(8), allowing complex topologies such as bridging a veth endpoint to a physical interface for external connectivity. Firewall rules, including netfilter chains, are also namespace-specific, ensuring that filtering policies apply only within the isolated stack. A new network namespace is created using the clone(2) system call with the CLONE_NEWNET flag, which requires the CAP_SYS_ADMIN capability and results in the child process starting with a fresh network stack containing only a loopback device. Alternatively, the unshare(2) system call with the same flag can detach the calling process into a new network namespace.
For user-space management, the ip netns tool from the iproute2 package allows creation (e.g., ip netns add mynet), listing (ip netns list), execution of commands within a namespace (e.g., ip netns exec mynet ip link set lo up), and deletion (ip netns del mynet). veth pairs are created with commands like ip link add veth0 type veth peer name veth1, followed by moving endpoints to namespaces using ip link set veth1 netns mynet. In a practical example, a container runtime might create a network namespace for a container, assigning it a private IP address (e.g., 192.168.1.2) via a veth pair connected to a host bridge, ensuring the container has no direct access to the host's network interfaces or routing tables while allowing outbound traffic through the bridge. This setup is commonly used in container networking to provide isolated, virtualized network environments. Limitations include the fact that physical devices cannot be shared across namespaces simultaneously, and veth pairs are destroyed when their owning namespace is freed. Certain kernel-wide network parameters, such as those in /proc/sys/net (e.g., IPv4/IPv6 configuration inheritance), may propagate from the initial namespace to new ones unless explicitly configured otherwise via sysctls like devconf_inherit_init_net, potentially leading to unintended shared behaviors. Management typically requires the ip netns tool or equivalent, as direct namespace handling is privileged.
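The "fresh stack with only a loopback device" behavior can be observed without iproute2, since socket.if_nameindex() queries the calling process's current network namespace. A minimal sketch (function name is ours; runs the unshare in a forked child and returns None without privileges):

```python
import ctypes
import os
import socket

libc = ctypes.CDLL(None, use_errno=True)
CLONE_NEWNET = 0x40000000  # from <sched.h>

def interfaces_in_fresh_netns():
    """Fork a child into a new network namespace and report the interfaces
    it sees.  A freshly created namespace holds only a loopback device
    (initially down).  Returns the list of names, or None when unprivileged."""
    r, w = os.pipe()
    child = os.fork()
    if child == 0:
        os.close(r)
        if libc.unshare(CLONE_NEWNET) != 0:
            os._exit(0)                     # typically EPERM; report nothing
        names = [name for _, name in socket.if_nameindex()]
        os.write(w, ",".join(names).encode())
        os._exit(0)
    os.close(w)
    data = os.read(r, 4096)
    os.waitpid(child, 0)
    os.close(r)
    return data.decode().split(",") if data else None
```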

Inter-process Communication (ipc) Namespace

The inter-process communication (IPC) namespace in Linux isolates System V IPC resources, including shared memory segments (shm), semaphores (sem), and message queues (msg), as well as POSIX message queues, ensuring that these objects are confined to processes within the same namespace. This isolation makes keys and identifiers unique per namespace, preventing processes in different IPC namespaces from accessing or sharing the same IPC objects, even if they use identical keys generated by functions like ftok(). For instance, ftok() paths are effectively isolated because the resulting IPC objects remain namespace-bound, avoiding unintended cross-namespace collisions. Additionally, /dev/shm mounts, which back POSIX shared memory, can be namespaced to align with this isolation, providing a separate instance per IPC namespace since Linux 2.6.19. Creation of an IPC namespace occurs through system calls such as clone(2) or unshare(2) with the CLONE_NEWIPC flag, which establishes a new namespace for the calling process or its child, requiring the CONFIG_IPC_NS kernel configuration option. This flag affects only System V IPC and POSIX message queue primitives; other communication mechanisms like pipes (isolated via PID namespaces) and Unix domain sockets (handled by mount or network namespaces) remain unaffected. Processes can join an existing IPC namespace using setns(2). When the last process in an IPC namespace exits, all associated IPC objects are automatically destroyed, cleaning up resources without leakage to other namespaces. In practice, IPC namespaces enable containers to utilize shared memory and other IPC mechanisms without contaminating the host system or adjacent containers; for example, a containerized application can create a shared memory segment for internal process coordination that remains invisible and inaccessible to the host or other isolated environments. This complements PID namespaces by providing finer-grained isolation for legacy IPC primitives beyond simple process tree separation.
However, limitations arise with older applications designed around global IPC assumptions, where processes expect to communicate across what would now be namespace boundaries, potentially requiring configuration adjustments like sharing the host's IPC namespace (e.g., via container runtime flags) to maintain functionality. In some cases, migrating such applications to fully namespaced environments may necessitate recompilation or modifications to avoid reliance on cross-namespace IPC, particularly if they use hardcoded global identifiers. POSIX message queue support, added in Linux 2.6.30, further mitigates issues for modern applications but highlights the evolutionary constraints on legacy code.
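The per-namespace key behavior can be checked with shmget(2) via ctypes: a key created on the host is not found from inside a fresh IPC namespace. A minimal sketch (function name and key value are ours; constants from <sched.h> and <sys/ipc.h>; returns None without privileges and cleans up the host-side segment either way):

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)
CLONE_NEWIPC = 0x08000000  # from <sched.h>
IPC_CREAT    = 0o1000      # from <sys/ipc.h>
IPC_RMID     = 0           # from <sys/ipc.h>

def shm_key_is_isolated(key=0x5650):
    """Create a System V shared-memory segment on the host, then look the
    same key up from inside a new IPC namespace, where it must not exist.
    Returns True if isolation was observed, None when unprivileged."""
    seg = libc.shmget(key, 4096, IPC_CREAT | 0o600)
    if seg < 0:
        raise OSError(ctypes.get_errno(), "shmget failed")
    try:
        r, w = os.pipe()
        child = os.fork()
        if child == 0:
            os.close(r)
            if libc.unshare(CLONE_NEWIPC) != 0:
                os._exit(0)                 # typically EPERM; report nothing
            # Same key, no IPC_CREAT: fails because keys are per namespace.
            hidden = libc.shmget(key, 4096, 0o600) < 0
            os.write(w, b"1" if hidden else b"0")
            os._exit(0)
        os.close(w)
        data = os.read(r, 4)
        os.waitpid(child, 0)
        os.close(r)
    finally:
        libc.shmctl(seg, IPC_RMID, None)    # remove the host-side segment
    return data == b"1" if data else None
```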

UTS Namespace

The UTS namespace provides isolation for system identifiers associated with the Unix Time-sharing System (UTS), specifically the hostname (the nodename field) and the NIS domain name returned by the uname(2) system call. This isolation ensures that processes in different UTS namespaces perceive distinct values for these identifiers, enabling independent system identities without global impact. The namespace copies the parent's hostname and NIS (Network Information Service, also known as Yellow Pages or YP) domain name upon creation, allowing subsequent modifications to remain local to the namespace. Key features include the ability to modify the hostname using sethostname(2) and the domain name using setdomainname(2), with changes visible only to processes sharing the same UTS namespace. This locality is particularly useful for establishing per-process or per-group identities in virtualized environments, such as assigning unique hostnames to isolated workloads. A new UTS namespace is created via the clone(2) or unshare(2) system calls with the CLONE_NEWUTS flag. Inspection occurs through system calls like gethostname(2), getdomainname(2), or uname(2) executed within the namespace, or by examining the namespace handle in /proc/[pid]/ns/uts, which displays the namespace type and inode number (e.g., uts:[4026531838]). For example, in container orchestration, each container can operate with its own hostname—such as "app-server1"—facilitating identification, logging, and application configuration while the host retains its original identity like "prod-host". This per-container hostname isolation simplifies management in multi-tenant setups without requiring full system reconfiguration. The UTS namespace affects only UTS-specific identifiers, providing isolation for the hostname and NIS/YP domain name but not for DNS resolution, which depends on configurations like /etc/resolv.conf managed through mount or network namespaces.
Thus, while hostname-based lookups may appear isolated, broader name resolution remains subject to other namespace interactions.
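The sethostname(2) locality described above is easy to demonstrate: rename the host inside a child's fresh UTS namespace and confirm the parent still sees the original name. A minimal sketch (function and hostname are ours; returns None when the caller lacks CAP_SYS_ADMIN):

```python
import ctypes
import os
import socket

libc = ctypes.CDLL(None, use_errno=True)
CLONE_NEWUTS = 0x04000000  # from <sched.h>

def hostname_isolated(new_name=b"uts-demo"):
    """Fork a child into a new UTS namespace and rename it there; the
    parent's hostname is unaffected.  Returns (child_view, parent_view),
    or None when the caller lacks CAP_SYS_ADMIN."""
    r, w = os.pipe()
    child = os.fork()
    if child == 0:
        os.close(r)
        if libc.unshare(CLONE_NEWUTS) != 0:
            os._exit(0)                     # typically EPERM; report nothing
        libc.sethostname(new_name, len(new_name))
        os.write(w, socket.gethostname().encode())
        os._exit(0)
    os.close(w)
    data = os.read(r, 256)
    os.waitpid(child, 0)
    os.close(r)
    return (data.decode(), socket.gethostname()) if data else None
```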

User (uid) Namespace

The user namespace in Linux isolates the user and group ID number spaces, allowing processes within the namespace to perceive a remapped set of user IDs (UIDs) and group IDs (GIDs) distinct from those on the host system. This isolation enables a process to operate as the superuser (UID 0) inside the namespace while being mapped to an unprivileged UID on the host, thereby containing privilege escalations and enhancing security for containerized or sandboxed environments. The primary purpose is to support unprivileged container execution by allocating subsets of the host's UID/GID ranges to the namespace, preventing processes from accessing or modifying resources outside their mapped ranges. Key features include the configuration of UID and GID mappings through the /proc/[pid]/uid_map and /proc/[pid]/gid_map files, which define how IDs in the child namespace correspond to IDs in the parent namespace; for example, a mapping might specify that the range 0–65535 in the child maps to 100000–165535 on the host. These mappings support sub-UID and sub-GID allocation, often managed via tools like newuidmap and newgidmap, allowing non-root users to delegate portions of their UID range for nested namespaces. User namespaces have permitted unprivileged creation since Linux kernel 3.8, provided the kernel is configured with CONFIG_USER_NS=y and the creating process has appropriate sub-UID/GID ranges allocated in /etc/subuid and /etc/subgid. User namespaces are created using the clone(2) or unshare(2) system calls with the CLONE_NEWUSER flag, which establishes the new user namespace first if combined with other namespace creation flags; privileged processes require CAP_SYS_ADMIN, but unprivileged creation relies on the aforementioned kernel configuration and ID mappings.
For instance, a container process with root privileges inside the namespace (UID 0) can be mapped to a host UID such as 1000, ensuring that any attempts to access host resources are restricted to that non-privileged identity and mitigating potential privilege escalations. Despite these benefits, user namespaces have limitations, including early security vulnerabilities such as symlink attacks exploitable before kernel 4.2 due to incomplete ID mapping enforcement in filesystem operations (e.g., CVE-2013-1858). Additionally, not all system calls and kernel interfaces fully respect mappings; for example, early implementations had issues with setuid binaries that were resolved in subsequent kernels through improved checks and VFS adjustments. As of 2025, enhancements in the 6.x kernel series, including refined idmapped-mount support and stricter capability bounding, have bolstered container security by better integrating namespace mappings with the VFS. User namespaces complement mount namespaces in handling setuid binaries by applying ID remapping to filesystem views.
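The single-line uid_map write that turns an unprivileged user into in-namespace root can be sketched as follows (function name is ours; runs in a forked child so the calling process is untouched, and returns None where the kernel or a distro hardening knob forbids unprivileged user namespaces):

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)
CLONE_NEWUSER = 0x10000000  # from <sched.h>

def root_in_user_namespace():
    """Fork a child that creates a user namespace (unprivileged since Linux
    3.8), maps its host UID to 0, and reports the UID it then sees for
    itself.  Returns 0 on success, None if creation or mapping is refused."""
    host_uid = os.getuid()
    r, w = os.pipe()
    child = os.fork()
    if child == 0:
        os.close(r)
        try:
            if libc.unshare(CLONE_NEWUSER) != 0:
                raise PermissionError       # EPERM/EACCES on hardened kernels
            # uid_map line format: "inside-id outside-id count"
            with open("/proc/self/uid_map", "w") as f:
                f.write(f"0 {host_uid} 1")
            os.write(w, str(os.getuid()).encode())
        except OSError:
            pass                            # report nothing; parent sees None
        os._exit(0)
    os.close(w)
    data = os.read(r, 16)
    os.waitpid(child, 0)
    os.close(r)
    return int(data) if data else None
```

Writing gid_map additionally requires first writing "deny" to /proc/self/setgroups when unprivileged; that step is omitted here for brevity.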

Control Groups (cgroup) Namespace

The control groups (cgroup) namespace in Linux provides isolation of the view of the cgroup hierarchy for processes within the namespace, ensuring that they perceive only their own subtree as the root of the hierarchy rather than the full host system structure. This hides the host's cgroup organization from containerized or isolated processes, preventing information leakage about the broader system and enhancing abstraction in environments like containers. By remapping cgroup paths to be relative to the namespace's root, it allows processes to operate as if their local cgroup were the global root, which is particularly useful for containerization and migration scenarios. Introduced in Linux kernel version 4.6 in 2016, the cgroup namespace supports both cgroup v1 and v2 hierarchies, with the latter having been unified starting in kernel 4.5. Key features include the modification of views in /proc/<pid>/cgroup, which displays cgroup paths relative to the namespace root, and adjustments to /proc/<pid>/mountinfo to reflect only the visible cgroup mountpoints. For instance, a process outside the namespace might see a full path like 0::/user.slice/user-1000.slice/session-1.scope, while inside, it appears as 0::/. This isolation applies to cgroup roots but does not alter the underlying resource controls themselves. Creation of a cgroup namespace occurs via the clone(2) or unshare(2) system calls using the CLONE_NEWCGROUP flag, where the calling process's current cgroup becomes the root for the new namespace. Joining an existing namespace is possible with setns(2), provided the process has the CAP_SYS_ADMIN capability in the target namespace. Upon creation, mountpoints such as /sys/fs/cgroup are affected, showing only the namespace-local view, which may require remounting specific cgroup filesystems (e.g., mount -t cgroup -o freezer none /sys/fs/cgroup/freezer) for full visibility within the isolated context.
In practice, this enables containers to remain unaware of the host's cgroup hierarchy; for example, a containerized process might see its cgroup path as /docker/<container_id> as the root, abstracting away host-level hierarchies like systemd slices and improving portability without exposing sensitive details. This complements the user namespace by providing a fuller virtualization layer for resource-related views, though it requires a kernel configured with CONFIG_CGROUPS. Limitations include the fact that the namespace does not isolate or modify actual resource limits or accounting, which are handled by the cgroup mechanism itself rather than the namespace virtualization. Additionally, while it supports cgroup v2's unified hierarchy from kernel 4.5 onward, older v1 setups may exhibit inconsistencies in visibility without explicit remounting. Processes cannot migrate outside their namespace root, enforcing the isolation but potentially complicating certain administrative tasks.
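The path-collapsing behavior is directly observable in /proc/self/cgroup: after unshare(CLONE_NEWCGROUP), every path field reduces to "/". A minimal sketch (function name is ours; runs in a forked child and returns None where the caller is unprivileged or the kernel lacks cgroup namespaces):

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)
CLONE_NEWCGROUP = 0x02000000  # from <sched.h>

def cgroup_paths_after_unshare():
    """Fork a child into a new cgroup namespace: the child's own cgroup
    becomes the apparent root, so every path in its /proc/self/cgroup
    collapses to "/".  Returns the paths the child sees, or None on
    failure (typically EPERM, or EINVAL without cgroup namespace support)."""
    r, w = os.pipe()
    child = os.fork()
    if child == 0:
        os.close(r)
        if libc.unshare(CLONE_NEWCGROUP) != 0:
            os._exit(0)                     # report nothing
        with open("/proc/self/cgroup") as f:
            # Each line reads "hierarchy-ID:controller-list:path".
            paths = [line.rstrip("\n").split(":", 2)[2] for line in f]
        os.write(w, "\0".join(paths).encode())
        os._exit(0)
    os.close(w)
    data = os.read(r, 4096)
    os.waitpid(child, 0)
    os.close(r)
    return data.decode().split("\0") if data else None
```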

Time Namespace

The time namespace in Linux isolates specific time-related counters, virtualizing the CLOCK_MONOTONIC (including its COARSE and RAW variants) and CLOCK_BOOTTIME (including ALARM) clocks to provide per-namespace offsets, while the wall-clock time (CLOCK_REALTIME) remains shared globally. This isolation ensures that processes in different namespaces perceive distinct views of monotonic time progression and boot-time elapsed, which is particularly valuable for maintaining time consistency during container migration, checkpoint/restore operations, and freezing without affecting the host system's clocks. The feature was merged into the Linux kernel in version 5.6, released in March 2020, and requires the kernel to be configured with the CONFIG_TIME_NS option. Key system calls affected by time namespaces include clock_gettime(2), clock_nanosleep(2), nanosleep(2), timer_settime(2), and timerfd_settime(2), all of which return or use the offset-adjusted time values specific to the calling process's namespace; similarly, the /proc/uptime file reflects namespace-specific uptime. Offsets for these clocks are managed via the /proc/[pid]/timens_offsets file, where each line specifies a clock ID followed by seconds and nanoseconds to add (e.g., "1 172800 0" to offset CLOCK_MONOTONIC by two days), inheriting their initial values from the parent namespace upon creation and remaining fixed once the first process enters the namespace. Setting offsets requires the CAP_SYS_TIME capability in the owning user namespace, with limits ensuring offsets do not result in negative times or exceed approximately 146 years to prevent overflow. Time namespaces are created by invoking unshare(2) with the CLONE_NEWTIME flag, which enters child processes into the new namespace while leaving the calling process in its original one; the namespace can also be referenced and joined via the /proc/[pid]/ns/time symlink.
For example, in container testing or scheduling scenarios, a namespace might apply an offset to CLOCK_MONOTONIC to preserve time perception for paused processes, or add a custom offset to simulate accelerated testing environments without altering the host's global time. In checkpoint/restore tools like CRIU, time namespaces enable restoring containers with adjusted boottime and monotonic offsets to match the checkpointed state, supporting seamless migration. Limitations include the inability to virtualize CLOCK_REALTIME or gettimeofday(2), meaning wall time and certain legacy interfaces remain unisolated, and offsets cannot be modified after the namespace is populated, to avoid inconsistencies for existing processes. This design prioritizes safety in multi-process environments but restricts full time virtualization, making it unsuitable for scenarios requiring isolated real-time clocks, such as full system emulation.
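The creation sequence described above (unshare, write timens_offsets, then fork so the child enters the offset namespace) can be sketched as follows. Function name is ours; it needs CAP_SYS_TIME and CONFIG_TIME_NS, and returns None when either is missing:

```python
import ctypes
import os
import time

libc = ctypes.CDLL(None, use_errno=True)
CLONE_NEWTIME = 0x00000080  # from <sched.h>

def monotonic_shift(days=2):
    """In a throwaway child: unshare(CLONE_NEWTIME) (the caller stays in its
    old namespace; only future children enter the new one), write a
    CLOCK_MONOTONIC offset into /proc/self/timens_offsets, then fork a
    grandchild whose monotonic clock runs `days` ahead.  Returns the
    observed shift in seconds, or None when the operation is refused."""
    shift = days * 86400
    r, w = os.pipe()
    helper = os.fork()
    if helper == 0:
        os.close(r)
        try:
            if libc.unshare(CLONE_NEWTIME) != 0:
                raise PermissionError       # EPERM, or EINVAL w/o CONFIG_TIME_NS
            with open("/proc/self/timens_offsets", "w") as f:
                f.write(f"monotonic {shift} 0")   # needs CAP_SYS_TIME
            base = time.clock_gettime(time.CLOCK_MONOTONIC)  # still old clock
            grandchild = os.fork()                # enters the new namespace
            if grandchild == 0:
                inner = time.clock_gettime(time.CLOCK_MONOTONIC)
                os.write(w, repr(inner - base).encode())
                os._exit(0)
            os.waitpid(grandchild, 0)
        except OSError:
            pass                            # report nothing; parent sees None
        os._exit(0)
    os.close(w)
    data = os.read(r, 64)
    os.waitpid(helper, 0)
    os.close(r)
    return float(data) if data else None
```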

Implementation Details

System Calls and Commands

Linux namespaces are managed primarily through kernel system calls that allow processes to create, join, or unshare namespaces. The clone(2) system call creates a new process while specifying one or more new namespaces for the child via the CLONE_NEW* flags in its flags argument. These flags include CLONE_NEWNS for mount namespaces, CLONE_NEWPID for process ID namespaces, CLONE_NEWNET for network namespaces, CLONE_NEWUTS for UTS namespaces, CLONE_NEWIPC for IPC namespaces, CLONE_NEWUSER for user namespaces, CLONE_NEWCGROUP for cgroup namespaces, and CLONE_NEWTIME for time namespaces. Multiple flags can be combined bitwise in a single clone(2) call to create several new namespaces simultaneously for the child. The unshare(2) system call enables a process to disassociate parts of its execution context from shared resources, effectively moving the calling process into new namespaces without forking a child. It accepts the same CLONE_NEW* flags as clone(2) to specify which namespaces to unshare and enter; for example, unshare(2) with CLONE_NEWNET creates and joins a new network namespace for the caller. To enter an existing namespace, the setns(2) system call is employed, which joins the calling process to a namespace specified by a file descriptor obtained from the /proc filesystem. The nstype argument in setns(2) indicates the type of namespace to join, using the same CLONE_NEW* constants for validation; this allows processes to migrate into namespaces created by other processes. Namespaces are exposed in user space through the /proc/[pid]/ns/ directory, where each namespace type appears as a symbolic link or bind-mountable file representing the namespace's inode. These entries, such as /proc/[pid]/ns/net for the network namespace, can be opened to obtain file descriptors for use with setns(2), and processes sharing the same namespace have identical inode numbers for these entries. User-space tools facilitate namespace management without direct system call invocation. 
The unshare command from the util-linux package invokes the unshare(2) system call based on command-line options corresponding to namespace types, then executes a specified program in the new namespaces. For instance, unshare --net --fork /bin/bash creates a new network namespace and forks a shell into it. The nsenter command, also from util-linux, uses setns(2) to enter specified namespaces of a target process and run a command therein; it supports options such as -t to target a PID and -n for network namespaces. For network namespaces specifically, the ip netns subcommand from the iproute2 package provides utilities to add, delete, list, and execute commands within named network namespaces. Commands such as ip netns add <name> create a new network namespace, while ip netns exec <name> <cmd> runs <cmd> in that namespace, often by leveraging mount namespaces internally for compatibility. Common errors in namespace operations include EPERM, returned when the required CAP_SYS_ADMIN capability is lacking (though user namespaces have permitted unprivileged creation since Linux 3.8), and EINVAL, which occurs if invalid flags or file descriptors are provided.
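The flag-combination and error behavior described above can be sketched from Python via ctypes; this is a minimal illustration, not a complete tool, and the EPERM branch is the expected outcome when run without CAP_SYS_ADMIN:

```python
import ctypes
import errno

# CLONE_NEW* flag values from <linux/sched.h>; these are stable ABI constants.
CLONE_NEWTIME   = 0x00000080
CLONE_NEWNS     = 0x00020000
CLONE_NEWCGROUP = 0x02000000
CLONE_NEWUTS    = 0x04000000
CLONE_NEWIPC    = 0x08000000
CLONE_NEWUSER   = 0x10000000
CLONE_NEWPID    = 0x20000000
CLONE_NEWNET    = 0x40000000

# Flags combine bitwise, so one unshare(2)/clone(2) call can create
# several new namespaces at once.
flags = CLONE_NEWUTS | CLONE_NEWNET

libc = ctypes.CDLL(None, use_errno=True)
if libc.unshare(flags) == 0:
    print("entered new UTS and network namespaces")
else:
    # Without CAP_SYS_ADMIN this fails with EPERM, as described above.
    print("unshare failed:", errno.errorcode.get(ctypes.get_errno(), "?"))
```

The same flags value could equally be passed to clone(2) to start a child directly inside the new namespaces.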

Namespace Hierarchy and Joining

Certain Linux namespace types are organized hierarchically, with child namespaces nested within parents; the initial (root) namespace of each such type serves as the top-level parent, so these namespaces form a forest across the system. For PID and user namespaces specifically, the structure is explicitly hierarchical, allowing a namespace to persist as long as it has active child namespaces or, in the case of user namespaces, owns subordinate non-user namespaces. When a new process is created via fork(2), it inherits all of its parent's namespaces by default, maintaining continuity in the hierarchy unless explicitly overridden during creation. Processes can join existing namespaces to alter their view of system resources, enabling peer relationships outside the default parent-child inheritance. The setns(2) system call facilitates this by allowing a process to reassociate itself with a target namespace, specified by a file descriptor obtained from /proc/[pid]/ns/[type] entries. This fd-based approach enhances safety by avoiding direct path manipulations and permits atomic joins of multiple namespace types when using a PID file descriptor (available since Linux 5.8). For instance, a process might join a different network namespace while remaining in its original PID namespace, demonstrating how processes can belong to distinct namespaces across types simultaneously, such as operating in the host's PID space but an isolated network environment. However, joining imposes restrictions: for PID namespaces, the target must be a descendant of, or the same as, the caller's; user namespace joins require CAP_SYS_ADMIN in the target user namespace. This multi-namespace capability allows fine-grained isolation, where a single process views resources through a combination of inherited and joined namespaces, without requiring a full hierarchical shift. 
Forked children thus start in the same namespace set as their parent, but subsequent setns(2) calls or unshare(2) operations can create or enter peers, forming branches in the per-type tree. To inspect namespace hierarchies and memberships, several tools and interfaces provide visibility into the namespace tree and per-process affiliations. The lsns(1) command from util-linux lists all accessible namespaces system-wide, displaying details such as namespace ID (inode number), type, number of processes, owner, and command, which helps trace hierarchical relationships by inode comparison. For per-process inspection, the /proc/[pid]/ns/ directory contains symbolic links for each namespace type (e.g., pid, net), where matching device IDs and inode numbers indicate shared membership; the /proc/[pid]/status file further reports PID namespace membership via the NSpid field. These mechanisms allow administrators to map the overall tree and verify joins without kernel modifications.
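The inode-comparison rule above can be demonstrated without privileges: a fork(2)ed child inherits its parent's namespaces, so the (device, inode) pair behind its /proc/[pid]/ns/ entries matches the parent's. A small Python sketch (Linux-only, UTS namespace chosen arbitrarily):

```python
import os

def ns_id(pid):
    # The (device, inode) pair behind /proc/[pid]/ns/uts uniquely
    # identifies a UTS namespace; equal pairs mean shared membership.
    st = os.stat(f"/proc/{pid}/ns/uts")
    return (st.st_dev, st.st_ino)

parent_ns = ns_id(os.getpid())
child = os.fork()
if child == 0:
    # fork(2) children inherit every namespace of the parent.
    os._exit(0 if ns_id(os.getpid()) == parent_ns else 1)
_, status = os.waitpid(child, 0)
shared = os.waitstatus_to_exitcode(status) == 0
print("child shares UTS namespace:", shared)
```

After an unshare(2) or setns(2) call in the child, the same comparison would report differing identifiers, which is exactly how lsns(1) groups processes.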

Creation, Destruction, and Lifecycle

Linux namespaces are primarily created through two mechanisms: implicitly, when a new process is forked using the clone system call with CLONE_NEW* flags specifying the desired namespace types, which allocates fresh namespace instances for the child process; or explicitly, when an existing process calls unshare with the same flags to detach itself from its current namespaces and enter new ones. These operations are handled by the kernel's namespace subsystem, which initializes the appropriate structures based on the flags provided. While most namespace types require the CAP_SYS_ADMIN capability within the creating process's user namespace to prevent unauthorized isolation, user namespaces can be created by unprivileged users since Linux kernel version 3.8, enabling safer experimentation with container-like isolation. Destruction of namespaces is handled automatically by the kernel through a reference-counting mechanism, ensuring resources are reclaimed only when no entities depend on the namespace. A namespace remains alive as long as at least one process is bound to it, or while it is pinned by open file descriptors—typically obtained from entries in /proc/[pid]/ns/—or by bind mounts of those entries. When the final reference is released, such as upon the last process exiting or the pinning file descriptor closing, the kernel decrements the count to zero and frees the namespace's associated data structures. In hierarchies, such as those formed by PID or user namespaces, a parent namespace persists until all descendant namespaces and their processes are gone, preventing premature cleanup. The lifecycle of a namespace is thus tightly coupled to process management and reference tracking within the kernel. Key events include process creation and termination, which alter reference counts, and propagation rules that dictate how changes in one namespace (such as mount operations) may affect related namespaces under specific sharing configurations. Empty or orphaned namespaces—those with no active processes but lingering references—are automatically cleaned up by the kernel upon reference release, avoiding indefinite retention. 
This design ensures efficient memory and object reuse, with the nsfs pseudo-filesystem providing visibility into namespace states via /proc. Management of namespaces during their lifecycle often involves obtaining namespace file descriptors from /proc/[pid]/ns/ entries, which enable operations such as joining namespaces or monitoring their status without direct process attachment. Process file descriptors (pidfds) complement this by allowing signaling of processes within specific namespaces. Capabilities such as CAP_SYS_ADMIN are typically required for namespace creation, joining, or modification, to enforce isolation boundaries. Unprivileged users face additional constraints: since Linux 4.9, per-user limits on namespace creation (e.g., the maximum number of each type) are enforced via tunable files in /proc/sys/user/, charged recursively across nested user namespaces to curb potential denial of service from excessive allocations.
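The per-user limits mentioned above are plain sysctl files and can be read directly; a minimal sketch (values vary between systems, and the directory is absent on kernels older than 4.9):

```python
from pathlib import Path

# Per-user namespace creation limits introduced in Linux 4.9.
limits = {}
sysctl_dir = Path("/proc/sys/user")
if sysctl_dir.is_dir():
    for f in sorted(sysctl_dir.glob("max_*_namespaces")):
        # Each file holds a single integer, e.g. max_user_namespaces.
        limits[f.name] = int(f.read_text())

for name, value in limits.items():
    print(f"{name} = {value}")
```

Writing to these files (as root) lowers or raises the corresponding limit; setting max_user_namespaces to 0 is a common hardening step to disable unprivileged user namespace creation.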

Adoption and Applications

Container Technologies

Linux namespaces form the foundational isolation mechanism in modern container technologies, enabling lightweight virtualization by segregating processes into distinct views of system resources. Their adoption began with Linux Containers (LXC) in 2008, which first combined namespaces with control groups to create user-space containers that mimic full operating system environments without requiring a separate kernel. Docker significantly popularized this approach in 2013 by introducing an accessible tooling layer atop LXC's primitives, shifting containers from niche server management to widespread application deployment. By 2025, namespaces underpin over 90% of container-based deployments, reflecting the surge in cloud-native architectures, where 89% of organizations report substantial use of such techniques. In Docker, namespaces are integrated through the libcontainer library, which has evolved into the runc runtime under the Open Container Initiative (OCI). Runc employs system calls like clone()—with flags such as CLONE_NEWPID and CLONE_NEWNET—and unshare() to instantiate and manage namespaces, creating isolated scopes for processes. By default, Docker activates the core namespaces: PID for process IDs, network for interfaces and routing tables, mount for filesystem hierarchies, UTS for hostname and domain details, and IPC for interprocess communication (user namespaces require daemon-level configuration for UID/GID mappings). This setup ensures containers operate in a self-contained environment, with root privileges remapped to non-privileged users on the host for enhanced security when user namespaces are enabled. Configuration flexibility includes flags like --network=host, which bypasses the network namespace entirely, allowing the container to use the host's networking directly for scenarios requiring low-latency access to host ports. The time namespace is not used by default. 
Kubernetes builds on these primitives by incorporating namespaces into pod sandboxes, which provide a secure boundary for co-located containers sharing resources like volumes and networks while isolating them from other pods and the host. Pod sandboxes leverage namespaces for PID, network, IPC, and user isolation; as of Kubernetes v1.33 (April 2025), user namespace support is enabled by default when the stack requirements are met. These sandboxes are managed through CRI-compliant runtimes such as CRI-O, which focuses on OCI standards and uses runc for execution, and containerd, a high-level runtime that handles container lifecycle operations including namespace setup. Kubernetes NetworkPolicies further utilize network namespaces to define fine-grained traffic controls, selecting pods or entire Kubernetes namespaces via labels to permit or deny ingress/egress flows, thereby enforcing isolation between workloads in multi-tenant clusters. The performance impact of namespaces in container technologies is negligible, with benchmarks showing less than 1% overhead in CPU and I/O operations compared to bare-metal execution, owing primarily to the kernel-level efficiency of namespace switching. Namespaces are typically complemented by control groups for resource limiting, ensuring balanced scalability in production environments.
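As a sketch of the label-based selection described above, a minimal NetworkPolicy (all names illustrative) that allows ingress only from pods in the same Kubernetes namespace might look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace   # illustrative name
  namespace: team-a            # illustrative tenant namespace
spec:
  podSelector: {}              # applies to every pod in team-a
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector: {}      # only peers within the same namespace
```

Enforcement of such a policy is delegated to the CNI plugin, which programs filtering rules inside each pod's network namespace.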

Security and Sandboxing Use Cases

Linux namespaces play a crucial role in enhancing security by providing isolation mechanisms that confine processes and limit their access to system resources, reducing the potential for privilege escalation and lateral movement in case of compromise. In sandboxing scenarios, tools like Firejail leverage namespaces alongside seccomp-bpf filters to create lightweight, restricted environments for running untrusted applications, isolating their view of the filesystem, network, and processes to prevent unauthorized access or malware propagation. This approach is particularly effective for confining potentially malicious code, such as downloaded executables or third-party software, by dropping unnecessary capabilities and enforcing syscall restrictions within the sandboxed namespace boundaries. In multi-tenant environments, cloud providers utilize Linux namespaces to achieve tenant separation on shared instances, allowing multiple isolated workloads to coexist without interfering with each other. For instance, network and PID namespaces segment traffic and process trees per tenant, mitigating risks from noisy neighbors or cross-tenant attacks on shared infrastructure. This isolation supports scalable, cost-effective deployments while maintaining security boundaries, as each tenant operates within its own abstracted resource view. Key security benefits of namespaces include a reduced attack surface, particularly through user namespaces, which remap user and group IDs to prevent containerized or sandboxed processes from gaining host-level privileges and escaping to perform exploits. When combined with mandatory access control systems like AppArmor or SELinux, namespaces further enforce policy-based restrictions on namespace creation and operations, preventing unprivileged users from abusing kernel interfaces for escalation. For example, SELinux can label and restrict user namespace transitions, while AppArmor profiles can mediate access to namespace-related syscalls, layering fine-grained controls atop namespace isolation. 
Despite these advantages, namespaces have historical limitations, such as breakout vulnerabilities in pre-user-namespace configurations where shared resources allowed escapes via kernel bugs, as seen in exploits like CVE-2022-0492 affecting cgroup interactions. Early implementations without user namespace support exposed higher risks of privilege escalation from confined environments. To mitigate these, 2025 best practices emphasize enabling user namespace remapping on production systems, which maps container UIDs to non-privileged host users (e.g., via /etc/subuid and /etc/subgid), combined with hardening options like the user.max_user_namespaces sysctl to limit namespace proliferation. Beyond traditional uses, projects like Google's gVisor integrate namespaces with a user-space kernel implementation to create robust sandboxes, intercepting syscalls from namespaced processes and emulating them securely to block direct host kernel access and further isolate untrusted workloads. This layered approach enhances protection against kernel exploits in high-security contexts, such as running legacy or unverified binaries.
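A hedged sketch of the remapping configuration mentioned above; the remap user name and ID ranges are illustrative, and the files follow the conventions of Docker's userns-remap feature:

```
# /etc/subuid (and analogously /etc/subgid): grant the remap user a
# range of 65536 subordinate IDs starting at 100000 (values illustrative)
dockremap:100000:65536

# /etc/docker/daemon.json: enable user namespace remapping
{ "userns-remap": "default" }
```

With this in place, UID 0 inside a container maps to an unprivileged host UID (here 100000), so a container escape no longer yields host root.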

Proposed Extensions

Syslog Namespace

The syslog namespace was proposed in 2013 by kernel engineer Rui Xiang to isolate syslog communications and prevent containers from accessing host logs via the shared /dev/log device. The proposal aimed to enable each container to maintain independent syslog identifiers, buffers, and daemons, ensuring that logging activity remains confined within container boundaries. This would address a key limitation in containerized environments, where processes could otherwise inadvertently read or interfere with the host's system logs. The core goals included separating kernel printks and user-space syslog(3) calls per namespace, allowing container-specific handling of logs such as those from iptables rules without leaking to the host. By providing per-namespace log buffers, the feature would support dedicated syslog daemons running inside containers, enhancing security and manageability in multi-tenant setups. The syslog namespace relates to the IPC namespace insofar as /dev/log relies on Unix domain sockets for logging primitives. In terms of implementation, the syslog namespace was designed to be tightly coupled with the user namespace: a new syslog namespace would be created automatically upon user namespace initialization, with an additional ioctl command (SYSLOG_ACTION_NEW_NS on /proc/kmsg) enabling user-space creation of namespaces. Patches introduced structures like struct syslog_ns for namespace tracking, per-namespace ring buffers for printk messages, and modifications to syslog(3) handling to route calls based on the current namespace, using an ns_printk variant for kernel logging. Despite these details, the proposal was not merged into the mainline kernel, owing to concerns about increasing kernel complexity for a feature that could be addressed through existing or emerging alternatives. As of November 2025, the syslog namespace remains unmerged and absent from the kernel's namespace implementations. 
Instead, systemd's LogNamespace directive offers comparable isolation by creating logical journal namespaces for services, allowing them to log via syslog(3) without mixing streams from the host or other services. User-space tools like syslog-ng provide further alternatives, supporting multiple systemd-journal sources across namespaces for flexible log collection and forwarding.
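A minimal sketch of the systemd alternative, assuming a hypothetical myapp.service unit; the LogNamespace= directive is documented in systemd.exec(5):

```ini
# myapp.service (illustrative): give this service its own journal namespace
[Service]
LogNamespace=myapp
ExecStart=/usr/bin/myapp
```

The service's log stream then lands in a dedicated journal, readable with journalctl's namespace option rather than being mixed into the host journal.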

Other Emerging Proposals

Ongoing controversies surrounding user namespaces, highlighted in recent presentations as both enabling unprivileged containers and introducing exploit vectors, underscore the community's caution in approving new namespace extensions. As of November 2025, development focuses on refining existing namespaces, such as enhancing time namespace APIs for better checkpoint/restore integration in tools like CRIU, with no major new namespace proposals merged into the kernel. Community discussions continue on the containers@ and linux-kernel@vger mailing lists, exploring namespace evolution for container orchestration and security.
