
LXC

Linux Containers (LXC) is an open-source operating system-level virtualization technology that serves as a userspace interface for the kernel's built-in containment and isolation features, enabling the creation, management, and execution of lightweight, isolated environments called containers. These containers provide a near-native system experience without the overhead of a separate kernel or hypervisor, positioning LXC between traditional chroot environments and full virtual machines in terms of isolation and resource efficiency. LXC leverages core kernel mechanisms such as namespaces (for process ID, network, mount, user, IPC, and UTS isolation), control groups (cgroups) for resource limiting, security modules like AppArmor and SELinux for confinement, seccomp for syscall filtering, capabilities for privilege reduction, and chroots for filesystem isolation. Development of LXC began in 2008 as a low-level container runtime, with core contributors playing a key role in implementing containment primitives directly in the Linux kernel. The project achieved its first stable release, 1.0.0, in February 2014, introducing a stable API that has remained unbroken since, adhering to semantic versioning practices. Licensed primarily under the GNU LGPLv2.1+ (with some components under other licenses such as the GPLv2), LXC is written in C and follows kernel coding conventions, ensuring compatibility with kernels from version 2.6.32 onward across multiple architectures including x86_64, ARM64, and others, as well as C libraries like glibc and musl. It supports both privileged and unprivileged containers, with the latter utilizing user namespaces for enhanced security in non-root scenarios. LXC is managed through the liblxc library, which provides a C API along with bindings for other languages, command-line tools for operations like creating, starting, stopping, and configuring containers, and distribution-specific templates for bootstrapping environments.
The technology emphasizes configurability, allowing fine-grained control over aspects such as root filesystem paths, networking (e.g., veth or macvlan interfaces), and resource limits via a key-value configuration file. Widely integrated into Linux distributions and used in production environments for tasks ranging from application sandboxing to full system emulation, LXC forms the foundational runtime for higher-level container managers like LXD and Incus, and is employed in platforms such as Proxmox VE for virtualized hosting. As of 2025, the latest long-term support release is LXC 6.0, maintained until June 2029, underscoring its maturity and ongoing evolution in the container ecosystem.

History and Development

Origins and Early Development

LXC emerged in 2008 as an operating-system-level virtualization method that utilizes Linux kernel features to enable multiple isolated environments on a single host without requiring a separate kernel for each instance. The development was driven by the need for a resource-efficient alternative to heavier full-system virtualization approaches like KVM or Xen, allowing better isolation and management of processes while minimizing overhead through shared kernel resources. Key foundational work began in 2007, with initial contributions from IBM engineers Daniel Lezcano and Serge Hallyn, who integrated emerging kernel primitives such as control groups (cgroups, introduced in kernel 2.6.24) for resource limiting and namespaces (developed incrementally from 2005, with expansions like network namespaces in 2008) for isolation. The project's first public release arrived in August 2008, accompanied by early prototypes and community announcements that highlighted its potential for lightweight virtualization.

Major Releases and Milestones

LXC's development has progressed through a series of major releases since its initial stable version, each introducing enhancements in stability, security, and compatibility while maintaining backward compatibility where possible. The project follows a long-term support (LTS) model for select releases, providing five years of maintenance including security fixes and critical bugfixes. The first stable release, LXC 1.0, arrived on February 20, 2014, marking a significant milestone with the introduction of a stable API and bindings for multiple languages, alongside improved security features such as enhanced capabilities support and the debut of unprivileged containers enabled by user namespaces. This version also included a consistent set of command-line tools and updated documentation, laying the foundation for production use. LXC 1.0 received LTS support until June 2019. Subsequent releases built on this base. LXC 2.0, released on April 6, 2016, focused on security improvements, including a complete rework of cgroup handling and better integration with modern init systems like systemd, which had gained prominence around 2014. It also enhanced checkpoint/restore functionality and provided a more uniform command-line experience across tools. This LTS version was supported until June 2021. LXC 3.0 followed on March 27, 2018, emphasizing compatibility with evolving kernel features, such as support for the unified cgroup v2 hierarchy and removal of older cgroup backends like cgfs and cgmanager. New capabilities included a ringbuffer for console logging and additional container templates, with LTS extending to June 2023. Later LTS releases continued this trajectory of refinement. LXC 4.0, released March 25, 2020, introduced better cgroup2 support and API extensions for advanced networking, supported until June 2025. LXC 5.0 arrived on June 17, 2022, with performance optimizations for dense environments and further security hardening, backed by LTS until June 2027.
The most recent LTS, LXC 6.0, was released on April 3, 2024, featuring streamlined configuration options and deeper integration with contemporary kernels, including the 6.x series, for improved efficiency and stability as of 2025. This version, supported until June 2029, includes bugfix updates such as 6.0.4 in April 2025 and 6.0.5 in August 2025, prioritizing reliability in production deployments. Under the governance of the Linux Containers project hosted at linuxcontainers.org, LXC is collaboratively maintained by a team of developers, with substantial contributions from Canonical, particularly through lead developer Stéphane Graber. This structure ensures vendor-neutral evolution, focusing on core advancements without ties to specific distributions. Recent developments as of 2025 emphasize ongoing stability enhancements and seamless compatibility with the latest kernel releases, solidifying LXC's role in system containerization.

Technical Foundations

Kernel Features

LXC relies on several core Linux kernel technologies to enable lightweight virtualization through containment. These features provide isolation and resource management without requiring a separate kernel or hypervisor. The primary mechanisms include control groups (cgroups) for resource allocation and namespaces for isolation, supplemented by additional primitives such as capabilities, seccomp, and mandatory access control (MAC) modules like AppArmor and SELinux. Control groups, or cgroups, are a Linux kernel feature that organizes processes into hierarchical groups to limit, account for, and isolate resource usage, such as CPU time, memory, and I/O bandwidth. Introduced in Linux kernel 2.6.24 in 2008, cgroups version 1 (v1) allowed multiple hierarchies, one per resource controller, which led to complexity in management. In 2016, with Linux kernel 4.5, cgroups version 2 (v2) was officially released, introducing a unified hierarchy to simplify administration and improve consistency across controllers. Under v2, resource limiting is enforced hierarchically; for example, the CPU controller uses weights (ranging from 1 to 10000, default 100) for proportional sharing among groups, while the memory controller sets hard limits via memory.max (default unlimited) to prevent overconsumption, and the I/O controller applies bandwidth limits like bytes per second (BPS) or I/O operations per second (IOPS) through io.max. Linux namespaces provide isolation by creating separate views of kernel resources for processes within a container, ensuring that changes in one namespace do not affect others.
The key namespaces used by LXC include: the PID namespace, which isolates process ID numbering so each container has its own init process (PID 1); the network namespace, which provides independent network stacks, interfaces, and routing tables; the mount namespace, which allows separate filesystem mount points and hierarchies; the user namespace, which maps user and group IDs to enable non-root users to appear as root inside the container; the IPC namespace, which isolates System V and POSIX inter-process communication resources like message queues; and the UTS namespace, which separates hostname and domain name configurations. These namespaces collectively create boundaries for processes, network, filesystems, user IDs, communication, and system identity. Additional kernel primitives enhance security in LXC by restricting privileges and system interactions. Linux capabilities divide traditional superuser privileges into granular units, allowing processes to drop unnecessary ones (e.g., via CAP_SYS_ADMIN restriction) to minimize attack surfaces. Seccomp (secure computing mode), introduced in Linux 2.6.12, enables syscall filtering using Berkeley Packet Filter (BPF) programs to block or trace specific system calls, preventing unauthorized kernel interactions. For mandatory access control, LXC integrates AppArmor, a kernel security module that enforces path-based policies to confine applications by restricting file, network, and capability access, and SELinux, which uses label-based policies for fine-grained control over subjects, objects, and operations. Together, these features form the foundation of LXC: namespaces establish isolation boundaries for container processes, while cgroups enforce resource limits and accounting within those boundaries, with capabilities, seccomp, and MAC modules providing layered security to prevent privilege escalation or unauthorized actions.
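The proportional-sharing arithmetic of the cgroup v2 CPU controller can be made concrete with a short sketch. This is illustrative Python, not part of LXC; it only assumes the documented rule that, under contention, each sibling group receives CPU time in proportion to its cpu.weight relative to its siblings.

```python
def cpu_shares(weights):
    """Given cpu.weight values (1-10000, default 100) for sibling
    cgroups, return each group's fraction of CPU time under full
    contention: weight_i / sum of all sibling weights."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Three sibling containers: one default weight, one doubled, one default.
shares = cpu_shares({"web": 100, "db": 200, "batch": 100})
print(shares["db"])   # 200 / 400 = 0.5
print(shares["web"])  # 100 / 400 = 0.25
```

Raising one container's weight does not starve the others outright; it only shifts the relative split, which is why weights are preferred over hard CPU caps for fair sharing.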

Architecture and Components

LXC's architecture centers on a user-space interface that leverages kernel primitives, such as namespaces and control groups, to enable lightweight containment without requiring a separate guest kernel. The system design emphasizes modularity, with core components handling container creation, execution, and lifecycle management through a combination of libraries, tools, and configuration mechanisms. This setup allows LXC to provide virtualization that is more efficient than full virtual machines while offering stronger isolation than simple chroots. At the heart of LXC is the liblxc library, which exposes a stable, versioned C API for programmatic access to container operations, including creation, configuration, and monitoring. This library serves as the foundational layer for higher-level tools and bindings in languages like Python, Go, and Ruby, enabling developers to integrate container management into applications. Accompanying the library are command-line utilities for manual lifecycle management: lxc-create initializes a container by invoking a template script to populate its root filesystem (rootfs), lxc-start boots the container by forking an init process within isolated namespaces, and lxc-stop gracefully halts it by sending signals to the container's processes. These tools operate on container directories typically located under /var/lib/lxc, where each container maintains its own rootfs and configuration. Container configuration is defined through text-based files in a key-value format, with the global system file at /etc/lxc/lxc.conf setting defaults like storage backends and lookup paths, and per-container files (e.g., /var/lib/lxc/<name>/config) specifying runtime details such as networking, hostname, and resource limits. The lifecycle begins with lxc-create using distribution-specific templates; for instance, the "download" template fetches and unpacks a rootfs for a chosen distribution from remote images, sharing the host kernel while setting up isolation via kernel namespaces.
Upon lxc-start, the container's init process (often /sbin/init from the distro) runs in this isolated environment, providing a full user-space view without kernel duplication. Networking in LXC is configured declaratively in the container's config file, supporting modes like bridging via virtual ethernet (veth) pairs, where the container's interface (e.g., eth0) connects to a host bridge like br0 for shared network access (lxc.net.0.type = veth; lxc.net.0.link = br0). Macvlan mode allows direct attachment to a physical interface, enabling the container to appear as a separate device on the network, with sub-modes such as bridge or private for varying isolation levels (lxc.net.0.type = macvlan; lxc.net.0.link = eth0; lxc.net.0.mode = bridge). These options facilitate connectivity between containers and the host or external networks without full network emulation. For storage, LXC employs efficient layering techniques like overlayfs, which mounts a read-write upper directory over a read-only lower one to create a unified rootfs (e.g., lxc.rootfs.path = overlayfs:/var/lib/lxc/<name>/lower:/var/lib/lxc/<name>/upper), allowing changes without modifying the base image. Bind mounts provide another mechanism for sharing host directories into the container, specified via lxc.mount.entry (e.g., /host/dir /container/dir none bind 0 0), enabling persistent or shared data access while maintaining filesystem isolation through mount namespaces. This approach avoids the overhead of full disk images, focusing instead on lightweight, copy-on-write operations for scalability.
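Pulling the networking and storage directives above into one place, a per-container configuration might look like the following sketch; the container name, bridge, and paths are illustrative assumptions, not defaults.

```ini
# /var/lib/lxc/mycontainer/config (illustrative sketch)
lxc.uts.name = mycontainer

# Bridged networking: veth pair attached to host bridge br0
lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.flags = up

# Overlay rootfs: writable upper layer over a read-only base
lxc.rootfs.path = overlayfs:/var/lib/lxc/mycontainer/lower:/var/lib/lxc/mycontainer/upper

# Bind-mount a host directory into the container
# (the target path is relative to the rootfs)
lxc.mount.entry = /srv/shared srv/shared none bind,create=dir 0 0
```

Each lxc.net.N prefix describes one interface, so additional networks are added by incrementing the index rather than nesting sections.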

Implementation and Usage

Installation

LXC installation requires a host system running a Linux kernel version 2.6.32 or later, with control groups (cgroups) and namespaces enabled to provide the necessary isolation mechanisms. For unprivileged containers specifically, kernel version 3.12 or higher is needed to support user namespaces adequately. Kernel support can be verified by extracting the configuration from /proc/config.gz using commands like zcat /proc/config.gz | grep CONFIG_CGROUPS to check for options such as CONFIG_CGROUPS=y, CONFIG_USER_NS=y, and related features, or more comprehensively via the lxc-checkconfig tool post-installation. On Debian and Ubuntu-based distributions, LXC is available through the standard package repositories and can be installed using the Advanced Package Tool (APT) with the command sudo apt install lxc, which pulls in dependencies like liblxc1 and tools for unprivileged operation. For Fedora, installation is similarly straightforward via the DNF package manager with sudo dnf install lxc, ensuring the latest stable version from the Fedora repositories. If distribution packages are outdated or unavailable, LXC can be built from source by cloning the repository with git clone https://github.com/lxc/lxc.git, followed by running ./autogen.sh, ./configure, make, and sudo make install after installing build dependencies such as libcap-dev, libselinux1-dev, and libseccomp-dev. After installation, configuring unprivileged containers involves setting up UID and GID mappings to allow non-root users to run containers securely. This includes editing /etc/subuid and /etc/subgid to allocate subordinate IDs (e.g., root:100000:65536 for the root user) and adding ID mapping lines to /etc/lxc/default.conf or ~/.config/lxc/default.conf for user-specific setups, such as lxc.idmap = u 0 100000 65536 and lxc.idmap = g 0 100000 65536.
Additionally, to enable user namespaces for non-privileged processes, set the kernel parameter with sudo sysctl kernel.unprivileged_userns_clone=1 (or add kernel.unprivileged_userns_clone = 1 to /etc/sysctl.conf and apply with sudo sysctl -p), which is required on some distributions where this feature is disabled by default. For networking in unprivileged mode, grant veth device permissions by adding an entry like username veth lxcbr0 10 to /etc/lxc/lxc-usernet. To verify the setup, execute lxc-checkconfig, which scans the kernel for LXC-required features and reports any missing configurations, such as disabled cgroups or namespaces, ensuring the host is ready for container operations.
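The unprivileged-container setup above amounts to three small pieces of configuration. The sketch below collects them for a hypothetical user "alice"; the username and ID range are illustrative assumptions.

```ini
# /etc/subuid and /etc/subgid: allocate 65536 subordinate IDs to alice
alice:100000:65536

# ~/.config/lxc/default.conf: map container IDs 0-65535 onto that range
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536

# /etc/lxc/lxc-usernet: allow alice up to 10 veth devices on lxcbr0
alice veth lxcbr0 10
```

With these in place, a file owned by root inside the container appears as UID 100000 on the host, so a container escape lands in an unprivileged account rather than host root.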

Creating and Managing Containers

Creating and managing LXC containers involves a series of command-line operations that leverage the LXC toolkit to build, run, and maintain isolated environments on a host. The basic workflow begins with container creation using the lxc-create command, which initializes a new container's configuration and root filesystem based on a specified template. For instance, to create a container named "mycontainer" using a pre-built distribution image, the command lxc-create -t download -n mycontainer prompts for selection of a distribution, release, and architecture from the official image server at images.linuxcontainers.org. Once created, the container can be started with lxc-start -n mycontainer, which boots the container's init process in the background. To interact with a running container, lxc-attach -n mycontainer provides a shell session inside the container, allowing direct execution of commands as if on a separate system. Routine management tasks are handled through dedicated LXC commands for oversight and maintenance. The lxc-ls -f command (short for --fancy) lists all containers in a tabular view with details on their state (e.g., STOPPED, RUNNING) and IP addresses. To remove a container and its associated resources, lxc-destroy -n mycontainer deletes the container and its root filesystem, ensuring cleanup after use. For temporary suspension, lxc-freeze -n mycontainer pauses all processes within the container by leveraging the freezer cgroup, while lxc-unfreeze -n mycontainer resumes them; these operations are useful for pausing workloads without a full shutdown. LXC supports both cgroup v1 and v2 hierarchies as of version 6.0 (2025). Container behavior is customized via the configuration file located at /var/lib/lxc/<container>/config, where key-value pairs define resource limits and network interfaces.
To restrict CPU access, for example, add lxc.cgroup2.cpuset.cpus = 0-1 (for cgroup v2, the default on many modern distributions) to limit the container to the first two CPU cores; for cgroup v1, use lxc.cgroup.cpuset.cpus = 0-1. This is enforced through cgroups for precise resource allocation. Network configuration similarly uses entries like lxc.net.0.type = veth to create a virtual Ethernet pair, paired with lxc.net.0.link = lxcbr0 to bridge the container's interface to the host's default bridge for external connectivity; this setup enables the container to obtain an IP via DHCP on the host network. After editing, restart the container to apply changes. For advanced operations, LXC supports snapshots and cloning to facilitate backups and transfers. Snapshots capture the container's state using lxc-snapshot -n mycontainer, creating a named snapshot (e.g., snap0) stored under /var/lib/lxc/mycontainer/snaps/ that includes the filesystem and configuration; list them with lxc-snapshot -n mycontainer -L, restore via lxc-snapshot -n mycontainer -r snap0, or destroy with lxc-snapshot -n mycontainer -d snap0. Cloning is achieved through lxc-copy, which clones containers locally or remotely; for a local copy, lxc-copy -n source -N destination duplicates the entire root filesystem, while remote transfers require specifying paths like -P /var/lib/lxc -p /remote/path and manual transfer over SSH. These features, backed by storage backends like btrfs or LVM, enable efficient container lifecycle management without downtime in production scenarios.
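Because container behavior is driven by this plain key-value file, it is straightforward to inspect programmatically. The sketch below is not part of LXC's tooling; it simply parses the format described above (one key = value per line, # comments, repeatable keys) and reads back a resource limit.

```python
def parse_lxc_config(text):
    """Parse LXC's 'key = value' config format. Blank lines and '#'
    comments are skipped; repeatable keys (e.g. lxc.idmap) accumulate
    into lists in order of appearance."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if not sep:
            continue  # ignore malformed lines in this sketch
        config.setdefault(key.strip(), []).append(value.strip())
    return config

sample = """
# resource limits
lxc.cgroup2.cpuset.cpus = 0-1
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.idmap = u 0 100000 65536
lxc.idmap = g 0 100000 65536
"""
cfg = parse_lxc_config(sample)
print(cfg["lxc.cgroup2.cpuset.cpus"][0])  # 0-1
print(len(cfg["lxc.idmap"]))              # 2
```

Collecting repeated keys into lists mirrors how LXC itself treats directives like lxc.idmap and lxc.mount.entry, which may legitimately appear multiple times.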

Security Aspects

Isolation Techniques

LXC achieves process and resource isolation primarily through Linux kernel primitives, including namespaces, control groups (cgroups), private filesystem mounts, and privilege restrictions, enabling lightweight containment without full virtualization overhead. These mechanisms collectively prevent containers from interfering with the host system or each other, while allowing efficient sharing of the underlying kernel.

Namespace-Based Isolation

Namespaces provide isolation by partitioning kernel resources, such as process IDs and user identities, so that processes within a container perceive a separate environment from the host. The PID namespace, for instance, confines the container's process tree, hiding host processes and making the container's init (PID 1) appear as the first visible process from within. This is configured via options like lxc.namespace.clone = pid, ensuring that signals and process listings are isolated. User namespaces further enhance security by remapping user and group IDs inside the container to unprivileged IDs on the host, allowing the container's root user to operate without actual root privileges on the host. For example, a mapping such as lxc.idmap = u 0 100000 65536 shifts the container's UID 0 to host UID 100000, preventing privilege-escalation attacks. Other namespaces, like mount, network, IPC, and UTS, can be enabled similarly (e.g., lxc.namespace.clone = mount net ipc uts), isolating filesystems, networking stacks, inter-process communication, and hostname views respectively.

Resource Controls

LXC employs cgroups to enforce resource limits and prevent denial-of-service scenarios by allocating and monitoring CPU, memory, and I/O usage per container. Cgroups v1, used in older setups, allow directives like lxc.cgroup.memory.limit_in_bytes = 512M to cap memory at 512 megabytes, triggering out-of-memory kills if exceeded. In modern unified hierarchies (cgroups v2), equivalent controls such as lxc.cgroup2.memory.high = 512M provide softer limits, while lxc.cgroup2.devices.allow = c 1:3 rwm permits specific device access (here /dev/null, character device 1:3) with read/write/mknod permissions, blocking unauthorized hardware interactions. These controls ensure fair resource distribution across multiple containers on a shared host.
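As a consolidated sketch, the v2 directives above might sit together in a container's config as follows; the values are illustrative, not recommendations.

```ini
# cgroup v2 resource controls (illustrative values)
# Soft memory limit: reclaim pressure is applied above this threshold
lxc.cgroup2.memory.high = 512M
# Hard memory cap: processes are OOM-killed beyond this
lxc.cgroup2.memory.max = 1G
# Proportional CPU share under contention (1-10000, default 100)
lxc.cgroup2.cpu.weight = 100
# Allow /dev/null (char device 1:3) with read/write/mknod permissions
lxc.cgroup2.devices.allow = c 1:3 rwm
```

Pairing memory.high with memory.max gives a graceful degradation path: the container is throttled before it is killed.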

Filesystem Security

Filesystem isolation in LXC relies on private mounts and chroot-like environments to restrict container access to host resources, using bind mounts and pivot roots for a self-contained view. The lxc.rootfs directive specifies the container's root filesystem path, while lxc.mount.entry or lxc.mount.fstab configures private mounts, such as mounting /proc inside the container with proc proc proc nodev,noexec,nosuid 0 0 to prevent device nodes, setuid binaries, and code execution from /proc. Automount options like lxc.mount.auto = proc sys cgroup ensure essential directories are mounted privately, invisible to the host, thus containing any filesystem traversals or modifications within the container. This setup mimics chroot but with namespace-backed unshareability, enhancing containment without exposing the host's full directory tree.
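A minimal sketch of these filesystem directives in a container config (the rootfs path is an illustrative assumption):

```ini
# Plain directory-backed root filesystem
lxc.rootfs.path = dir:/var/lib/lxc/mycontainer/rootfs
# Mount essential pseudo-filesystems privately inside the container
lxc.mount.auto = proc sys cgroup
# An explicit fstab-style alternative for /proc, hardened with
# nodev,noexec,nosuid (the target path is relative to the rootfs):
# lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
```

The mount namespace guarantees that these mounts exist only in the container's view; the host's mount table is unchanged.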

Privilege Reduction

To minimize attack surfaces, LXC drops unnecessary capabilities and filters system calls, running containers with reduced privileges even when initiated by root. The lxc.cap.drop option removes capabilities such as sys_module and mknod (e.g., lxc.cap.drop = sys_module mknod), preventing actions like loading kernel modules or creating device nodes that could compromise the host. Complementing this, seccomp profiles specified via lxc.seccomp.profile restrict syscalls; a version 2 profile starts with the line "2", followed by a default policy such as "denylist" and per-syscall rules like "mknod errno 0" that block mknod while allowing others, or an allowlist policy that permits only essential operations. These measures collectively ensure that even compromised containers lack the privileges to affect the host or other processes.
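The version-2 seccomp policy referenced above is a plain text file, pointed to from the container config via lxc.seccomp.profile. A denylist sketch might look like the following; the blocked syscalls and error numbers are illustrative assumptions.

```
2
denylist
[all]
mknod errno 0
keyctl errno 38
```

Here "mknod errno 0" makes mknod silently report success without doing anything, while "keyctl errno 38" fails with ENOSYS, as if the kernel did not implement the call; every syscall not listed remains allowed under the denylist policy.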

Potential Risks and Mitigations

One of the primary risks in LXC deployments involves container escape vulnerabilities, particularly when using misconfigured namespaces in privileged containers. For instance, CVE-2019-5736, a flaw in the runc runtime exploited to overwrite the host runtime binary and gain root access, affected LXC users by allowing malicious code execution from within a container to compromise the host system. Similarly, privilege escalation can occur if unprivileged mode is disabled, enabling attackers to map container root to host root and bypass isolation boundaries. Historical incidents highlight these dangers; the 2019 runc escape (CVE-2019-5736) impacted LXC environments running vulnerable versions, leading to widespread patches across distributions. Additionally, kernel bugs in cgroups v1, such as those enabling denial-of-service attacks through resource exhaustion like fork bombs or memory-limit evasion, have allowed containers to disrupt host operations or other workloads. Another example is CVE-2022-0492, a cgroups v1 flaw permitting privilege escalation and potential container escapes in unpatched systems. To mitigate these risks, LXC users should always employ unprivileged containers, which map container UIDs/GIDs to non-root host ranges (e.g., allocating 65536 IDs per user) to prevent direct host access. Applying mandatory access control (MAC) profiles, such as AppArmor or SELinux, confines container processes and blocks unauthorized actions like file access or network operations. Seccomp filters can further restrict syscalls, while cgroup limits on CPU, memory, and PIDs curb resource-exhaustion attempts. Regular kernel updates are essential, including post-2025 patches enhancing namespace isolation against emerging exploits, such as those addressing use-after-free bugs in network namespaces. Best practices include implementing network isolation via per-container bridges or firewalls (e.g., rules to block inter-container traffic), enabling auditing with tools like auditd to log suspicious activities, and strictly avoiding rootful (privileged) containers for untrusted workloads.
These measures, when combined, significantly reduce the attack surface without compromising LXC's benefits.

Alternatives

LXD

LXD is a modern container and virtual machine manager built on top of LXC, introduced by Canonical in 2015 as a daemon that exposes a REST API for managing instances. It supports both system containers, which run full distributions, and application containers for lighter workloads, providing a unified interface for orchestration across scales from single hosts to clusters. Unlike direct LXC usage, LXD emphasizes ease of management through its client tool, lxc, which abstracts low-level operations into intuitive commands. Key enhancements in LXD over traditional LXC include image-based deployment, where users can launch pre-built images from remote servers such as images.linuxcontainers.org, enabling rapid provisioning without manual configuration. It supports clustering for high availability, allowing seamless distribution of workloads across multiple nodes, and storage pooling with backends like ZFS or Ceph for efficient data management and replication. Additional features encompass live migration of running instances between hosts, container snapshots for versioning and rollback, and device passthrough for direct hardware access, such as GPUs or USB devices. For instance, creating and starting an instance can be done with a single command like lxc launch images:ubuntu/24.04 myvm, which downloads and boots an Ubuntu 24.04 image. Incus is a community-driven fork of LXD, initiated in 2023 by former LXD developers and the Linux Containers project to maintain an open-source alternative under more permissive governance. It retains the core features of LXD, including REST API management, image-based deployment, clustering, live migration, and VM support, while remaining built on LXC under community-driven development. As of November 2025, the latest release is Incus 6.18, aligning with LXC 6.0 for enhanced kernel compatibility and performance.
LXD has seen significant adoption in enterprise environments, notably integrated by Canonical into MAAS for automated machine provisioning and into OpenStack via the nova-compute-lxd driver for high-performance cloud deployments. As of 2025, its latest releases, including LXD 6.5, align with LXC 6.0 to leverage improved kernel integration and performance optimizations.

Other Container Technologies

Docker, introduced in 2013, represents an application-focused platform that initially leveraged LXC as its execution driver to utilize kernel features like cgroups and namespaces for isolating processes. Over time, Docker transitioned to its own libcontainer (now runc) backend for greater control and portability, emphasizing layered images for application deployment rather than full operating system emulation. This shift highlights Docker's strengths in fostering a vast ecosystem for microservices and cloud-native development, with tools like Docker Compose and Swarm, but it is less ideal for system-level containers compared to LXC, which supports complete distributions with persistent services and configurations. Podman, released in 2018 by Red Hat, serves as a daemonless and rootless alternative to Docker, enabling users to run OCI-compliant containers without a central service, thereby improving security by avoiding privileged daemons. Containerd, originally part of Docker but now a standalone CNCF project, acts as a lightweight runtime focused on image management and execution, also supporting rootless modes for enhanced isolation in production environments. In contrast to LXC's capability for full init-system support in containers, Podman and containerd prioritize application containers with strict OCI standards, making them more aligned with ephemeral workloads but requiring additional layering for OS-like persistence. LXC and its extension LXD can integrate with Kubernetes via CRI shims like LXE, allowing them to serve as container runtimes for orchestrating system containers, though this setup remains less prevalent than containerd due to the latter's optimized performance and native Kubernetes support. Containerd's efficiency in handling high-density deployments gives it an edge in large-scale clusters, whereas LXC excels in scenarios demanding hosting of full OS environments with minimal overhead, such as testing or resource-constrained servers.
As of 2025, emerging trends point to successors of the deprecated rkt runtime, such as WebAssembly (Wasm)-based containers, which offer even lighter alternatives through bytecode compilation for near-native speed, strong sandboxing, and portability across environments without full kernel dependencies. Wasm containers, integrable with Kubernetes, target edge and serverless use cases where LXC's Linux-specific isolation may introduce unnecessary complexity, prioritizing sub-millisecond startups and reduced attack surfaces over traditional OS emulation.

References

  1. [1]
    LXC - Introduction - Linux Containers
    LXC is a userspace interface for Linux kernel containment, allowing users to create and manage containers, similar to a chroot but without a separate kernel.Getting started · Documentation · Lxcfs · Deutsch
  2. [2]
    LXC - Linux Containers - GitHub
    LXC is the well-known and heavily tested low-level Linux container runtime. It is in active development since 2008 and has proven itself in critical production ...LXC · Workflow runs · Issues 157 · Incus
  3. [3]
    LXC - News - Linux Containers
    Feb 20, 2014 · LXC 1.0 is the result of 10 months of development and over a thousand commits, including a major rework of the way LXC is structured. It's ...
  4. [4]
    Linux Container - Proxmox VE
    Nov 28, 2024 · Proxmox VE uses Linux Containers (LXC) as its underlying container technology. The “Proxmox Container Toolkit” (pct) simplifies the usage and management of LXC.
  5. [5]
    LXC - News - Linux Containers
    Aug 15, 2025 · The LXC team is pleased to announce the release of LXC 6.0.4! This is the fourth bugfix release for LXC 6.0 which is supported until June 2029.<|control11|><|separator|>
  6. [6]
    LXC | Stéphane Graber's website
    Aug 15, 2025 · This low-level container runtime and library was first released in August 2008, led to the creation of projects like Docker and today is still ...Futurfusion · What We Did · What's Coming In 2025Missing: origins | Show results with:origins<|control11|><|separator|>
  7. [7]
    A Brief History of Linux Containers - Oracle Blogs
    Nov 21, 2019 · The first fundamental building block that led to the creation of Linux containers was submitted to the Kernel by Google.Missing: early | Show results with:early
  8. [8]
  9. [9]
  10. [10]
    LXC 1.0: Unprivileged containers [7/10] | Stéphane Graber's website
    Jan 17, 2014 · Introduction to unprivileged containers. The support of unprivileged containers is in my opinion one of the most important new features of LXC ...
  11. [11]
    LXC 2.0 has been released! | Stéphane Graber's website
    Apr 6, 2016 · LXC 1.0, released February 2014 will EOL on the 1st of June 2019 · LXC 1.1, released February 2015 will EOL on the 1st of September 2016 · LXC 2.0 ...Missing: history | Show results with:history
  12. [12]
    LXC 3.0.0 has been released - Linux Containers Forum
    Mar 27, 2018 · Support for daemonized app containers. LXC has been running application container through a minimal init system since its first release in 2008.
  13. [13]
    LXC 4.0 LTS has been released - Linux Containers Forum
    Mar 25, 2020 · LXC 4.0.0 will be supported until June 2025 and our current LTS release, LXC 3.0 will now switch to a slower maintenance pace, only getting ...
  14. [14]
    LXC 5.0 LTS has been released - News - Linux Containers Forum
    Jun 17, 2022 · LXC 5.0 will be supported until June 2027 and our current LTS release, LXC 4.0 will now switch to a slower maintenance pace, only getting ...
  15. [15]
    LXC 6.0 LTS has been released - News - Linux Containers Forum
    Apr 3, 2024 · LXC 6.0 will be supported until June 2029 and our current LTS release, LXC 5.0 will now switch to a slower maintenance pace, only getting ...
  16. [16]
    LXC/LXCFS/Incus 6.0.4 LTS release | Stéphane Graber's website
    Apr 4, 2025 · We're expecting another LTS bugfix release for the 6.0 branches in the third quarter of 2025. In the mean time, Incus will keep going with its usual monthly ...
  17. [17]
    Linux Containers
    linuxcontainers.org is the umbrella project behind Incus, LXC, LXCFS, Distrobuilder and more. The goal is to offer a distro and vendor neutral environment.
  18. [18]
    cgroups(7) - Linux manual page - man7.org
    Control groups, usually referred to as cgroups, are a Linux kernel feature which allow processes to be organized into hierarchical groups.
  19. [19]
    Control Group v2 - The Linux Kernel documentation
    This is the authoritative documentation on the design, interface and conventions of cgroup v2. It describes all userland-visible aspects of cgroup including ...
  20. [20]
    Cgroup v2 Is To Be Made Official With Linux 4.5 - Phoronix
    Jan 11, 2016 · The cgroup v2 interface will be made official with the in-development Linux 4.5 kernel. Maintainer Tejun Heo sent in the cgroup changes today ...
  21. [21]
    namespaces(7) - Linux manual page - man7.org
    It is a time namespace, and there is a process that refers to the namespace via a /proc/pid/ns/time_for_children symbolic link. • It is an IPC namespace, and a ...
  22. [22]
    capabilities(7) - Linux manual page - man7.org
    The kernel must provide system calls allowing a thread's capability sets to be changed and retrieved. • The filesystem must support attaching capabilities to an ...
  23. [23]
    Seccomp BPF (SECure COMPuting with filters)
    Seccomp filtering provides a means for a process to specify a filter for incoming system calls. The filter is expressed as a Berkeley Packet Filter (BPF) ...
  24. [24]
    seccomp(2) - Linux manual page - man7.org
    The seccomp() system call operates on the Secure Computing (seccomp) state of the calling process. Currently, Linux supports the following operation values: ...
  25. [25]
    AppArmor - The Linux Kernel documentation
    AppArmor is MAC style security extension for the Linux kernel. It implements a task centered policy, with task “profiles” being created and loaded from user ...
  26. [26]
    LXC - Documentation - Linux Containers
    lxccontainer.h is our public C API. Some of the best examples of API usage are the bindings and the LXC tools themselves.
  27. [27]
    LXC - Getting started - Linux Containers
    Privileged containers are the easiest way to get started learning about and experimenting with LXC, but they may not be appropriate for production use.
  28. [28]
    LXC - Manpages - lxc.conf.5 - Linux Containers
    Jun 3, 2021 · This configuration file is used to set values such as default lookup paths and storage backend settings for LXC.
  29. [29]
    LXC - Manpages - lxc.container.conf.5 - Linux Containers
    Jun 3, 2021 · LXC is the well-known and heavily tested low-level Linux container runtime. It is in active development since 2008 and has proven itself in critical production ...
  30. [30]
    Containerization - Fedora Docs
    Nov 5, 2022 · Libvirt LXC is natively supported by Fedora Server (via libvirt as default virtualization tool). LXC (linux containers). Its characteristics ...
  31. [31]
    lxc.conf(5) - Linux manual page
    Summary of LXC configuration options (LXC man page lxc.conf(5)).
  32. [32]
    lxc-snapshot(1) - Linux manual page - man7.org
    lxc-snapshot creates, lists, and restores container snapshots. Snapshots are stored as snapshotted containers under the container's configuration path.
  33. [33]
    LXC - Manpages - lxc-copy.1 - Linux Containers
    lxc-copy creates copies of existing containers. Copies can be complete clones of the original container. In this case the whole root filesystem of the container ...
  34. [34]
    lxc(7) - Linux manual page - man7.org
    It provides resource management through control groups and resource isolation via namespaces. lxc, aims to use these new functionalities to provide a userspace ...
  35. [35]
  36. [36]
    Databricks Security Advisory: Critical Runc Vulnerability (CVE-2019 ...
    Feb 19, 2019 · This vulnerability affects many container runtimes, including Docker and LXC. The Databricks security team has evaluated the vulnerability and ...
  37. [37]
    LXC - Security - Linux Containers
    To make unprivileged containers work, LXC interacts with 3 pieces of setuid code: ... release date for you and the Linux distribution community. Project ...
  38. [38]
    [PDF] Abusing Privileged and Unprivileged Linux Containers - NCC Group
    On both LXC and Docker, this is an easy way to DoS other containers running on the same host. Forgoing ulimits, two other DoS conditions are often ...
  39. [39]
    The Linux Kernel in 2025: Security Enhancements, Emerging ...
    Jul 16, 2025 · Let's dig into the state of Linux kernel security development in 2025, examining game-changing features, the new wave of threats, and practical ways to keep ...
  40. [40]
    CVE-2025-38052 - Red Hat Customer Portal
    CVE-2025-38052 is a Linux kernel vulnerability in network namespaces, a slab-use-after-free in TIPC crypto, potentially causing crashes or memory leaks.
  41. [41]
    [PDF] Application Container Security Guide
    This publication explains the potential security concerns associated with the use of containers and provides recommendations for addressing these concerns.
  42. [42]
    Getting started with LXD – the container lightervisor - Ubuntu
    Apr 28, 2015 · LXD is what we call our container “lightervisor”. The core of LXD is a daemon which offers a REST API to drive full system containers just like you'd drive ...
  43. [43]
    LXD documentation
    LXD ([lɛks'di:] ) is a modern, secure and powerful system container and virtual machine manager. It provides a unified experience for running and managing ...
  44. [44]
    lxd and lxc - Ubuntu documentation
    LXD is a more intuitive and user-friendly tool aimed at making it easy to work with Linux containers. It is an alternative to LXC's tools and distribution ...
  45. [45]
    Introduction to nova-compute-lxd - Canonical
    In order to facilitate the OpenStack integration with LXD we have created a plugin called nova-compute-lxd (nclxd). The plugin uses the REST API to interact ...
  46. [46]
    LXD 6.5 has been released - Ubuntu Discourse
    Jul 22, 2025 · The LXD team would like to announce the release of LXD 6.5! This is the fifth feature release in the 6.x series. It includes many new features ...
  47. [47]
    Evolution of Docker from Linux Containers - Baeldung
    Nov 16, 2020 · Linux Containers, often referred to as LXC, was perhaps the first implementation of a complete container manager. It's operating-system-level ...
  48. [48]
    LXC vs Docker: Why Docker is Better | UpGuard
    Jul 3, 2025 · While it started out being built on top of LXC, Docker later moved beyond LXC containers to its own execution environment called libcontainer.
  49. [49]
    The History of Containers - Red Hat
    Aug 28, 2015 · Docker built on all of the incremental developments that came before it and upped the ante by (originally) wrapping the LXC userspace tools ...
  50. [50]
    Top 12 Most Useful Docker Alternatives for 2025 [List] - Spacelift
    Jun 2, 2025 · Podman is an open tool for working with containers and images. It's ... LXC containers are system containers that include a full operating system.
  51. [51]
    Top Docker Alternatives in 2025: A Complete Guide - DataCamp
    Jul 2, 2025 · While Podman targets Docker compatibility, CRI-O and containerd focus specifically on Kubernetes production environments. These runtimes strip ...
  52. [52]
    LXE released, a Kubernetes integration of LXC/LXD - Announcements
    Oct 4, 2018 · Hi everybody, as promised we're happy to announce LXE, a Kubernetes integration of LXC/LXD - or in other words a LXD shim.
  53. [53]
    What's Next for Containerization Technology? - HAKIA.com
    Apr 29, 2025 · Explore the future of containerization beyond Docker and Kubernetes. Discover key trends like WebAssembly, serverless computing, unikernels, ...