Proxmox Virtual Environment

Proxmox Virtual Environment (Proxmox VE) is an open-source server virtualization management platform designed for enterprise use, providing a complete solution for creating and managing virtual machines (VMs) and Linux containers (LXC) on a single server or cluster. It is based on Debian GNU/Linux and tightly integrates the KVM hypervisor for full virtualization of operating systems like Linux and Windows, alongside LXC for lightweight containerization, all accessible via an intuitive web-based interface. Developed by Proxmox Server Solutions GmbH, a Vienna-based company founded in 2005, Proxmox VE emphasizes high availability, software-defined storage, and networking to support scalable IT infrastructures. The platform was first publicly released as version 0.9 in April 2008, with the first stable version, 1.0, following in October 2008; it combined early container-based virtualization (initially using OpenVZ before transitioning to LXC in later versions) with KVM and introduced a centralized web management tool that set it apart from contemporaries. Over the years, it has evolved through regular updates aligned with Debian releases, incorporating features like live migration, clustered high-availability setups, and integrated backup solutions to optimize resource utilization and minimize downtime. The latest major version, Proxmox VE 9.0, released in August 2025, is built on Debian 13 (Trixie) and brings improvements such as tighter Ceph storage integration and strengthened security features for modern data centers. Proxmox VE stands out for its hyper-converged infrastructure capabilities, allowing storage, networking, and compute resources to be managed cohesively without proprietary hardware dependencies, making it a cost-effective alternative for businesses seeking robust virtualization without licensing fees. Its community-driven development model, supported by optional enterprise subscriptions for stable repositories and professional support, has fostered widespread adoption in sectors ranging from small businesses to large-scale cloud environments.

History

Origins and Development

Proxmox Virtual Environment (Proxmox VE) originated from the efforts of Proxmox Server Solutions GmbH, a company founded in February 2005 in Vienna, Austria, by brothers Martin Maurer and Dietmar Maurer. The company's initial focus was on developing efficient, open-source Linux-based software to enable secure, stable, and scalable IT infrastructures, addressing the high costs and licensing restrictions of proprietary virtualization platforms like VMware. This motivation stemmed from the need for accessible alternatives that small and medium-sized enterprises could deploy without significant financial barriers, leveraging free and open-source technologies. Development of Proxmox VE began in 2007, with the project emphasizing the integration of container-based virtualization via OpenVZ and full virtualization through the KVM hypervisor, all managed via an intuitive web-based interface. The first public release, version 0.9, arrived on April 15, 2008, marking the debut of this unified platform built directly on Debian GNU/Linux as its base distribution to ensure stability and broad hardware compatibility from the outset. Key early contributors included the Maurer brothers, who led the core development, alongside an emerging community of open-source enthusiasts contributing to initial testing and refinements. Over time, Proxmox VE evolved from a purely community-driven initiative to a hybrid model supported by enterprise services from Proxmox Server Solutions GmbH, introducing subscription-based options for stable repositories, professional support, and enhanced features while maintaining the core codebase under the GNU Affero General Public License version 3 (AGPLv3). This licensing choice, adopted early on, ensured that the software remained freely available for modification and redistribution, fostering widespread adoption and ongoing contributions from the open-source community. The Debian foundation continued to underpin this growth, providing a reliable ecosystem for integrating virtualization tools without deviating from the project's open-source ethos.

Major Releases and Milestones

Proxmox VE 2.0, released in March 2012, marked a significant evolution by introducing a REST API for programmatic management and new clustering capabilities powered by Corosync for enhanced reliability. In October 2015, Proxmox VE 4.0 shifted its base to Debian 8 (Jessie), replaced OpenVZ with LXC to provide more lightweight and flexible container virtualization, bolstered ZFS support with version 0.6.5.3 including root filesystem capabilities, and implemented watchdog-based high-availability fencing via a new HA Manager for streamlined cluster resilience. Proxmox VE 6.0 arrived in July 2019 on Debian 10 (Buster), featuring integrated Ceph Nautilus storage with a dedicated dashboard for cluster-wide monitoring and a redesigned modern graphical user interface that improved usability across web and mobile access. The 8.x series, beginning with the June 2023 release of Proxmox VE 8.0 on Debian 12 (Bookworm), deepened Proxmox Backup Server integration for incremental backups, added VirtIO-fs for efficient shared filesystem access between host and guests, and matured software-defined networking (SDN) with better zone management and VLAN awareness. Proxmox VE 9.0, launched in August 2025 on Debian 13 (Trixie), enabled VM snapshots on thick-provisioned LVM storage for greater flexibility in backup strategies, added SDN "fabrics" for simplified multi-tenant network orchestration, and incorporated high-availability affinity rules to optimize resource placement in clusters. Key milestones include the project's 10-year anniversary in April 2018, celebrating its growth from open-source roots to a robust enterprise platform with widespread adoption in virtualization environments.

Overview

Core Functionality

Proxmox Virtual Environment (Proxmox VE) is an open-source virtualization platform based on Debian GNU/Linux, designed to manage KVM hypervisor-based virtual machines and LXC Linux containers within a single, unified interface. This integration allows administrators to deploy and oversee both full virtual machines and lightweight containers efficiently on the same host system, leveraging QEMU/KVM for hardware-assisted virtualization and LXC for OS-level containerization. At its core, Proxmox VE enables centralized orchestration of compute, storage, and networking resources, particularly in hyper-converged infrastructure configurations where these elements are consolidated on shared server nodes to simplify management and scale operations. This approach supports software-defined storage solutions like Ceph or ZFS and virtual networking via bridges or SDN, all coordinated through the platform's built-in services without requiring separate tools. The platform's primary management interface is a web-based graphical user interface (GUI), accessible securely via HTTPS on TCP port 8006, which provides comprehensive dashboards for resource monitoring, VM/container creation, and configuration tasks. For advanced users and automation, Proxmox VE includes a robust command-line interface (CLI) with tools such as pvesh for interacting with the REST API, qm for QEMU/KVM virtual machine operations (e.g., start, stop, snapshot), and pct for LXC container management (e.g., enter, resize, migrate). These CLI utilities support scripting in environments like Bash or integration with orchestration tools. Proxmox VE is installed directly on bare-metal x86/AMD64 hardware, delivering a complete, kernel-optimized operating system with built-in support for virtualization extensions like Intel VT-x or AMD-V, ensuring high performance for enterprise workloads.
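
As a minimal sketch of how the REST API on port 8006 can be driven from Python, the following example performs ticket-based authentication and then issues the API counterpart of `qm list`. The hostname, node name "pve1", and credentials are hypothetical placeholders; TLS verification is disabled only because a fresh installation ships a self-signed certificate.

    import requests

    HOST = "https://pve.example.com:8006"  # hypothetical node address

    # Authenticate via the same endpoint the web GUI uses on port 8006.
    auth = requests.post(f"{HOST}/api2/json/access/ticket",
                         data={"username": "root@pam", "password": "secret"},
                         verify=False, timeout=10).json()["data"]
    cookies = {"PVEAuthCookie": auth["ticket"]}
    # auth["CSRFPreventionToken"] must additionally be sent as the
    # CSRFPreventionToken header on write requests (POST/PUT/DELETE).

    # API counterpart of `qm list` on node "pve1": enumerate QEMU/KVM guests.
    vms = requests.get(f"{HOST}/api2/json/nodes/pve1/qemu",
                       cookies=cookies, verify=False, timeout=10).json()["data"]
    for vm in vms:
        print(vm["vmid"], vm.get("name", ""), vm["status"])

The same API paths map one-to-one onto the pvesh tool, e.g. pvesh get /nodes/pve1/qemu.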

Target Use Cases

Proxmox Virtual Environment (VE) is deployed in enterprise data centers to provide cost-effective orchestration of virtual machines and containers, leveraging its open-source architecture under the GNU AGPLv3 license, which eliminates proprietary licensing expenses. This makes it an attractive alternative to VMware ESXi for small and medium-sized businesses (SMBs), where budget constraints and the need for full-featured virtualization without ongoing costs drive migrations. For instance, in Brazil, public health facilities rely on Proxmox VE to deliver an enterprise-grade high-availability database across distributed sites. In homelab and development setups, Proxmox VE enables users to test multi-operating-system environments efficiently, supported by its free licensing model that avoids the fees associated with commercial hypervisors. Its intuitive web-based interface facilitates rapid prototyping and experimentation on personal hardware, allowing developers to simulate complex infrastructures without financial barriers. For edge computing and remote sites, Proxmox VE's lightweight resource footprint, requiring minimal overhead for mixed workloads, and capabilities like serial console access for headless management suit environments with limited connectivity. These attributes enable offline operation post-installation, ideal for distributed deployments such as manufacturing facilities or isolated outposts. A notable example is Eomnia's renewable-powered data center in Antarctica, which uses a Proxmox VE cluster for storage and recovery in extreme remote conditions. Proxmox VE facilitates hybrid cloud integrations by allowing API-driven exports and backups of workloads to public clouds like AWS or Azure, enabling burst-capacity scaling during peak demands; this approach supports seamless tiering of resources for off-site redundancy, combining on-premises control with cloud elasticity. Case studies also highlight its adoption in education and healthcare. In the classroom, the Austrian technical school HTL Leonding employs Proxmox VE to teach computer networking, supporting 450 students across 18 classes with a clustered environment for hands-on virtualization exercises, and academic institutions in Ukraine have developed Proxmox-based clouds for training computer science educators, emphasizing scalable resource allocation for pedagogical simulations. In healthcare, secure isolated workloads are managed effectively, as demonstrated by the Portuguese online pharmacy Farmácia Nova da Maia, which uses Proxmox VE with ZFS for high-availability operations ensuring data integrity and compliance. By 2025, such implementations underscored its role in protecting sensitive patient data through containerized isolation and redundancy.

Architecture

Underlying Platform

Proxmox Virtual Environment (Proxmox VE) is built upon Debian GNU/Linux as its foundational operating system, providing a stable and widely supported base for virtualization and containerization tasks. The latest release, Proxmox VE 9.0, is based on Debian 13 "Trixie", incorporating updated packages and security enhancements from this Debian version to ensure compatibility and reliability in enterprise and homelab environments. At the kernel level, Proxmox VE employs a custom Linux kernel derived from Ubuntu's kernel tree, with version 6.14.8-2 serving as the stable default in Proxmox VE 9.0. This kernel includes essential modules for hardware virtualization, such as KVM (Kernel-based Virtual Machine), enabling hardware-assisted full virtualization of guest operating systems. Additional patches in the Proxmox kernel optimize performance for storage, networking, and container isolation, while optional newer kernels like 6.17 are available for advanced users seeking cutting-edge features. Core virtualization technologies are integrated directly into the platform without relying on external hypervisor abstractions. QEMU provides full virtual machine emulation, supporting a wide range of guest architectures and hardware passthrough, while LXC handles OS-level containerization for lightweight, efficient deployment of Linux-based workloads. These components leverage the kernel's capabilities to deliver both full isolation in VMs and shared-kernel efficiency in containers. Package management in Proxmox VE utilizes the Debian APT system, allowing seamless installation and updates of software components. Users can access the no-subscription community repository for free, timely updates derived directly from Debian and Proxmox testing, or opt for the enterprise repository, which offers rigorously tested packages with extended stability guarantees for production use. This dual-repository model ensures flexibility while prioritizing security and reliability in updates. Built-in security features enhance the underlying platform's robustness from the ground up. AppArmor mandatory access control profiles are enforced for LXC containers, restricting process capabilities and file access to mitigate potential exploits in containerized environments. Additionally, two-factor authentication (2FA) is natively supported via TOTP (Time-based One-Time Password) methods, such as those compatible with Google Authenticator, adding a critical layer of protection for administrative access to the Proxmox VE web interface and API.
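
Since TOTP is an open standard, its mechanics are easy to demonstrate. The self-contained Python sketch below (using an example base32 secret, not a real credential) computes the same six-digit codes an authenticator app would display, following RFC 6238:

    import base64
    import hmac
    import struct
    import time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        # Decode the base32 secret shared during 2FA enrollment.
        key = base64.b32decode(secret_b32.upper())
        # Number of elapsed time steps since the Unix epoch (RFC 6238).
        counter = int(time.time()) // period
        digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
        # Dynamic truncation as specified by RFC 4226 (HOTP).
        offset = digest[-1] & 0x0F
        code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # Example secret only; a real secret is generated at enrollment time.
    print(totp("JBSWY3DPEHPK3PXP"))

Because both sides derive the code from the shared secret and the current time, the server can verify a login factor without any network round-trip to the authenticator device.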

Key Components and Services

In Proxmox Virtual Environment, persistent configuration lives in the /etc/pve directory, which is backed by the Proxmox Cluster File System (pmxcfs), a database-driven filesystem replicated in real time to all cluster nodes; the pve-manager package supplies the management stack built on top of it, covering API request handling, user authentication against backends such as PAM or LDAP, and realm and permission configuration. The pvedaemon operates as the privileged REST API daemon, running under root privileges to execute operations requiring elevated access, such as VM and container lifecycle management (start, stop, and migration) through integration with QEMU for virtual machines and LXC for containers. It delegates incoming requests to worker processes (three by default) to handle these tasks concurrently, ensuring secure processing of privileged commands without direct exposure from the web interface. Complementing pvedaemon, the pveproxy daemon functions as the unprivileged API proxy, listening on TCP port 8006 over HTTPS and running as the www-data user with restricted permissions to serve the web-based management interface; it forwards requests needing higher privileges to the local pvedaemon instance, maintaining security isolation while enabling cluster-wide API access from any node. Storage backends in Proxmox VE are configured through datacenter-wide storage pools, supporting diverse types such as LVM-thin for thin-provisioned local block storage with snapshot and clone capabilities, directory-based storage for file-level access on existing filesystems, and iSCSI initiators for shared block-level storage over networks. These backends are defined in /etc/pve/storage.cfg, allowing flexible pooling of local and remote resources for VM disks, container volumes, and ISO files without manual mounting in most cases; for instance, LVM-thin pools enable overcommitment of storage space, while iSCSI targets can be discovered dynamically via the pvesm tool. Status collection is handled by the pvestatd daemon, which continuously queries and aggregates real-time data for all resources, including VMs, containers, and storage pools, broadcasting updates for dashboard displays. This service works alongside the standard syslog facilities of the Debian base system, which capture operational events like service starts, errors, and resource issues to support troubleshooting and performance oversight.
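
As an illustration of how external tooling can consume these services, the short Python sketch below (hypothetical hostname and API token) queries pveproxy for the cluster-wide resource status that pvestatd aggregates. API tokens are sent in an Authorization header and, unlike ticket logins, need no CSRF token.

    import requests

    # Hypothetical values: substitute your own host and API token.
    HOST = "https://pve.example.com:8006"
    TOKEN = "root@pam!monitor=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

    session = requests.Session()
    session.headers["Authorization"] = f"PVEAPIToken={TOKEN}"
    session.verify = False  # default certificates are self-signed

    # /cluster/resources returns the aggregated view that feeds the GUI.
    resp = session.get(f"{HOST}/api2/json/cluster/resources", timeout=10)
    resp.raise_for_status()
    for res in resp.json()["data"]:
        if res["type"] in ("qemu", "lxc"):
            print(res["id"], res.get("name", ""), res["status"])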

Features

Virtualization Technologies

Proxmox Virtual Environment (Proxmox VE) supports two primary virtualization technologies: full virtualization via the Kernel-based Virtual Machine (KVM) hypervisor and OS-level virtualization via Linux Containers (LXC). These options allow users to deploy virtual machines (VMs) and containers tailored to different workload requirements, with KVM providing hardware emulation for broad OS compatibility and LXC offering lightweight isolation for Linux-based environments. KVM in Proxmox VE leverages QEMU for device emulation, enabling the creation of fully virtualized VMs that support a wide range of guest operating systems, including Windows, Linux distributions, and BSD variants. The hypervisor utilizes paravirtualized drivers, such as VirtIO for block, network, and other devices, which reduce emulation overhead by allowing guests to communicate directly with the host kernel for improved I/O performance. Live migration is supported for running VMs, facilitating seamless transfers between cluster nodes without downtime when shared storage is available. Additionally, snapshotting capabilities, including live snapshots that capture VM memory state, enable point-in-time backups and quick rollbacks. LXC provides lightweight, OS-level virtualization specifically for Linux guests, sharing the host kernel to minimize resource usage while isolating processes, filesystems, and network stacks. Proxmox VE's Proxmox Container Toolkit (pct) manages LXC instances, supporting unprivileged containers by default through user namespaces, which map container UIDs and GIDs to non-privileged ranges on the host for enhanced security against privilege escalation attacks. Bind mounts allow efficient sharing of host directories or resources with containers, enabling data access without full filesystem duplication. This setup suits applications requiring low-latency execution, such as web servers or microservices. Proxmox VE 9.1 integrates support for Open Container Initiative (OCI) images, allowing users to pull images from registries like Docker Hub or upload them manually to create LXC container templates. These images are automatically converted to the LXC format via the Proxmox Container Toolkit, enabling provisioning as full system containers or lightweight application containers optimized for microservices with minimal resource footprint. This facilitates seamless deployment of standardized applications, such as databases or API services, directly through the GUI or CLI, bridging container build pipelines with Proxmox environments. In comparison, KVM VMs offer stronger isolation suitable for heterogeneous or security-sensitive workloads but introduce some performance overhead in CPU and I/O operations relative to native execution, due to the emulation layer. LXC containers, by contrast, achieve performance close to native execution with lower overhead than full virtualization, though they are limited to Linux guests and provide process-level rather than hardware-level isolation. Proxmox VE includes a template system for rapid deployment, allowing users to create reusable templates from existing VMs or containers, which can then be cloned to instantiate multiple instances. Templates can be derived from ISO installations for OS images or imported from Open Virtualization Format (OVF) files for compatibility with other platforms, streamlining provisioning in both standalone and clustered setups.
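
As a sketch of how template-based provisioning can be scripted (the host, token, node name "pve1", template VMID 9000, and new VMID 123 are all hypothetical), the API call below mirrors qm clone:

    import requests

    HOST = "https://pve.example.com:8006"  # hypothetical node
    TOKEN = "root@pam!deploy=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

    s = requests.Session()
    s.headers["Authorization"] = f"PVEAPIToken={TOKEN}"
    s.verify = False  # default self-signed certificate

    # API counterpart of `qm clone 9000 123 --name web01`: create a
    # linked clone of template 9000 as new VM 123 on node pve1
    # (full=1 would copy all disks instead of referencing the template).
    r = s.post(f"{HOST}/api2/json/nodes/pve1/qemu/9000/clone",
               data={"newid": 123, "name": "web01", "full": 0},
               timeout=10)
    r.raise_for_status()
    print("task:", r.json()["data"])  # UPID of the asynchronous clone task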

Clustering and High Availability

Proxmox Virtual Environment supports multi-node clustering to enable centralized management and high availability for virtual machines (VMs) and containers. The cluster is built on Corosync for reliable cluster communication and membership, which handles heartbeat messaging and node synchronization across the network. Resource management is provided by the built-in ha-manager stack rather than an external manager such as Pacemaker: a cluster resource manager (pve-ha-crm) tracks the desired state of HA-managed VMs and containers, while a local resource manager (pve-ha-lrm) on each node carries out the resulting start, stop, and migration actions. Cluster decisions require quorum, a majority of node votes (for example, two out of three), which prevents conflicting actions in partitioned networks; at least three nodes are recommended for stable operation so that quorum survives a single node failure. High availability (HA) allows configured VMs and containers to be restarted automatically on healthy nodes if the hosting node fails, limiting downtime to the fencing timeout plus the time needed to restart the guest. This process relies on fencing to isolate faulty nodes and prevent data corruption; since Proxmox VE 4.0 the default method is watchdog-based self-fencing, in which a hardware watchdog (or the softdog kernel module as a fallback) resets an unresponsive node, and external fencing devices such as IPMI (Intelligent Platform Management Interface) remote power control can supplement this. Live migration moves running VMs between nodes without interruption, provided shared storage is available for disk images; containers, which share the host kernel, are instead migrated in restart mode. The network connecting nodes should have low latency to maintain synchronization during the transfer of memory pages and CPU state, and migration time depends on the VM's RAM size and rate of change, network bandwidth, and options such as compression. To avoid split-brain scenarios where multiple nodes attempt concurrent access to shared resources, a node that loses quorum stops its HA services and is reset by its watchdog, so fencing does not depend on communication with other nodes; this is complemented by cluster-wide locks coordinated through the Proxmox Cluster File System, which grant exclusive control of HA resources and prevent conflicting recovery actions during failover.
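
A hedged sketch of triggering a live migration through the API (hypothetical host, token, and node names), equivalent to running qm migrate with the --online flag:

    import requests

    HOST = "https://pve1.example.com:8006"  # hypothetical source node
    TOKEN = "root@pam!ops=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

    s = requests.Session()
    s.headers["Authorization"] = f"PVEAPIToken={TOKEN}"
    s.verify = False

    # Live-migrate running VM 100 from node pve1 to node pve2
    # (`qm migrate 100 pve2 --online`); shared storage avoids disk copies.
    r = s.post(f"{HOST}/api2/json/nodes/pve1/qemu/100/migrate",
               data={"target": "pve2", "online": 1},
               timeout=10)
    r.raise_for_status()
    print("migration task:", r.json()["data"])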

Storage Management

Proxmox Virtual Environment (Proxmox VE) employs a flexible storage model that allows virtual machine (VM) images, container data, and other resources to be provisioned across local or shared backends, enabling efficient resource utilization in both single-node and clustered setups. This model supports a variety of storage types, from simple directory-based to advanced distributed systems, with built-in tools for configuration via the web interface or command-line utilities like pvesm. Among the supported filesystems, ZFS stands out for its advanced capabilities in snapshotting, cloning, and deduplication, making it ideal for local storage needs. ZFS datasets are used to store VM images in raw format and container data volumes, providing efficient space management through features like copy-on-write snapshots that capture VM states without full duplication. For distributed environments, Ceph integration offers robust block and object storage, utilizing RADOS Block Device (RBD) images for high-performance, scalable VM disks that support thin provisioning and replication across nodes. Ceph's architecture ensures data redundancy via placement groups and CRUSH maps, allowing Proxmox VE to manage Ceph clusters directly for hyper-converged deployments. Storage in Proxmox VE is organized into pools, which logically group underlying backends such as local directories, LVM volumes, or remote protocols including NFS, iSCSI, and GlusterFS. These pools enable unified access to diverse storage resources, with content types like VM disk images, ISO files, and backups selectable per pool to enforce access controls. Thin provisioning is facilitated through LVM-thin pools, which allocate blocks only upon writing to optimize space usage on physical volumes, or via the qcow2 image format on directory or NFS backends, which supports dynamic growth up to a predefined limit. Replication enhances data redundancy for guests on local storage by periodically syncing VM disks between cluster nodes, minimizing downtime during migrations or failures. This feature requires ZFS-backed local storage: it leverages ZFS send/receive streams for efficient incremental transfers, scheduled via Proxmox VE's built-in job manager accessible through the GUI or the pvesr CLI tool. Jobs can be configured with schedules, such as hourly syncs, and a guest can be replicated to multiple target nodes, ensuring consistent data availability across nodes without interrupting live operations. For enhanced reliability with shared block storage like iSCSI, Proxmox VE supports Multipath I/O (MPIO) to aggregate multiple physical paths to a logical unit number (LUN), providing failover and load balancing. MPIO is configured using the multipath-tools package, which detects paths and applies path-selection policies such as round-robin or queue-length for I/O distribution, while options in multipath.conf (for example, no_path_retry or vendor-recommended device settings) tune failover and queuing behavior under high load. This setup ensures continuous access even if individual paths fail and is commonly used with enterprise storage arrays.
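
To illustrate snapshot handling on a snapshot-capable backend such as ZFS, LVM-thin, or Ceph RBD, the minimal Python sketch below (hypothetical host, token, node, and VMID) mirrors qm snapshot:

    import requests

    HOST = "https://pve.example.com:8006"
    TOKEN = "root@pam!storage=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

    s = requests.Session()
    s.headers["Authorization"] = f"PVEAPIToken={TOKEN}"
    s.verify = False

    # Take a copy-on-write snapshot of VM 100, including its RAM state
    # (`qm snapshot 100 pre-upgrade --vmstate 1`).
    r = s.post(f"{HOST}/api2/json/nodes/pve1/qemu/100/snapshot",
               data={"snapname": "pre-upgrade", "vmstate": 1},
               timeout=10)
    r.raise_for_status()
    print("snapshot task:", r.json()["data"])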

Backup and Restore

Proxmox Virtual Environment employs the vzdump tool as its primary mechanism for performing backups of virtual machines (VMs) and Linux containers (LXC). This integrated utility generates consistent full backups that encompass the complete configuration files and disk data for each VM or LXC, ensuring data integrity without partial captures. Backups can be initiated through the web-based graphical user interface (GUI) or the command-line interface (CLI) using the vzdump command, allowing administrators to schedule automated jobs or execute them manually as needed. The vzdump tool supports multiple backup modes to balance disruption against consistency: stop mode shuts down the guest before capturing the data; suspend mode temporarily freezes the guest for the duration of the backup; and snapshot mode enables live backups with minimal downtime, using QEMU's built-in live-backup mechanism for running VMs and storage-level snapshots for containers on snapshot-capable storage. Resulting archives use the .vma format for VMs, which efficiently stores sparse files and out-of-order disk data, and the .tar format for containers, with optional compression via gzip, lzo, or zstd. Archives can be directed to any configured storage pool that allows the backup content type, such as local directories, NFS or CIFS shares, or a Proxmox Backup Server. Proxmox VE supports incremental backups when targeting Proxmox Backup Server (PBS) as the storage backend, utilizing dirty-bitmap tracking within QEMU to identify and transfer only modified disk blocks after an initial full backup; this reduces data transfer volumes and storage requirements for subsequent backups, while traditional vzdump backups to other storages remain full-only. Integration with PBS provides advanced data protection through chunk-based repositories that employ content-defined deduplication to eliminate redundant data across multiple backups, minimizing overall storage footprint. PBS supports client-side encryption using AES-256 in GCM mode, with encryption keys managed on the Proxmox VE host before transmission, so data remains secure even on untrusted storage; configurable retention policies, such as keep-last or keep-hourly schemes, automatically prune obsolete backup snapshots directly from the backup job configuration. This seamless integration treats PBS as a native storage type in Proxmox VE, enabling unified management via the GUI. Restore operations allow straightforward recovery of full VMs or containers by importing the backup archive through the GUI or CLI tools (qmrestore for VMs and pct restore for containers), which recreate the original configuration and attach restored disks to the target storage. The qemu-guest-agent installed within a VM enables application-consistent backups by quiescing filesystems before the backup is taken, and PBS verifies stored data chunks via checksums to confirm that no transmission errors or corruption occurred. These processes support overwriting existing guests or creating new ones, with options for selective disk restoration.
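
A minimal sketch of starting a backup job through the API (hypothetical host, token, node, and storage names; the target storage must allow the backup content type), equivalent to running vzdump from the CLI:

    import requests

    HOST = "https://pve.example.com:8006"
    TOKEN = "root@pam!backup=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

    s = requests.Session()
    s.headers["Authorization"] = f"PVEAPIToken={TOKEN}"
    s.verify = False

    # API counterpart of
    # `vzdump 100 --storage local --mode snapshot --compress zstd`.
    r = s.post(f"{HOST}/api2/json/nodes/pve1/vzdump",
               data={"vmid": 100, "storage": "local",
                     "mode": "snapshot", "compress": "zstd"},
               timeout=10)
    r.raise_for_status()
    print("backup task:", r.json()["data"])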

Networking Capabilities

Proxmox Virtual Environment (VE) leverages the Linux network stack to provide flexible networking options for virtual machines (VMs) and containers, enabling configurations from basic bridging to advanced software-defined setups. Network interfaces are managed through the /etc/network/interfaces file or the web-based GUI, allowing administrators to define bridges, bonds, and VLANs directly on the host. Bridge-based networking forms the foundation of the connectivity model, using Linux bridges such as the default vmbr0 to connect VMs and containers to the physical network; these bridges act as virtual switches to which guest network interfaces attach with their own MAC addresses, so guests behave as if directly connected to the physical LAN. VLAN tagging (IEEE 802.1Q) can be applied to any network device, including NICs, bonds, or bridges, to segment traffic without separate physical interfaces: a VLAN sub-interface such as vmbr0.10 carries only traffic tagged with VLAN ID 10, while a VLAN-aware bridge lets each guest interface specify its own tag. Bonding for link aggregation, particularly using LACP (802.3ad mode), combines multiple NICs into a single logical interface for increased bandwidth and redundancy, though it requires matching configuration on the upstream switch. Software-Defined Networking (SDN), introduced as an experimental feature in the 6.x series and fully integrated in the 8.x series, extends these capabilities with overlay networks for enhanced isolation and scalability across clusters. SDN supports EVPN-VXLAN configurations, where Ethernet VPN (EVPN) uses BGP for layer-3 routing over VXLAN tunnels, enabling multi-zone isolation that separates tenant networks without physical segmentation; controllers such as EVPN handle dynamic routing and peering, while Open vSwitch (OVS) serves as an alternative to native Linux bridges for advanced flow-based forwarding. Configurations are stored in /etc/pve/sdn and synchronized across cluster nodes, allowing GUI-based zone and VNet definitions for automated deployment. The integrated firewall applies rules at multiple levels: datacenter-wide for cluster policies, host-specific for node traffic, and per VM or container for granular control. The newer nftables-based proxmox-firewall service, opt-in in recent versions, supports stateful filtering, rate limiting to mitigate DoS attacks, and anti-spoofing via MAC and IP validation on ingress traffic, and rules can reference VNets in SDN setups to ensure consistent enforcement across virtual networks. For cluster communication, Proxmox VE uses Corosync over dedicated interfaces to minimize latency and contention with guest traffic. Since version 6.0, the default Kronosnet transport employs unicast UDP for reliable messaging, simplifying deployments in environments without multicast support, although the older multicast and unicast UDP transports remain configurable for compatibility; a separate NIC is recommended for this traffic to ensure the consistent low latency essential for quorum and synchronization.
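
A short, hedged Python sketch (hypothetical host and token) that reads a node's interface configuration through the API, the same data the GUI shows in its network panel:

    import requests

    HOST = "https://pve1.example.com:8006"  # hypothetical node
    TOKEN = "root@pam!netview=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

    s = requests.Session()
    s.headers["Authorization"] = f"PVEAPIToken={TOKEN}"
    s.verify = False

    # List bridges, bonds, VLANs, and physical NICs defined on node pve1,
    # as parsed from /etc/network/interfaces.
    r = s.get(f"{HOST}/api2/json/nodes/pve1/network", timeout=10)
    r.raise_for_status()
    for iface in r.json()["data"]:
        print(iface["iface"], iface["type"], iface.get("cidr", ""))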

Proxmox Datacenter Manager

Introduction and Purpose

Proxmox Datacenter Manager (PDM) is an open-source centralized management tool developed by Proxmox Server Solutions GmbH, released under the GNU Affero General Public License version 3 (AGPLv3). The project launched its initial alpha version on December 19, 2024, providing early access to core functionalities for testing and feedback, and progressed to beta with version 0.9 on September 11, 2025, incorporating significant enhancements such as support for Debian 13 Trixie and improved resource aggregation. As of November 2025, it remains in beta, with a stable release still to follow. The primary purpose of PDM is to offer a unified dashboard for monitoring and overseeing multiple independent Proxmox VE clusters and standalone nodes distributed across various data centers, overcoming the limitation of managing each instance through its individual web GUI. This addresses scalability challenges in large-scale deployments where traditional Proxmox VE clustering may not be feasible due to geographical or architectural constraints. By aggregating metrics, resource utilization, and status information from disparate environments, PDM enables administrators to gain a holistic view without requiring the consolidation of underlying infrastructures. PDM is designed for straightforward deployment as a standalone Debian-based virtual appliance via ISO installation or within a container environment, such as an LXC container on Proxmox VE. Integration with managed Proxmox VE instances relies on API tokens, which are either automatically generated with minimal privileges during setup or manually provided to ensure secure communication. The tool adopts a non-intrusive architecture, defaulting to read-only monitoring to minimize impact on production systems, while supporting optional delegated administrative actions through integrated role-based access control (RBAC) for granular permission management.

Key Features and Integration

Proxmox Datacenter Manager offers a centralized dashboard that aggregates resource utilization metrics from multiple Proxmox VE clusters and standalone nodes, providing real-time views of CPU, memory, disk, and network usage across nodes and virtual machines, with hierarchical navigation from the datacenter level down through remotes and nodes to individual resources. Alert aggregation consolidates notifications from disparate sources, such as hardware failures or high load events, into a single pane for streamlined triage and response. Cross-cluster operations streamline multi-site administration through features like virtual machine migrations between independent clusters, reducing operational overhead in distributed setups, with API extensions allowing scripted automation for custom workflows. Integration with Proxmox VE 8.0 and subsequent versions relies on secure token-based authentication, where API tokens from VE clusters establish trusted connections to the manager, eliminating the need for shared credentials. This setup supports federated logins, integrating with external identity providers for access control across federated environments, and webhook notifications push event data, such as VM migrations or storage alerts, from VE to external systems or the manager itself, enabling proactive monitoring in hybrid infrastructures. Extensibility is provided through a REST API and CLI tools for custom automation and integration with external systems, while customizable dashboards with metrics widgets and role-based access controls make the interface adaptable for enterprise-scale deployments.
