OpenNebula
OpenNebula is an open-source cloud and edge computing platform that provides unified management of heterogeneous infrastructure, including virtualized data centers, public clouds, and edge environments, orchestrating compute, storage, and networking resources. It is available in a free Community Edition and a supported Enterprise Edition for mission-critical deployments.[1][2]
The project originated in 2005 as an internal research initiative at the Distributed Systems Architecture Research Group of the Complutense University of Madrid. It became an open-source effort with the launch of the OpenNebula.org community in November 2007 and produced its first software release in 2008.[3][4] In 2010 the project's creators founded OpenNebula Systems (originally C12G Labs) in Madrid, Spain, to support commercial development and enterprise adoption; by 2025 the project had seen more than 15 years of continuous development.[4][5]
Key features include support for virtual machines via hypervisors such as KVM, container orchestration with tools such as Docker and Kubernetes, and serverless computing, all managed through a single control panel that facilitates private, hybrid, and multi-cloud deployments.[6][1] OpenNebula emphasizes vendor neutrality, multi-tenancy, federation across sites, and automatic provisioning, allowing users to integrate existing infrastructure without lock-in.[4] Its open cloud architecture combines the simplicity of public clouds with the security and control of private clouds, scaling to hundreds of thousands of cores.[4]
OpenNebula has been adopted by more than 5,000 organizations worldwide, including enterprises such as EveryMatrix and academic institutions such as UCLouvain, for building sovereign and efficient cloud solutions.[4] As the only major open-source cloud management platform developed in Europe, it plays a prominent role in initiatives such as Gaia-X, the European Open Science Cloud (EOSC), and the Important Project of Common European Interest on Cloud Infrastructure and Services (IPCEI-CIS), contributing to digital sovereignty and edge computing.[4][7] The project is actively maintained on GitHub, with contributions from 171 developers, and is backed by European research efforts such as ONEnextgen and SovereignEdge.Cognit.[6]
Introduction
Overview
OpenNebula is an open-source cloud management platform designed to orchestrate and manage heterogeneous resources across data centers, public clouds, and edge computing infrastructures.[8] It enables enterprises to deploy and operate private, hybrid, and edge clouds, primarily supporting Infrastructure as a Service (IaaS) models as well as multi-tenant Kubernetes environments for containerized workloads.[8] The platform emphasizes simplicity in deployment and operation, scalability to large infrastructures, and vendor independence through its open architecture, allowing users to avoid lock-in to specific providers.[1] It unifies the agility of public cloud services with the security and control of private clouds, facilitating integration of diverse environments.[8]
OpenNebula supports a range of virtualization technologies, including KVM, LXC containers, AWS Firecracker for lightweight virtual machines, and VMware vCenter for hybrid setups.[8] Key benefits include robust multi-tenancy for isolated user environments, automatic provisioning of resources, elasticity to scale workloads dynamically, and on-demand management of compute, storage, and networking assets.[8] Originally developed as a research project, OpenNebula has evolved into a mature, production-ready platform widely adopted in enterprise settings.[3]
Editions
OpenNebula is available in two primary editions, the Community Edition and the Enterprise Edition, each tailored to different needs in cloud management.[9] The Community Edition is a free, open-source distribution under the Apache License 2.0, offering the full core functionality for users who manage their own deployments without commercial support.[9] It includes binary and source packages, with updates released every six months and patch versions addressing critical bug fixes, making it well suited to non-profit organizations, educational institutions, and testing environments.[9] Community support is available through public forums, but users must handle management and upgrades themselves.[9]
In contrast, the Enterprise Edition is a commercial offering designed for production environments: its source code remains under the Apache License 2.0, but its binary packages require a subscription under commercial terms.[9] It delivers a hardened, tested version with additional bug fixes, minor enhancements, long-term support releases, and enterprise-specific integrations not available in the Community Edition.[10] Key differences include professional services such as SLA-based support, deployment assistance, training, consulting, technical account management, and priority access to maintenance updates and upgrades.[11] The Enterprise Edition also provides enhanced security features and warranty assurances for large-scale operations.[11]
Both editions can be downloaded from the official OpenNebula website, though the Enterprise Edition requires an active subscription, priced per cloud with host-based licensing, for full access to upgrades, tools, and support.[12] From version 5.12 onward, major upgrades are restricted to Enterprise subscribers or to qualified non-commercial users and significant community contributors, part of the platform's sustainability strategy.[12]
History
Origins
OpenNebula originated as a research project in 2005 at the Distributed Systems Architecture (DSA) Research Group of the Universidad Complutense de Madrid in Spain, led by Ignacio M. Llorente and Rubén S. Montero.[13][14] The initiative grew out of efforts to address challenges in virtual infrastructure management, with an initial emphasis on efficient, scalable services for deploying and managing virtual machines across large-scale distributed systems.[13] This work built on prior research in grid computing and virtualization, aiming to create decentralized tools that could handle dynamic resource allocation without relying on proprietary solutions.[15]
The project transitioned to an open-source model with its first public technology preview in March 2008, released under the Apache License 2.0, motivated by the emerging cloud computing paradigm's demand for flexible, vendor-agnostic platforms that avoided lock-in and supported heterogeneous environments.[13][15] The shift enabled broader collaboration among researchers and early adopters, fostering innovation in infrastructure-as-a-service (IaaS) technologies while aligning with European research initiatives such as the RESERVOIR project, which sought to integrate cloud and grid computing.[13] By prioritizing modularity and extensibility, the open-source approach positioned OpenNebula as a foundational tool for academic and experimental cloud deployments in the late 2000s.[16]
To sustain development and provide enterprise-grade support, the original developers founded C12G Labs in March 2010 in Madrid, Spain, offering commercial services such as consulting, training, and customized integrations for OpenNebula users.[5] The company was renamed OpenNebula Systems in September 2014 to better reflect its core technology and later expanded with a second headquarters in Burlington, Massachusetts, USA.[5][17] This corporate backing marked the evolution from pure research to a supported ecosystem, while the project continued to grow internationally through community contributions.[4]
Key Milestones
OpenNebula's first stable release (version 1.0) in July 2008 marked a pivotal shift from academic research to a collaborative open-source community project, enabling broader adoption of cloud management technologies.[18][19] The project reached its 10th anniversary in November 2017, marking a decade of continuous innovation, community contributions, and the establishment of thousands of cloud infrastructures worldwide.[20] By 2025, OpenNebula powered clouds for more than 5,000 organizations globally across sectors including research institutions, telecommunications providers, and financial services.[4]
Organizational growth accelerated through strategic affiliations: OpenNebula Systems became a day-1 member of Gaia-X, a participant in the European Open Science Cloud (EOSC) and the Important Project of Common European Interest on Cloud Infrastructure and Services (IPCEI-CIS), and a corporate member of the Linux Foundation, LF Edge, and the Cloud Native Computing Foundation (CNCF); the company also chairs the European Alliance for Industrial Data, Edge and Cloud.[4][7]
In recent years, OpenNebula has emphasized AI-ready hybrid clouds and edge computing strategies, prominently featured at events such as Data Center World 2025, where it demonstrated re-virtualization solutions for modern infrastructure.[21] Into the 2020s, it has prioritized sovereign cloud solutions to enhance digital autonomy, including federated AI factories and reference architectures for European data sovereignty.[4][7]
Development
Release History
OpenNebula's initial development featured two technology previews in 2008. The first Technology Preview (TP1) was released on March 26, 2008, providing basic host and virtual machine management based on the Xen hypervisor.[22] It was followed by TP2 on June 17, 2008, which expanded these virtual infrastructure management features.[22] The project's first stable release, version 1.0, arrived on July 24, 2008, introducing core cloud computing functionality for data center virtualization and dynamic resource allocation.[23]
Early major versions followed a rapid cycle, with upgrades roughly every one to two years and three to five minor updates per major release to incorporate community feedback and stability improvements. Version 2.0 launched in October 2010 as a significant upgrade, followed by 3.0 in October 2011.[22] This pattern continued through the 4.x and 5.x series, culminating in version 5.12 on July 21, 2020, the first Long Term Support (LTS) release, which received extended maintenance until its end-of-life on February 10, 2023.[24][25]
In recent years, the release cadence has shifted toward quarterly updates, with major versions emerging every three to five years to align with enterprise needs.[26] The latest major release, 7.0 "Phoenix," was issued on July 3, 2025, bringing advances in AI workload support, edge computing, and hybrid cloud orchestration.[27] A subsequent patch, 7.0.1, followed on October 27, 2025, enhancing enterprise-grade features and AI cloud integrations.[28]
Changes to the release process began with version 5.12, when upgrade scripts for the Enterprise Edition became partially closed-source, accessible only via paid subscription to fund professional support and security work; the Community Edition remains fully open source.[29] The post-7.0 roadmap prioritizes deeper containerization support, hybrid cluster management, and 5G integration for efficient telco edge deployments.[30][31]
| Version | Release Date | Key Notes | Type |
|---|---|---|---|
| TP1 | March 26, 2008 | Xen-based host/VM management | Development |
| TP2 | June 17, 2008 | Expanded virtual infrastructure | Development |
| 1.0 | July 24, 2008 | First stable, basic cloud features | Stable |
| 5.12 | July 21, 2020 | First LTS, supported to 2023 | LTS |
| 7.0 "Phoenix" | July 3, 2025 | AI, edge, hybrid enhancements | Major |
| 7.0.1 | October 27, 2025 | Enterprise and AI cloud updates | Patch |
Community and Ecosystem
OpenNebula's community follows a collaborative model in which developers, translators, and users contribute through the project's GitHub repository, submitting code enhancements, documentation improvements, and localization work under the Apache License 2.0.[32] Users also participate by reporting bugs, requesting features, and providing feedback via GitHub issues, while translators localize portal content to broaden accessibility.[32] The contributor base spans academia, enterprises, and non-profits, sustaining over a decade of collaborative innovation since the project's inception.[33]
Notable contributors include organizations such as Telefónica I+D, universities such as the University of Chicago and Clemson University, and research centers including the SARA Supercomputing Center and INRIA, alongside individual champions from entities such as AWS, Datadog, and academic institutions such as Ghent University.[33][34] These participants, recognized through the Champion Program, strengthen community engagement by developing tools, offering technical support, and promoting adoption globally.[34]
The ecosystem includes partnerships that connect OpenNebula with broader open-source initiatives, including corporate membership in the Linux Foundation, the Cloud Native Computing Foundation (CNCF), and Linux Foundation Europe, as well as day-1 membership in Gaia-X and participation in the European Open Science Cloud (EOSC).[4][35] OpenNebula Systems has joined the Linux Foundation Edge initiative and Project Sylva to advance edge and telco cloud technologies, supporting standards for federated, secure infrastructure in Europe.[36][37] The Connect Partner Program further enables technology and business collaborations, with third-party providers offering compatible hardware, software, and services that extend OpenNebula's capabilities.[38]
Support channels include the official community forum for discussion and troubleshooting, comprehensive documentation at docs.opennebula.io, the annual OpenNebulaConf conference held since 2013, and regular webinars and TechDays on topics such as storage and integrations.[39][40] Non-profits and community users access upgrade tools through the free Community Edition, supplemented by volunteer-driven advice and the Champion Program.[40]
Adoption spans diverse environments, from research labs such as RISE in Sweden for AI and edge infrastructure to telco clouds at Ooma for virtualization transitions, with deployments running hundreds of virtual machines at enterprises such as CEWE and weekly cluster provisioning in government settings such as the Flemish Department of Environment and Spatial Planning.[41] This widespread use has spurred community extensions, including the OpenNebula Kubernetes Engine (OneKE), a CNCF-certified distribution for deploying Kubernetes clusters on OpenNebula.[42] Community contributions continue to shape release cycles, enabling iterative improvements based on user input.[32]
Features
Core Capabilities
OpenNebula provides robust resource orchestration, managing virtual machines, containers, and physical resources through automatic provisioning and elasticity mechanisms. This includes centralized governance for deploying and scaling workloads across clusters, with features such as live migration and affinity rules to optimize performance and resource utilization. These capabilities support heterogeneous environments at enterprise scale, with over 2,500 nodes in a single instance.[43]
Multi-tenancy is achieved through secure isolation of resources and data between users and groups, with role-based access control to manage permissions. Administrators can define fine-grained access control lists (ACLs), quotas, and virtual data centers (VDCs) to enforce isolation and compliance for multiple teams or tenants. This allows delegated administration, where specific users or groups receive controlled access to subsets of the infrastructure without compromising overall security, as sketched below.[44]
The platform supports hybrid cloud environments by integrating on-premises infrastructure with public clouds, promoting workload portability and vendor independence. Users can provision and manage resources across federated zones, bursting capacity to public providers while maintaining unified control over hybrid setups. This simplifies migrating and scaling applications between local and remote resources; as of OpenNebula 7.0.1 (October 2025), hybrid cloud provisioning has been further simplified.[43][45]
For edge computing, OpenNebula deploys lightweight clusters in distributed environments optimized for low-latency operations such as 5G edge nodes. A unified framework for orchestrating multi-cluster setups lets resources be managed closer to end users, reducing latency in telco and IoT scenarios and supporting scalable, vendor-agnostic edge infrastructure without complex custom configurations.[46]
OpenNebula is designed with AI readiness in mind, offering built-in GPU orchestration for scalable AI workloads in hybrid configurations. GPU passthrough and virtual GPU partitioning give direct or shared access to accelerators for tasks such as model training and inference, while optimizing resource allocation across on-premises and cloud resources; OpenNebula 7.0.1 further enhances GPU acceleration for AI factories and sovereign AI deployments, enabling cost-effective scaling.[47][45]
Monitoring and automation tools are integrated natively, providing resource scaling, health checks, and policy-driven operations. Built-in telemetry tracks system metrics, enabling proactive adjustments through event-driven hooks and distributed resource scheduling. These features automate capacity management and ensure reliability, with support for overcommitment and predictive optimization.[43]
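A minimal command-line sketch of this tenant model, assuming a cluster with ID 100 in the default zone (ID 0); the group, user, and VDC names are illustrative:

```shell
# Create an isolated tenant group and a user inside it
onegroup create web-team
oneuser create alice 'secretpass'
oneuser chgrp alice web-team

# Wrap a slice of the infrastructure in a Virtual Data Center (VDC)
onevdc create web-vdc
onevdc addgroup web-vdc web-team   # grant the group access to the VDC
onevdc addcluster web-vdc 0 100    # expose cluster 100 in zone 0
```

Quota templates (see the Storage section for a datastore example) can then cap what each group is allowed to consume.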
Integrations and Extensibility
OpenNebula provides open APIs, with XML-RPC as the primary interface, enabling programmatic control over core resources such as virtual machines, virtual networks, images, users, and hosts. This API lets developers integrate OpenNebula with external applications for automation and orchestration. The OpenNebula Cloud API (OCA) adds simplified wrappers around the XML-RPC methods in several languages, including Ruby, Java, Python, and Go, easing integration and supporting JSON data exchange in client implementations. For web-based management, OpenNebula includes Sunstone, a graphical user interface that gives administrators and users an intuitive dashboard for monitoring and configuring cloud resources without direct API calls.[48][49][50]
For hypervisor compatibility, OpenNebula fully supports KVM as its primary virtualization technology for managing virtual machines on Linux hosts. It also integrates with LXC for lightweight system containers, VMware vCenter for leveraging existing enterprise VMware environments, and AWS Firecracker for microVM-based deployments in serverless, edge, and enterprise operations. These hypervisors let OpenNebula operate across diverse infrastructure, from bare-metal servers to virtualized clusters.[43][51][52]
For cloud federation and hybrid clouds, OpenNebula includes drivers for public cloud providers, supporting hybrid bursting in which private resources extend to public infrastructure during peak load. Dedicated drivers connect to Amazon Web Services (AWS) for EC2 instance provisioning and to Microsoft Azure for virtual machines and storage, letting users define policies for automatic resource scaling across environments. The federation model uses a master-slave zone architecture to synchronize data centers, keeping users, groups, and virtual data centers (VDCs) consistent across boundaries.[53][54][55]
Container orchestration is enhanced through native Kubernetes integration via the OpenNebula Kubernetes Engine (OneKE), a CNCF-certified distribution based on RKE2 that deploys and manages Kubernetes clusters directly within OpenNebula environments. OneKE supports hybrid deployments, allowing clusters to span on-premises and edge resources, with built-in monitoring and scaling. OpenNebula also accommodates Docker containers for application packaging and Helm charts for deploying complex Kubernetes applications, so containerized workloads can be orchestrated alongside traditional VMs.[56][57][58]
Extensibility comes from a modular plugin architecture that lets administrators develop and integrate custom drivers for storage, networking, monitoring, and authorization. This driver-based design supports third-party infrastructure components without modifying the core codebase. OpenNebula also features a marketplace system for distributing applications and blueprints, including public repositories such as the official OpenNebula Marketplace, with over 48 pre-configured appliances (e.g., for WordPress or Kubernetes setups), and private marketplaces for custom sharing via HTTP or S3 backends. These elements enable rapid deployment of reusable cloud services and foster community-driven extensions.[59][55][60]
Regarding standards compliance, OpenNebula aligns with the Open Cloud Computing Interface (OCCI) through dedicated ecosystem projects that implement OCCI 1.1 for interoperable resource management across IaaS providers, enabling standardized API calls for compute, storage, and network operations and enhancing portability in multi-cloud setups. Similarly, while not natively embedded, OpenNebula integrates with the Topology and Orchestration Specification for Cloud Applications (TOSCA) via model-driven tools in collaborative projects, allowing portable cloud applications to be described and deployed onto OpenNebula infrastructure. These alignments ensure compatibility with broader cloud ecosystems and reduce vendor lock-in. OpenNebula 7.0.1 adds simplified SAML 2.0 integration for federated authentication.[61][62][45]
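As a concrete sketch of the XML-RPC interface described above, the call below asks the front-end for its version over plain HTTP; the endpoint, port, and one.system.version method follow the documented API, while the host name and credentials are illustrative:

```shell
# Minimal XML-RPC call against oned (default port 2633)
curl -s http://frontend:2633/RPC2 -H 'Content-Type: text/xml' --data \
'<?xml version="1.0"?>
<methodCall>
  <methodName>one.system.version</methodName>
  <params>
    <param><value><string>oneadmin:password</string></value></param>
  </params>
</methodCall>'
```

The OCA bindings wrap these same methods, so the equivalent Ruby, Python, or Go call is a short client invocation against the same endpoint.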
Architecture
Core Components
Hosts in OpenNebula represent the physical machines that provide the underlying compute resources for virtualization, running hypervisors such as KVM or VMware to host virtual machines (VMs).[63] Hosts are registered in the system by hostname together with drivers for information management and VM execution, allowing OpenNebula to monitor their CPU, memory, and storage capacity while the scheduler deploys VMs accordingly.[63]
Clusters are logical groupings of hosts that pool resources by sharing common datastores and virtual networks across the group. This organization supports load balancing, high availability, and simplified management of distributed resources without altering individual host configurations.
Virtual Machine Templates define reusable configurations for instantiating VMs, specifying attributes such as CPU count, memory allocation, disk attachments, and network interfaces.[64] Templates let administrators standardize VM deployments, with many instances created from a single definition while permitting user-specific customizations, such as varying memory sizes within predefined limits, as illustrated in the sketch below.[64]
Images encapsulate the storage elements for VMs, functioning as disk files or block devices that hold operating systems, data, or configuration files. They fall into three categories: system images for bootable OS disks, regular images for persistent data storage, and file types for contextual elements such as scripts.[65] Images are managed within datastores or marketplaces, with persistency options that either retain changes for exclusive VM use or discard them to allow multi-VM sharing, and they progress through states such as ready, used, or locked during operations.[65]
Virtual Machines (VMs) are the instantiable compute entities, created from templates and managed through a lifecycle with states including pending (awaiting deployment), running (actively executing), stopped (powered off with state preserved), suspended (paused with files on the host), and done (completed and archived).[66] This state machine governs operations such as migration, snapshotting, and resizing, ensuring controlled resource allocation and recovery from failures.[66]
Virtual Networks provide the logical connectivity framework for VMs, assigning IP leases and linking virtual network interfaces to physical host devices via modes such as bridging or VXLAN for isolation and traffic management.[67] They integrate with security groups and IP address management to enable secure, scalable networking across clusters, supporting both public and private connectivity without exposing underlying hardware details.[67]
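A minimal template sketch tying these components together; the attribute names follow OpenNebula's template syntax, while the image and network names are illustrative:

```shell
# Define a reusable VM template and instantiate two VMs from it
cat > web.tpl <<'EOF'
NAME   = "web-server"
CPU    = "1"
VCPU   = "2"
MEMORY = "2048"                       # MB
DISK   = [ IMAGE = "ubuntu22" ]       # references an Image by name
NIC    = [ NETWORK = "private-net" ]  # attaches a Virtual Network lease
EOF
onetemplate create web.tpl
onetemplate instantiate web-server --multiple 2
```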
Deployment Models
OpenNebula supports a range of deployment models tailored to different scales and environments, from small private clouds to large-scale enterprise and telco infrastructures. These models emphasize openness and flexibility, allowing deployment on any datacenter with support for both virtualized and containerized workloads across physical or virtual resources.[68]
The basic deployment model consists of a single front-end node managing worker hosts, suitable for small to medium private clouds with up to 2,500 servers and 10,000 virtual machines. This topology uses local or shared storage and basic networking, providing a straightforward setup for testing or initial production environments without high-availability requirements.[68][53]
For larger or mission-critical setups, the advanced reference architecture employs a Cloud Management Cluster of multiple front-end nodes for high availability, alongside a Cloud Infrastructure layer handling hosts, storage, and networks. This model reduces downtime through redundant core services and supports horizontal scaling, making it suitable for environments requiring robust fault tolerance.[68][69]
Hybrid and edge models enable federated clusters that span on-premises datacenters, public clouds, and edge sites, providing unified management and workload portability. These deployments support disaster recovery via mechanisms such as Ceph storage mirroring and allow seamless integration of virtual machines and Kubernetes-based containers on bare-metal or virtualized resources, avoiding vendor lock-in.[70][53]
In telco cloud scenarios, OpenNebula integrates with 5G networks for network function virtualization (NFV), supporting both virtual network functions (VNFs) and containerized network functions (CNFs) through enhanced platform awareness (EPA), GPU acceleration, and technologies such as DPDK and SR-IOV. Key architectures include highly distributed NFV deployments across geo-distributed points of presence (PoPs) and 5G edge setups for open radio access networks (O-RAN) and multi-access edge computing (MEC), enabling low-latency applications such as user plane functions (UPF) and content delivery networks (CDN).[71][72]
Scalability ranges from single-node testing environments to multi-site enterprises via federated zones, where a master zone coordinates slave zones for shared user management and resource access (see the sketch below). Blueprint guidance, such as the Open Cloud Reference Architecture, gives architects recommended configurations for these models, including automation tools such as OneDeploy for streamlined installation.[53][73][74]
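A hedged sketch of registering an additional zone from the master front-end; the zone name and endpoint are illustrative, and the federation bootstrap in oned.conf (FEDERATION attributes, database replication) is assumed to already be in place:

```shell
# Register a slave zone on the master front-end
cat > zone.tpl <<'EOF'
NAME     = "edge-zone"
ENDPOINT = "http://edge-frontend:2633/RPC2"
EOF
onezone create zone.tpl
onezone list   # shows all federated zones and their endpoints
```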
Front-end Management
The front-end node serves as the central server in OpenNebula, orchestrating cloud operations by running essential services such as the core daemon (oned), schedulers, and the drivers responsible for resource monitoring and decision-making.[75] The oned daemon acts as the primary engine, managing interactions with cluster nodes, virtual networks, storage, users, and groups through an XML-RPC API exposed on port 2633.[76] Drivers, including those for virtual machine management (VMM), authentication (Auth), information monitoring (IM), marketplaces (Market), datastores (Datastore), virtual network management (VNM), and transfer management (TM), are executed from the /var/lib/one/remotes/ directory to handle specific operational tasks.[75]
For high availability, OpenNebula supports multi-node front-end configurations using the Raft distributed consensus protocol built into the oned daemon; a cluster of three nodes tolerates one server failure, and five nodes tolerate two.[69] This setup requires an odd number of identical servers (typically three or five) with shared filesystems, a floating IP for the leader node, and database synchronization via tools such as onedb backup and onedb restore for MySQL or MariaDB backends.[69] The system remains operational by electing a new leader when the current one fails, with replicated logs maintaining consistency across the cluster, as sketched below.[69]
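A hedged operational sketch of these pieces; the interface, address, and backup path are illustrative, and the hook lines mirror the documented oned.conf Raft hooks:

```shell
# In /etc/one/oned.conf on every front-end, let the elected leader claim the
# floating IP (interface and address are examples):
#   RAFT_LEADER_HOOK   = [ COMMAND = "raft/vip.sh", ARGUMENTS = "leader eth0 10.0.0.10/24" ]
#   RAFT_FOLLOWER_HOOK = [ COMMAND = "raft/vip.sh", ARGUMENTS = "follower eth0 10.0.0.10/24" ]

# Dump the database on the leader and seed a rebuilt follower from the dump
onedb backup /var/lib/one/one-backup.sql
onedb restore /var/lib/one/one-backup.sql
```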
User interactions with the front-end go through several management interfaces: the Sunstone web UI (powered by FireEdge on port 2616), the command-line interface (the one* command suite), and programmatic APIs such as XML-RPC and REST.[77][75] Core services extend to the scheduler, which places virtual machines using policies such as the Rank Scheduler to optimize allocation on available hosts, and hooks that trigger custom scripts in response to resource state changes or API calls.[78][79]
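Placement can be steered per VM through template attributes; a minimal sketch, assuming an existing template with ID 7 (the expressions follow the documented scheduling attribute syntax):

```shell
# Restrict the VM to KVM hosts with spare CPU and prefer the least loaded one
cat > placement.tpl <<'EOF'
SCHED_REQUIREMENTS = "HYPERVISOR = \"kvm\" & FREE_CPU > 50"
SCHED_RANK         = "FREE_CPU"
EOF
onetemplate update 7 placement.tpl --append   # template ID 7 is illustrative
```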
Deployment on the front-end requires a Linux-based operating system, with dependencies including Ruby for the management tools and drivers and XML libraries for data handling, alongside optional components such as MySQL or MariaDB for the database backend.[75] Scalability is achieved through clustering in high-availability mode, letting the front-end manage larger infrastructures without single points of failure.[69]
Compute Hosts
In OpenNebula, compute hosts serve as the physical worker nodes that provide the underlying resources for virtual machine execution, managed through supported hypervisors to enable virtualization on the cloud infrastructure.[63] These hosts are typically standard servers with compatible hardware; the primary hypervisor is KVM for full virtualization, alongside LXC for lightweight container-based workloads and VMware integration for hybrid environments.[63] To set up a host, administrators add it to the system from the front-end using the onehost create command, specifying the hostname along with the information manager (IM) and virtualization manager (VM) drivers, such as --im kvm --vm kvm for KVM-based setups (see the example below).[63]
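For example, registering a KVM node (the node name is illustrative):

```shell
# Register a KVM worker node and check that monitoring picks it up
onehost create kvm-node01 --im kvm --vm kvm
onehost list              # the host should move from INIT to MONITORED
onehost show kvm-node01   # per-host capacity and state details
```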
Monitoring of compute hosts is handled by dedicated agents that run periodically on each node to gather resource utilization data, including CPU (e.g., total cores, speed in MHz, free/used percentages), memory (total, used, and free in KB), and storage metrics (e.g., read/write bytes and I/O operations).[80] These agents execute probes from directories like im/kvm-probes.d/host/monitor and transmit the collected data as structured messages (e.g., MONITOR_HOST) to the OpenNebula front-end via SSH, where it is stored in a time-series database for use in scheduling decisions and resource allocation.[80] The front-end then aggregates this information, allowing administrators to view host states and metrics through commands like onehost show or the Sunstone interface.[63]
Hosts can be organized into clusters to facilitate efficient resource management, with grouping achieved by adding hosts to a cluster using onecluster addhost or via the web interface, as in the example below.[63] Clustering supports load balancing by letting the scheduler distribute virtual machines across hosts based on availability and policies, and it improves fault tolerance through organized failover and high-availability configurations that keep operations running during node failures. OpenNebula accommodates heterogeneous hardware within clusters, so nodes with differing CPU architectures, memory capacities, or hypervisors can coexist without uniform setups.
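A short sketch of such a grouping; the cluster and resource names are illustrative:

```shell
# Group hosts so the scheduler can balance VMs across them
onecluster create production
onecluster addhost production kvm-node01
onecluster adddatastore production default   # share a datastore with the cluster
onecluster addvnet production private-net    # and a virtual network
```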
The lifecycle of compute hosts is managed entirely through the OpenNebula front-end, supporting dynamic addition and removal to scale the infrastructure as needed.[63] New hosts are enabled with onehost enable after creation, entering an active state for VM deployment, while existing ones can be temporarily disabled (onehost disable) for maintenance or fully removed (onehost delete) without disrupting the overall system, provided no running VMs are present.[63] States such as offline can also be set for long-term decommissioning, ensuring seamless integration of diverse hardware during expansions.[63]
Security for compute hosts relies on secure communication channels and hypervisor-enforced isolation to protect the cloud environment.[81] OpenNebula uses passwordless SSH for front-end to host interactions, configured by generating and distributing the oneadmin user's public key to target hosts, ensuring encrypted and authenticated connections without interactive prompts.[63] Additionally, hypervisors like KVM provide inherent VM isolation through hardware-assisted virtualization features, such as EPT for memory protection, preventing inter-VM interference on the same host.[80]
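A hedged sketch of the passwordless SSH setup described above, run as the oneadmin user on the front-end (key type and node name are illustrative):

```shell
# Generate a key pair if oneadmin does not have one yet
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519

# Distribute the public key to each compute host, then verify access
ssh-copy-id oneadmin@kvm-node01
ssh oneadmin@kvm-node01 true   # must succeed without a password prompt
```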
Storage
OpenNebula manages storage through datastores, logical storage units configured to handle different types of data for virtual machines (VMs). There are three primary datastore types: the System datastore, which stores the disks of running VMs cloned from images; the Image datastore, which holds the repository of base operating system images, persistent data volumes, and CD-ROM images; and the File datastore, which manages plain files such as kernels, ramdisks, and contextualization files for VMs.[82]
These datastores can be implemented on various backends, including NFS for shared file-based storage accessible across hosts, Ceph for distributed object storage with scalability and high availability, and LVM for block-based storage on SAN environments with thin provisioning. For instance, NFS setups typically mount shared volumes on the front-end and hosts, Ceph uses RADOS Block Device (RBD) pools for VM disks, and LVM creates logical volumes from shared LUNs to optimize I/O performance.[82][83][84][85]
Image management allows administrators to upload images via the CLI using commands like oneimage create --path <file> --datastore <ID>, supporting formats such as QCOW2 for thin provisioning and RAW for direct block access. Cloning, performed with oneimage clone <name> <new_name> --datastore <ID>, duplicates images to new datastores or creates persistent copies, while snapshotting supports backups and rollbacks through commands such as oneimage snapshot-flatten to merge changes and snapshot-revert for restoration; images with active snapshots cannot be cloned until flattened.[86] A short example of this workflow follows.
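A minimal sketch; the image names, path, and datastore IDs are illustrative:

```shell
# Upload a base image into the default Image datastore
oneimage create --name ubuntu22 --path /var/tmp/ubuntu22.qcow2 --datastore default

# Duplicate it into another Image datastore (ID 101) and inspect the result
oneimage clone ubuntu22 ubuntu22-copy --datastore 101
oneimage show ubuntu22-copy
```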
Integration with shared storage is handled via specialized drivers: the Ceph driver supports distributed setups with replication for disaster recovery (DR) mirroring across pools, ensuring data redundancy; NFS drivers enable seamless access for live migration and high availability; and LVM drivers provide block-level operations with thin snapshots for efficient space usage in DR scenarios. These drivers ensure compatibility with VM attachment on hosts, allowing disks to be provisioned dynamically.[84][83][85]
Allocation in datastores supports dynamic provisioning through quotas applied per user or group, such as SIZE in MB to limit total storage (e.g., 20480 for 20 GB) and IMAGES to cap the number of images, preventing overuse of a datastore. Transfers between datastores are achieved by cloning images to a target datastore or by using a Marketplace app as an intermediary when moving files across different backends, with usage tracked via attributes such as DATASTORE_USED.[87][88] An example quota follows.
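A hedged sketch of such a quota, applied to a group from a file; the group name is illustrative, and the DATASTORE block follows the quota template syntax:

```shell
# Cap a group at 20 GB and 10 images on datastore ID 1
cat > ds-quota.tpl <<'EOF'
DATASTORE = [
  ID     = "1",
  SIZE   = "20480",
  IMAGES = "10"
]
EOF
onegroup quota web-team ds-quota.tpl
```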
Best practices recommend separating System datastores from Image and File datastores to enhance performance, as System datastores handle high-I/O runtime disks while Image datastores focus on static repositories, reducing contention during VM deployments.[89]