
OpenNebula

OpenNebula is an open-source cloud computing platform that provides unified management for heterogeneous infrastructures, including virtualized data centers, public clouds, and edge environments, enabling the orchestration of compute, storage, and networking resources. It is available in a Community Edition and a supported Enterprise Edition for mission-critical deployments. The project originated in 2005 as an internal research initiative at the Complutense University of Madrid's Distributed Systems Architecture Research Group and evolved into an open-source effort with the launch of the OpenNebula.org project in 2007, culminating in its first software release in 2008. In 2010, OpenNebula Systems (formerly C12G Labs) was founded by the project's creators in Madrid, Spain, to support its commercial development and enterprise adoption, marking over 15 years of continuous innovation by 2025. Key features of OpenNebula include support for virtual machines via hypervisors like KVM, container orchestration with tools such as Kubernetes and Docker, and serverless computing, all managed through a single control panel that facilitates private, hybrid, and multi-cloud deployments. It emphasizes vendor neutrality, multi-tenancy, federation across sites, and automatic provisioning, allowing users to integrate with existing infrastructures without lock-in. The platform's open cloud architecture unifies public cloud simplicity with private cloud security and control, supporting scalability to hundreds of thousands of cores. OpenNebula has been adopted by over 5,000 organizations worldwide, including enterprises like EveryMatrix and academic institutions, for building sovereign and efficient cloud solutions. As the only major open-source cloud management platform developed in Europe, it plays a pivotal role in initiatives like Gaia-X, the European Open Science Cloud (EOSC), and the Important Project of Common European Interest on Cloud Infrastructure and Services (IPCEI-CIS), where it contributes to digital sovereignty and technological advancement.
The project is actively maintained on GitHub with contributions from 171 developers and is backed by European research efforts such as ONEnextgen and SovereignEdge.Cognit.

Introduction

Overview

OpenNebula is an open-source cloud management platform designed to orchestrate and manage heterogeneous resources across data centers, public clouds, and edge infrastructures. It enables enterprises to deploy and operate private, hybrid, and edge clouds, primarily supporting infrastructure-as-a-service (IaaS) models as well as multi-tenant environments for containerized workloads. The platform emphasizes simplicity in deployment and operation, scalability to handle large-scale infrastructures, and vendor independence through its open architecture, allowing users to avoid lock-in to specific providers. It unifies the agility of public cloud services with the security and control of private clouds, facilitating seamless management of diverse environments. OpenNebula supports a range of virtualization technologies, including KVM, LXC containers, Firecracker for lightweight micro virtual machines, and VMware vCenter for hybrid setups. Key benefits include robust multi-tenancy for isolated user environments, automatic provisioning of resources, elasticity to scale workloads dynamically, and unified management of compute, storage, and networking assets. Originally developed as a research project, OpenNebula has evolved into a mature, production-ready platform widely adopted in enterprise settings.

Editions

OpenNebula is available in two primary editions: the Community Edition and the Enterprise Edition, each tailored to different user needs in cloud management. The Community Edition provides a free, open-source distribution under the Apache License 2.0, offering full core functionality for users managing their own deployments without commercial support. It includes source and binary packages, with updates released every six months and patch versions addressing critical bug fixes, making it ideal for non-profit organizations, research, or testing environments. Community support is available through public forums, but users must handle self-management and upgrades independently. In contrast, the Enterprise Edition is a commercial offering designed for production environments, incorporating the open-source core under the Apache License 2.0 while requiring a subscription for enterprise packages distributed under commercial terms. It delivers a hardened, tested version with additional bug fixes, minor enhancements, long-term support releases, and enterprise-specific integrations not available in the Community Edition. Key differences include services such as SLA-based support, deployment assistance, training, consulting, technical account management, and priority access to maintenance updates and upgrades. The Enterprise Edition also provides enhanced security features and warranty assurances, ensuring reliability for large-scale operations. Both editions can be downloaded from the official OpenNebula website, though the Enterprise Edition requires an active subscription—priced on a per-cloud basis with host-based licensing—for full access to upgrades, tools, and support. From version 5.12 onward, major upgrades are restricted to subscribers or qualified non-commercial users and significant community contributors, reflecting the platform's open-core strategy.

History

Origins

OpenNebula originated as a research project in 2005 at the Distributed Systems Architecture (DSA) Research Group of the Universidad Complutense de Madrid in Spain, led by Ignacio M. Llorente and Rubén S. Montero. The initiative stemmed from efforts to address challenges in virtual infrastructure management, with an initial emphasis on developing efficient and scalable services for deploying and managing virtual machines across large-scale distributed systems. This work built on prior research in grid computing and virtualization, aiming to create decentralized tools that could handle dynamic resource allocation without relying on proprietary solutions. The project's transition to an open-source model occurred with its first public technology preview release in March 2008, under the Apache License 2.0, motivated by the burgeoning cloud computing paradigm's demand for flexible, vendor-agnostic platforms that avoided lock-in and supported heterogeneous environments. This shift enabled broader collaboration among researchers and early adopters, fostering innovation in infrastructure-as-a-service (IaaS) technologies while aligning with European research initiatives like the RESERVOIR project, which sought to integrate virtualization and grid technologies. By prioritizing interoperability and extensibility, the open-source approach positioned OpenNebula as a foundational tool for academic and experimental deployments in the late 2000s. To sustain development and provide enterprise-grade support, the original developers founded C12G Labs in March 2010 in Madrid, Spain, focusing on commercial services such as consulting, training, and customized integrations for OpenNebula users. The company was renamed OpenNebula Systems in September 2014 to better reflect its core technology and later expanded operations to include a headquarters in the United States, enhancing its global reach.
This corporate backing marked the evolution from pure research to a supported ecosystem, while the project continued to grow internationally through community contributions.

Key Milestones

OpenNebula's first stable release (version 1.0) in July 2008 marked a pivotal shift from its origins in academic research to a collaborative open-source project, enabling broader adoption of cloud virtualization technologies. The project reached its 10th anniversary in November 2017, underscoring a decade of continuous innovation, community contributions, and the establishment of thousands of cloud infrastructures worldwide. By 2025, OpenNebula had achieved significant adoption, powering clouds for more than 5,000 organizations globally across diverse sectors, including academic institutions, service providers, and enterprises. Organizational growth accelerated through strategic affiliations, with OpenNebula Systems becoming a day-1 member of Gaia-X, a participant in the European Open Science Cloud (EOSC) and the Important Project of Common European Interest on Cloud Infrastructure and Services (IPCEI-CIS), and a corporate member of the Linux Foundation, LF Edge, and the Cloud Native Computing Foundation (CNCF); the company also serves as chair of the European Alliance for Industrial Data, Edge and Cloud. In recent years, OpenNebula has emphasized advancements in AI-ready hybrid clouds and sovereign cloud strategies, prominently featured at events such as Data Center World 2025, where it demonstrated re-virtualization solutions for modern infrastructure. Building on two decades of research and development, OpenNebula has prioritized sovereign cloud solutions to enhance digital autonomy, including federated AI factories and reference architectures for European clouds.

Development

Release History

OpenNebula's initial development featured two technology previews in 2008. The first Technology Preview (TP1) was released on March 26, 2008, providing basic host and virtual machine management capabilities based on the Xen hypervisor. This was followed by TP2 on June 17, 2008, which expanded on these features for virtual infrastructure management. The project's first stable release, version 1.0, arrived on July 24, 2008, introducing core functionalities for virtual machine life-cycle management and dynamic resource allocation. Early major versions followed a rapid cycle, with upgrades approximately every 1-2 years and 3-5 minor updates per major release to incorporate community feedback and stability improvements. For instance, version 2.0 launched in October 2010 as a significant upgrade, followed by 3.0 in October 2011. This pattern continued through the 4.x and 5.x series, culminating in version 5.12 on July 21, 2020—the first long-term support (LTS) release, which received extended maintenance until its end-of-life on February 10, 2023. In recent years, the release cadence has shifted toward quarterly updates, with major versions emerging every 3-5 years to align with enterprise needs. The latest major release, 7.0 "Phoenix," was issued on July 3, 2025, bringing advancements in AI workload support, edge computing, and hybrid cloud orchestration. A subsequent patch, 7.0.1, followed on October 27, 2025, enhancing enterprise-grade features and AI cloud integrations. Changes to the release process began with version 5.12, where upgrade scripts for the Enterprise Edition became partially closed-source, accessible only via paid subscriptions to ensure professional support and security; the Community Edition, however, maintains fully open-source availability. The post-7.0 roadmap prioritizes deeper AI support, hybrid cluster management, and edge integration to enable efficient distributed deployments.
Version | Release Date | Key Notes | Type
TP1 | March 26, 2008 | Xen-based host/VM management | Development
TP2 | June 17, 2008 | Expanded virtual infrastructure management | Development
1.0 | July 24, 2008 | First stable release, basic features | Stable
5.12 | July 21, 2020 | First LTS, supported until February 2023 | LTS
7.0 "Phoenix" | July 3, 2025 | AI, edge, and hybrid cloud enhancements | Major
7.0.1 | October 27, 2025 | Enterprise and AI cloud updates | Patch

Community and Ecosystem

OpenNebula's community is driven by a collaborative model involving developers, translators, and users who contribute through the project's GitHub repository, where code enhancements, documentation improvements, and localization efforts are submitted under the Apache License 2.0. Users also participate by reporting bugs, requesting features, and providing feedback via GitHub issues, while translators support multilingual portal content to broaden accessibility. The contributor base encompasses academics, enterprises, and non-profits, fostering over a decade of collaborative innovation since the project's inception. Notable contributors include organizations such as Telefónica I+D, universities, and research centers including SARA Supercomputing Center and INRIA, alongside individual champions from industry and academic institutions. These participants, recognized through the Champion Program, enhance the ecosystem by developing tools, offering guidance, and promoting adoption globally. The ecosystem features partnerships that integrate OpenNebula with broader open-source initiatives, including corporate membership in the Linux Foundation, the Cloud Native Computing Foundation (CNCF), and LF Edge, as well as day-1 membership in Gaia-X and participation in the European Open Science Cloud (EOSC). OpenNebula Systems has joined the LF Edge initiative and Project Sylva to advance edge and telco cloud technologies, supporting standards for federated, secure infrastructures in Europe. The Connect Partner Program further enables technology and business collaborations, with third-party providers offering compatible hardware, software, and services to extend OpenNebula's capabilities. Support channels include the official community forum for discussions and troubleshooting, comprehensive documentation at docs.opennebula.io, and events such as the annual OpenNebulaConf conference, held since 2013, along with regular webinars and TechDays on topics like storage and integrations.
Non-profits and community users access upgrade tools through the free Community Edition, supplemented by volunteer-driven advice and the Champion Program for enhanced guidance. Adoption spans diverse environments, from research labs to telco clouds such as Ooma's, with deployments supporting hundreds of virtual machines in enterprises and weekly clusters in government settings like the Flemish Department of Environment and Spatial Planning. This widespread use has spurred community extensions, including the OpenNebula Kubernetes Engine (OneKE), a CNCF-certified distribution for seamless cluster deployment on OpenNebula. Community contributions continue to influence release cycles, enabling iterative improvements based on user input.

Features

Core Capabilities

OpenNebula provides robust resource orchestration capabilities, enabling the management of virtual machines, containers, and physical resources through automatic provisioning and elasticity mechanisms. This includes centralized scheduling for deploying and scaling workloads across clusters, with features like placement policies and affinity rules to optimize performance and resource utilization. These functionalities ensure efficient handling of heterogeneous environments, supporting over 2,500 nodes in a single instance for enterprise-scale operations. Multi-tenancy in OpenNebula is achieved through secure isolation of resources and data between users and groups, incorporating access control mechanisms to manage permissions effectively. Administrators can define fine-grained access control lists (ACLs), quotas, and virtual data centers (VDCs) to enforce isolation and compliance for multiple teams or tenants. This setup allows for delegated administration, where specific users or groups are granted controlled access to subsets of infrastructure without compromising overall security. The platform supports hybrid cloud environments by facilitating seamless integration between on-premises infrastructure and public clouds, promoting workload portability and vendor independence. Users can provision and manage resources across federated zones, enabling burst capacity to public providers while maintaining unified control over hybrid setups. This approach simplifies migration and scaling of applications between local and remote resources. As of OpenNebula 7.0.1 (October 2025), hybrid cloud provisioning has been further simplified. For edge computing, OpenNebula enables the deployment of lightweight clusters in distributed environments, optimized for low-latency operations on edge nodes. It provides a unified control plane for orchestrating multi-cluster setups, allowing efficient management of resources closer to end-users to reduce latency and enhance performance in telco and IoT scenarios.
This capability supports scalable, vendor-agnostic infrastructure without requiring complex custom configurations. OpenNebula is designed with AI readiness in mind, offering built-in support for GPUs to handle scalable machine learning workloads. Features like GPU passthrough and GPU partitioning allow direct or shared access to accelerators for tasks such as model training and inference, ensuring performance while optimizing utilization across on-premises and cloud resources. OpenNebula 7.0.1 enhances GPU acceleration, enabling cost-effective scaling for AI factories and edge deployments. Monitoring and automation tools are integrated natively into OpenNebula, providing capabilities for resource scaling, health checks, and policy-driven operations. Built-in telemetry tracks system metrics, enabling proactive adjustments through event-driven hooks and distributed resource scheduling. These features automate capacity management and ensure reliability, with support for overcommitment and predictive optimization to maintain operational efficiency.
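To make the quota mechanism described above concrete, the sketch below shows how a quota template might look when applied to a group from the CLI. The datastore ID and all limits are assumptions for illustration, not recommended values.

```
# Hypothetical quota template (limits are illustrative).
# Apply with: onegroup quota <group_id> quota.txt
DATASTORE = [
  ID     = "1",       # image datastore to constrain
  SIZE   = "20480",   # max total image storage in MB (20 GB)
  IMAGES = "10"       # max number of registered images
]
VM = [
  VMS    = "10",      # max concurrent VMs
  CPU    = "8",       # max aggregate physical CPU share
  MEMORY = "16384"    # max aggregate memory in MB
]
```

The same template syntax can be applied per user, letting administrators enforce the tenant isolation described above without touching individual resources.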

Integrations and Extensibility

OpenNebula provides open APIs, including the primary XML-RPC interface, which enables programmatic control over core resources such as virtual machines, virtual networks, images, users, and hosts. This allows developers to integrate OpenNebula with external applications for automation and management tasks. Additionally, the OpenNebula Cloud API (OCA) offers simplified wrappers around the XML-RPC methods in multiple programming languages, including Ruby, Python, Java, and Go, facilitating easier integration while supporting XML data exchange in client implementations. For web-based management, OpenNebula includes Sunstone, a graphical user interface that provides an intuitive dashboard for administrators and users to monitor and configure cloud resources without direct API calls. In terms of hypervisor compatibility, OpenNebula offers full support for KVM as its primary virtualization technology, enabling efficient management of virtual machines on Linux-based hosts. It also provides integration with system containers through LXD for lightweight system-level virtualization, VMware vCenter for leveraging existing VMware environments, and Firecracker for microVM-based deployments in serverless and edge operations. These hypervisors allow OpenNebula to operate across diverse setups, from bare-metal servers to virtualized clusters. For cloud federation and hybrid cloud capabilities, OpenNebula includes drivers that enable seamless integration with public cloud providers, supporting hybrid bursting where private resources extend to public infrastructure during peak loads. Specific drivers facilitate connections to Amazon Web Services (AWS) for EC2 instance provisioning and Microsoft Azure for virtual machines and storage, allowing users to define policies for automatic resource scaling across environments. This federation model uses a master-slave zone architecture to synchronize data centers, ensuring consistent management of users, groups, and virtual data centers (VDCs) across zone boundaries.
OpenNebula enhances container orchestration through native integration with Kubernetes via the OpenNebula Kubernetes Engine (OneKE), a CNCF-certified distribution based on RKE2 that deploys and manages clusters directly within OpenNebula environments. OneKE supports hybrid deployments, allowing clusters to span on-premises and edge resources while providing built-in monitoring and scaling. Furthermore, OpenNebula accommodates Docker containers for application packaging and Helm charts for simplified deployment of complex applications, enabling users to orchestrate containerized workloads alongside traditional virtual machines. The platform's extensibility is achieved through a modular architecture that allows administrators to develop and integrate custom drivers for storage, networking, virtualization, and authentication systems. This driver-based design supports the addition of third-party components without modifying the core codebase, promoting adaptability to specific infrastructure needs. OpenNebula also features a marketplace system for distributing applications and blueprints, including public repositories like the official OpenNebula Marketplace with over 48 pre-configured appliances (e.g., for Kubernetes or common Linux distributions) and private marketplaces for custom sharing via HTTP or S3 backends. These elements enable rapid deployment of reusable cloud services and foster community-driven extensions. Regarding standards compliance, OpenNebula aligns with the Open Cloud Computing Interface (OCCI) through dedicated ecosystem projects that implement OCCI 1.1 for interoperable management across IaaS providers. This support enables standardized API calls for compute, storage, and network operations, enhancing portability in multi-cloud setups. Similarly, while not natively embedded, OpenNebula integrates with the Topology and Orchestration Specification for Cloud Applications (TOSCA) via model-driven tools in collaborative projects, allowing description and deployment of portable cloud applications that map to OpenNebula's infrastructure. These alignments ensure compatibility with broader cloud ecosystems and reduce vendor lock-in.
OpenNebula 7.0.1 adds simplified integration for enhanced federated authentication compliance.
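As a minimal sketch of the XML-RPC interface described above, the following Python snippet builds a client against a front-end using only the standard library. The endpoint and credentials are assumptions (they mirror the ONE_XMLRPC and ONE_AUTH environment variables used by the CLI tools); the actual API call is shown commented out because it requires a running front-end.

```python
import os
import xmlrpc.client

# Endpoint and credentials are assumptions for illustration; adjust to your deployment.
ONE_XMLRPC = os.environ.get("ONE_XMLRPC", "http://localhost:2633/RPC2")
ONE_AUTH = os.environ.get("ONE_AUTH", "oneadmin:opennebula")  # "user:password" session string

# Creating the proxy performs no network I/O; calls are made lazily.
client = xmlrpc.client.ServerProxy(ONE_XMLRPC)

# Against a live front-end, a call like the following would list the VMs
# visible to the authenticated user (-2 = all accessible resources):
# ok, vm_pool_xml, err_code = client.one.vmpool.info(ONE_AUTH, -2, -1, -1, -1)
```

The higher-level OCA bindings wrap these same methods with native objects, so this raw form is mainly useful for quick scripting or languages without an official binding.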

Architecture

Core Components

Hosts in OpenNebula represent the physical machines that provide the underlying compute resources for the cloud, running hypervisors such as KVM or LXC to host virtual machines (VMs). These hosts are registered in the system using their hostname and associated drivers for monitoring and VM execution, allowing OpenNebula to track their CPU, memory, and storage capacity while enabling the scheduler to deploy VMs accordingly. Clusters serve as logical groupings of multiple hosts within OpenNebula, facilitating resource pooling by sharing common datastores and virtual networks across the group. This organization supports efficient load balancing, high availability, and simplified management of distributed resources without altering individual host configurations. Virtual Machine Templates define reusable configurations for instantiating VMs, specifying attributes such as CPU count, memory allocation, disk attachments, and network interfaces. These templates enable administrators to standardize VM deployments, allowing multiple instances to be created from a single definition while permitting user-specific customizations like varying memory sizes within predefined limits. Images in OpenNebula encapsulate the storage elements for VMs, functioning as disk files or block devices that hold operating systems, application data, or context files, categorized into system images for bootable OS disks, regular images for persistent storage, and file types for contextual elements like scripts. They are managed within datastores or marketplaces, supporting persistency options where changes are either retained for exclusive VM use or discarded to allow multi-VM sharing, and progress through states like ready, used, or locked during operations.
Virtual Machines (VMs) are the instantiable compute entities in OpenNebula, created from templates and managed throughout their lifecycle, which includes states such as pending (awaiting deployment), running (actively executing), stopped (powered off with state preserved), suspended (paused with files on host), and done (completed and archived). This state machine governs operations like migration, snapshotting, and resizing, ensuring controlled transitions and recovery from failures. Virtual Networks provide the logical connectivity framework for VMs in OpenNebula, assigning IP leases and linking virtual network interfaces to physical host devices via modes like bridging or VXLAN for isolation and segmentation. They integrate with security groups to enable secure, scalable networking across clusters, supporting both public and private connectivity without direct exposure to underlying hardware details.
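To illustrate the template concept described above, a minimal VM template might look like the following sketch. The image name, network name, and sizing values are placeholders and must correspond to resources that already exist in a deployment.

```
# Minimal VM template sketch (names and sizes are placeholders).
# Register with:    onetemplate create vm.tpl
# Instantiate with: onetemplate instantiate <template_id>
NAME   = "web-server"
CPU    = "1"          # physical CPU share requested from the host
VCPU   = "2"          # virtual CPUs exposed to the guest
MEMORY = "2048"       # memory in MB
DISK   = [ IMAGE = "ubuntu-base", IMAGE_UNAME = "oneadmin" ]
NIC    = [ NETWORK = "private-net" ]
GRAPHICS = [ TYPE = "VNC", LISTEN = "0.0.0.0" ]
```

Each instantiation clones the referenced image into a System datastore and leases an address from the referenced virtual network, tying together the component types described in this section.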

Deployment Models

OpenNebula supports a range of deployment models tailored to different scales and environments, from small private clouds to large-scale hybrid and edge infrastructures. These models emphasize openness and flexibility, allowing deployment on any datacenter with support for both virtualized and containerized workloads across physical or virtual resources. The basic deployment model consists of a single front-end node managing worker hosts, suitable for small to medium private clouds with up to 2,500 servers and 10,000 virtual machines. This uses local or shared storage and basic networking, providing a straightforward setup for testing or initial production environments without high-availability requirements. For larger or mission-critical setups, the advanced reference architecture employs a Management Cluster comprising multiple front-end nodes for high availability, alongside a Cloud Infrastructure layer handling hosts, storage, and networks. This model reduces downtime through redundant core services and supports horizontal scaling, making it ideal for environments requiring robust service continuity. Hybrid and edge models enable federated clusters that span on-premises datacenters, public clouds, and edge sites, facilitating unified management and workload portability. These deployments support disaster recovery via mechanisms like Ceph storage mirroring and allow seamless integration of virtual machines and Kubernetes-based containers on bare-metal or virtualized resources, avoiding vendor lock-in. In telco cloud scenarios, OpenNebula integrates with telecommunications networks for network functions virtualization (NFV), supporting both virtual network functions (VNFs) and containerized network functions (CNFs) through enhanced platform awareness (EPA), GPU acceleration, and technologies like DPDK and SR-IOV. Key architectures include highly distributed NFV deployments across geo-distributed points of presence (PoPs) and edge setups for open radio access networks (O-RAN) and multi-access edge computing (MEC), enabling low-latency applications such as user plane functions (UPF) and content delivery networks (CDN).
Scalability options range from single-node testing environments to multi-site enterprises via federated zones, where a master zone coordinates slave zones for shared user management and resource access. Blueprint guidance, such as the official reference architectures, provides architects with recommended configurations for these models, including automation tools like the Ansible-based one-deploy for streamlined installation.

Front-end Management

The front-end node serves as the central server in OpenNebula, orchestrating operations by running core services such as the OpenNebula daemon (oned), schedulers, and various drivers responsible for resource monitoring and decision-making. The oned daemon acts as the primary engine, managing interactions with cluster nodes, virtual networks, datastores, users, and groups through an XML-RPC API exposed on port 2633. Drivers, including those for virtual machine management (VMM), authentication (Auth), information monitoring (IM), marketplaces (Market), datastores (Datastore), virtual network management (VNM), and transfer management (TM), are executed from the /var/lib/one/remotes/ directory to handle specific operational tasks. For high availability, OpenNebula supports multi-node front-end configurations using a distributed consensus algorithm (Raft) integrated into the oned daemon, which tolerates at least one failure across three or more nodes. This setup requires an odd number of identical servers (typically three or five) with shared filesystems, a floating IP for the leader node, and database synchronization via tools like onedb backup and onedb restore for MySQL or SQLite backends. The system ensures continuous operation by electing a new leader if the current one fails, with replicated logs maintaining consistency across the cluster. User interactions with the front-end are facilitated through multiple management interfaces, including the Sunstone web UI (powered by FireEdge on port 2616), the command-line interface (CLI) via the one command suite, and programmatic APIs such as XML-RPC and the OCA language bindings. Core services extend to the scheduler, which handles VM placement using policies like the Rank Scheduler to optimize allocation on available hosts, and hooks that trigger custom scripts in response to resource state changes or API calls. Deployment on the front-end requires a Linux-based operating system with runtime dependencies for the oned daemon and XML libraries for data handling, alongside optional components like MySQL or MariaDB for the database.
Scalability is achieved through clustering in high-availability modes, allowing the front-end to manage larger infrastructures without single points of failure.
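The multi-node HA behavior described above is driven by the RAFT section of oned.conf. The sketch below shows the tunable parameters involved; the values are illustrative and should be checked against the documentation for the deployed version rather than copied verbatim.

```
# oned.conf RAFT tuning for an HA front-end (illustrative values).
RAFT = [
    LIMIT_PURGE          = 100000,  # max log records purged per cycle
    LOG_RETENTION        = 500000,  # replicated log records kept per server
    LOG_PURGE_TIMEOUT    = 600,     # seconds between log purges
    ELECTION_TIMEOUT_MS  = 2500,    # follower timeout before starting an election
    BROADCAST_TIMEOUT_MS = 500,     # leader heartbeat period
    XMLRPC_TIMEOUT_MS    = 450      # timeout for replication-related API calls
]
```

Shorter election and broadcast timeouts give faster failover at the cost of more background traffic, which is the main trade-off when sizing a three- or five-node front-end cluster.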

Compute Hosts

In OpenNebula, compute hosts serve as the physical worker nodes that provide the underlying resources for VM execution, managed through supported hypervisors to enable workload deployment on the cloud infrastructure. These hosts are typically standard servers equipped with compatible hardware, where the primary hypervisor is KVM for full virtualization, alongside LXC for lightweight container-based workloads and integration with VMware vCenter for hybrid environments. To set up a host, administrators add it to the system via the front-end using the onehost create command, specifying the hostname along with the information manager (IM) and virtualization manager (VM) drivers, such as --im kvm --vm kvm for KVM-based setups. Monitoring of compute hosts is handled by dedicated agents that run periodically on each host to gather utilization data, including CPU (e.g., total cores, speed in MHz, free/used percentages), memory (total, used, and free in KB), and storage metrics (e.g., read/write bytes and I/O operations). These agents execute probes from directories like im/kvm-probes.d/host/monitor and transmit the collected data as structured messages (e.g., MONITOR_HOST) to the OpenNebula front-end via SSH, where it is stored in a time-series database for use in scheduling decisions and capacity planning. The front-end then aggregates this information, allowing administrators to view host states and metrics through commands like onehost show or the Sunstone web interface. Hosts can be organized into clusters to facilitate efficient resource pooling, with grouping achieved by adding hosts to a cluster using onecluster host-add or via the web interface. This clustering supports load balancing by enabling the scheduler to distribute virtual machines across hosts based on availability and policies, while also enhancing fault tolerance through organized resource grouping and high-availability configurations that maintain operations during node failures. OpenNebula accommodates heterogeneous hardware in clusters, allowing hosts with varying CPU architectures, capacities, or hypervisors to coexist without requiring uniform setups.
The lifecycle of compute hosts is managed entirely through the OpenNebula front-end, supporting dynamic addition and removal to scale the infrastructure as needed. New hosts are enabled with onehost enable after creation, entering an active state for VM deployment, while existing ones can be temporarily disabled (onehost disable) for maintenance or fully removed (onehost delete) without disrupting the overall system, provided no running VMs are present. States such as offline can also be set for long-term decommissioning, ensuring seamless integration of diverse hardware during expansions. Security for compute hosts relies on secure communication channels and hypervisor-enforced isolation to protect the environment. OpenNebula uses passwordless SSH for front-end to host interactions, configured by generating and distributing the oneadmin user's public key to target hosts, ensuring encrypted and authenticated connections without interactive prompts. Additionally, hypervisors like KVM provide inherent VM isolation through hardware-assisted virtualization features, such as extended page tables (EPT) for memory isolation, preventing inter-VM interference on the same host.
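The SSH key setup and host registration steps described above can be sketched as the following shell sequence. The hostname host01 and the key path are placeholders; the registration commands require a live front-end and are therefore shown commented out.

```shell
# Sketch of enabling passwordless SSH from the front-end to a new KVM host.
# "host01" is a placeholder; oneadmin normally keeps its key under ~/.ssh.
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t ed25519 -N "" -f /tmp/demo_key -q   # generate a demo key pair
ls /tmp/demo_key.pub                              # public key to distribute

# Distribute the key, then register and enable the host (live deployment only):
# ssh-copy-id -i /tmp/demo_key.pub oneadmin@host01
# onehost create host01 --im kvm --vm kvm
# onehost enable host01
```

Once the key is in place, the front-end's monitoring probes and VM deployment commands run over the same non-interactive SSH channel.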

Storage

OpenNebula manages storage through datastores, which are logical storage units configured to handle different types of data for virtual machines (VMs). There are three primary datastore types: the System datastore, which stores the disks of running VMs cloned from images; the Image datastore, which holds the repository of base operating system images, persistent data volumes, and CD-ROM images; and the File datastore, which manages plain files such as kernels, ramdisks, and contextualization files for VMs. These datastores can be implemented using various backends, including NFS for shared file-based storage accessible across hosts, Ceph for distributed storage that enables scalability and fault tolerance, and LVM for block-based storage on SAN environments. For instance, NFS setups typically mount shared volumes on the front-end and hosts, while Ceph utilizes RADOS Block Device (RBD) pools for VM disks, and LVM creates logical volumes from shared LUNs to optimize I/O performance. Image management in OpenNebula allows administrators to register images via the CLI using commands like oneimage create --path <file> --datastore <ID>, supporting formats such as QCOW2 for copy-on-write efficiency and RAW for direct block access. Cloning operations, performed with oneimage clone <name> <new_name> --datastore <ID>, enable duplication of images to new datastores or for creating persistent copies, while snapshotting facilitates backups and rollbacks through commands like oneimage snapshot-flatten to merge changes or snapshot-revert for restoration, though images with active snapshots cannot be cloned until flattened. Integration with shared storage is handled via specialized drivers: the Ceph driver supports distributed setups with replication for disaster recovery (DR) mirroring across pools, ensuring data availability; NFS drivers enable seamless shared access across hosts; and LVM drivers provide block-level operations with thin snapshots for efficient space usage in DR scenarios.
These drivers ensure compatibility with VM attachment on hosts, allowing disks to be provisioned dynamically. Allocation in datastores supports dynamic provisioning through quotas set per datastore, such as SIZE in MB to limit total capacity (e.g., 20480 for 20 GB) and IMAGES for the maximum number of images, applied per user or group to prevent overuse. Transfers between datastores are achieved by cloning images to a target datastore or using a marketplace appliance as an intermediary for moving files across different backends, with usage tracked via attributes like DATASTORE_USED. Best practices recommend separating System datastores from Image and File datastores to enhance performance, as System datastores handle high-I/O runtime disks while Image datastores focus on static repositories, reducing contention during VM deployments.
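A datastore quota of this kind might be applied per user with oneuser quota; the user name and limits below are hypothetical:

```shell
# Limit user "exampleuser" to 20 GB and 10 images in datastore 1
cat > quota.txt <<'EOF'
DATASTORE = [
  ID     = 1,
  SIZE   = 20480,   # total capacity in MB (20 GB)
  IMAGES = 10       # maximum number of images
]
EOF
oneuser quota exampleuser quota.txt
```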

Networking

OpenNebula's networking subsystem enables the configuration of virtual and physical networks to facilitate VM connectivity while ensuring service isolation. Virtual networks serve as logical overlays that abstract the underlying infrastructure, allowing administrators to define isolated environments for virtual machines (VMs). These networks provide dynamic IP and MAC address leases through address ranges (ARs), where administrators specify IPv4 or IPv6 pools, and OpenNebula automatically generates MAC addresses for Ethernet-based ranges. For example, an AR might allocate addresses from 10.0.0.150 to 10.0.0.200 with a size of 51, ensuring efficient resource utilization without manual assignment. Isolation in virtual networks is achieved through technologies such as VLANs via the 802.1Q driver or VXLAN overlays, which encapsulate traffic to segment VMs across physical hosts. In 802.1Q mode, OpenNebula assigns VLAN IDs from a configured pool (e.g., starting from ID 2, excluding reserved IDs such as 0, 1, and 4095), tagging ports on bridges like virbr0 for secure separation. VXLAN extends this by creating overlay networks with virtual network identifiers (VNIs) from a similar pool, supporting live updates to attributes like MTU and enabling scalable isolation in large deployments. These mechanisms ensure VM traffic remains contained, preventing unauthorized inter-VM communication while allowing contextualization features like DNS server assignment (e.g., 10.0.0.23) for seamless integration. Physical networks form the foundational layer in OpenNebula, comprising the hardware-level connections on compute hosts that underpin overlays. These are typically divided into segments for external access, bridged directly to internet-facing NICs, and internal segments for VM-to-VM interactions, configured via parameters in the oned.conf file such as NETWORK_SIZE (default 254, used for network sizing) and MAC_PREFIX (e.g., 02:00 for generated addresses).
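A virtual network carrying the example address range above might be defined with a template like the following; the network name, VLAN ID, and physical device are illustrative:

```shell
cat > private-vnet.txt <<'EOF'
NAME    = "private"
VN_MAD  = "802.1Q"       # VLAN isolation driver
PHYDEV  = "eth0"         # physical uplink (illustrative)
VLAN_ID = "50"           # explicit tag; omit to draw from the automatic pool
DNS     = "10.0.0.23"    # contextualization attribute passed to VMs
AR = [
  TYPE = "IP4",
  IP   = "10.0.0.150",   # first address of the range
  SIZE = "51"            # covers 10.0.0.150 through 10.0.0.200
]
EOF
onevnet create private-vnet.txt
```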
External networks often use the host's primary physical interface (PHYDEV) for outbound traffic, while internal ones rely on isolated bridges to maintain separation, with OpenNebula mapping virtual network traffic and bandwidth demands onto these physical resources. OpenNebula supports multiple networking models to accommodate diverse environments, including flat (direct attachment without encapsulation), bridged (using Linux bridges for VM traffic passthrough), and software-defined networking (SDN) via Open vSwitch for advanced isolation. In bridged mode, VM interfaces connect transparently to the host's bridge (e.g., onebr0), enabling straightforward external connectivity without additional overhead. Open vSwitch integration provides VLAN tagging on ports and basic filtering rules, ideal for multi-tenant setups. Isolation is enforced through security groups, which apply rules (e.g., iptables-based) to network interfaces, and access control lists (ACLs) that restrict operations like NIC attachment based on user permissions. For instance, the default security group applies to all networks upon creation, filtering traffic at the VM level to prevent spoofing of IP and MAC addresses. Integration with physical NICs occurs through drivers specified in virtual network templates, where attributes like PHYDEV designate the uplink (e.g., eth0) for bridging or VLAN tagging. OpenNebula also supports hybrid cloud extensions, allowing virtual networks to interconnect with providers like AWS or Azure, enabling seamless bursting of workloads across on-premises and remote infrastructures. Advanced features include virtual routers, deployed as appliances to handle routing between virtual networks and provide load balancing for services; for example, a Service Virtual Router can manage IP forwarding and NAT across multiple VNETs, distributing traffic to backend VMs. Quality of Service (QoS) parameters, such as OUTBOUND_AVG_BW (e.g., 1000 Kbps), further optimize bandwidth allocation on these connections.
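Security groups and NIC-level QoS can be combined as in the following sketch; the group name, rules, and bandwidth value are hypothetical:

```shell
# Security group allowing inbound SSH and ICMP to its VMs
cat > ssh-only.txt <<'EOF'
NAME = "ssh-only"
RULE = [ PROTOCOL = "TCP",  RULE_TYPE = "inbound", RANGE = "22:22" ]
RULE = [ PROTOCOL = "ICMP", RULE_TYPE = "inbound" ]
EOF
onesecgroup create ssh-only.txt

# NIC fragment for a VM template: attach the group by ID and
# cap outbound average bandwidth (illustrative values)
cat >> vm-template.txt <<'EOF'
NIC = [
  NETWORK         = "private",
  SECURITY_GROUPS = "100",     # ID of the group created above
  OUTBOUND_AVG_BW = "1000"
]
EOF
```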
