
System virtual machine

A system virtual machine, also known simply as a virtual machine, is an efficient, isolated duplicate of a real computer system that emulates the underlying hardware to enable multiple distinct operating systems to execute concurrently on a single physical host machine. This virtualization is facilitated by a virtual machine monitor (VMM), or hypervisor, which intercepts and manages access requests from guest operating systems, ensuring resource isolation, security, and controlled sharing of the host's CPU, memory, storage, and I/O devices. Unlike process virtual machines, which abstract the execution environment for individual applications within a single host OS (such as the Java Virtual Machine), system virtual machines provide full hardware virtualization, allowing unmodified guest OSes to run as if on dedicated physical hardware.

The concept originated in the 1960s amid the shift toward time-sharing and multiprogramming systems, with IBM's CP-40 and Cambridge Monitor System (CMS) in the mid-1960s representing an early implementation on a modified System/360 Model 40, which supported dynamic address translation to enable isolated user environments. This evolved into the more robust IBM VM/370 in 1972, a production-ready VMM that supported multiple virtual machines running diverse operating systems like OS/360 and DOS, achieving low overhead of about 10-15% through direct execution of most instructions. Formal requirements for such virtual machine monitors were established in 1974 by Gerald J. Popek and Robert P. Goldberg, who defined conditions for "virtualizable" architectures in which all sensitive instructions are privileged, enabling efficient trapping by the VMM without excessive emulation. System virtual machines saw a resurgence in the late 1990s and 2000s, driven by server consolidation needs and the rise of x86 architectures, which initially lacked native virtualization support. Pioneering work included the Disco project in 1997, which applied virtual machine monitors to scalable multiprocessors running commodity OSes, and VMware's 1999 release of a hosted hypervisor using binary translation to overcome x86 limitations.
Subsequent innovations like Xen in 2003 introduced paravirtualization, where guest OSes are modified for better performance, and hardware-assisted virtualization via extensions such as Intel VT-x (2005) and AMD-V, which trap sensitive operations natively to reduce overhead. These advancements enabled widespread adoption in cloud computing, data centers, and high-availability environments, supporting applications like workload isolation, live migration, and multi-tenancy.

Overview

Definition and characteristics

A system virtual machine (SVM) is a software implementation that emulates or virtualizes an entire physical computer system, enabling multiple independent operating systems to execute concurrently on a single host machine as though each were running on dedicated hardware. This complete emulation provides a full system platform supporting an operating system along with its applications, in contrast to process virtual machines that support only individual processes. Key characteristics of system virtual machines include strong isolation between guest operating systems (OSes), where each guest runs in a protected environment that prevents interference from others, akin to separate physical machines but with enhanced fault containment. They facilitate resource sharing across CPU, memory, storage, and I/O devices among multiple guests, allowing efficient utilization of the host's capacity while abstracting the underlying physical components to present a uniform interface to each guest OS. This abstraction supports running guest OSes either unmodified, with minimal modifications, or through full emulation, promoting portability and workload mobility for tasks like server consolidation, where multiple underutilized servers are combined onto one physical host to reduce costs and improve efficiency.

Operationally, a system virtual machine relies on a hypervisor, or virtual machine monitor (VMM), a thin software layer that sits between the guest OSes and the host hardware, intercepting and managing privileged instructions or hardware calls from the guests to ensure secure and controlled access. The VMM emulates hardware operations as needed, partitioning resources dynamically while maintaining isolation through techniques such as trapping sensitive instructions. Common types include full virtualization, which operates transparently to unmodified guest OSes by completely emulating hardware, and paravirtualization, which requires minor guest OS modifications for direct hypervisor communication to optimize performance and efficiency.
Modern processors often incorporate hardware virtualization features like extended instruction sets to facilitate efficient trapping and emulation in the VMM layer.
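The trap-and-emulate control flow described above can be illustrated with a toy model. The instruction names, state fields, and dispatch logic below are all hypothetical simplifications, not any real hypervisor's design:

```python
# Toy model of trap-and-emulate: unprivileged guest instructions run
# "directly", while sensitive ones trap to a VMM handler that applies
# their effects to per-guest virtualized state only.

SENSITIVE = {"out", "load_cr3", "hlt"}  # instructions that must trap

def vmm_emulate(op, arg, vm_state):
    """Hypervisor handler: emulate the operation against virtual state."""
    if op == "load_cr3":
        vm_state["page_table_root"] = arg              # virtual CR3, not the host's
    elif op == "out":
        vm_state.setdefault("io_log", []).append(arg)  # emulated I/O device
    elif op == "hlt":
        vm_state["halted"] = True

def run_guest(program, vm_state):
    """Execute a guest instruction stream, trapping sensitive ops."""
    for op, arg in program:
        if op in SENSITIVE:
            vmm_emulate(op, arg, vm_state)  # "VM exit" to the hypervisor
        else:
            vm_state["acc"] = arg           # direct execution (toy stand-in)
    return vm_state

guest = [("mov", 7), ("out", 7), ("load_cr3", 0x1000), ("hlt", None)]
state = run_guest(guest, {"acc": 0})
print(state["io_log"], hex(state["page_table_root"]), state["halted"])
```

The key property mirrored here is that the guest never touches shared host state directly: every sensitive operation is routed through the VMM, which confines its effect to that guest's virtual machine state.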

Distinction from process virtual machines

System virtual machines (SVMs) virtualize the entire underlying hardware platform, enabling the execution of complete guest operating systems as if they were running on dedicated physical hardware, such as hosting Windows on a Linux-based host. In contrast, process virtual machines (PVMs) virtualize only the runtime environment for individual applications or processes within a single host operating system, without emulating a full OS, as exemplified by the Java Virtual Machine (JVM) or the .NET Common Language Runtime (CLR). The scope of SVMs encompasses system-level virtualization, where multiple independent OS instances can run concurrently with emulated resources like CPUs, memory, and I/O devices, providing strong separation between guests. PVMs, however, operate at the application level, abstracting the host OS's execution environment to ensure portability across different underlying systems without needing full hardware emulation or OS modification. This distinction arises because SVMs simulate a complete machine architecture, while PVMs focus on language- or application-specific instruction sets. SVMs are primarily designed for infrastructure consolidation, allowing multiple OS environments to share physical resources efficiently, and for hosting diverse operating systems on the same hardware. PVMs, by comparison, aim to enable cross-platform application execution, managed code safety, and sandboxing to prevent interference with the host system. For instance, SVMs like KVM rely on a hypervisor to manage guest OS interactions with hardware, whereas PVMs such as the Java Virtual Machine employ interpreters or just-in-time (JIT) compilers to execute bytecode within the host OS.

History

Early developments (1960s–1980s)

The concept of the system virtual machine originated in the mid-1960s at IBM's Cambridge Scientific Center, where engineers developed CP/CMS (Control Program/Conversational Monitor System) as an experimental time-sharing system for the System/360 mainframe family. Conceived in 1964 to enable interactive computing and reduce operational overhead in batch-oriented environments, CP/CMS allowed multiple users to share a single physical machine by creating isolated virtual environments that simulated independent System/360 instances. This approach was driven by the need to maximize mainframe efficiency amid rising demand for time-sharing, drawing inspiration from earlier systems like MIT's CTSS while introducing virtual machine partitioning to support diverse workloads such as OS research and application development.

In 1966, prototypes like CP-40/CMS became operational on a modified System/360 Model 40, followed by CP-67/CMS on the System/360 Model 67, which incorporated hardware support for address translation through paging and segmentation. By 1967, CP/CMS entered production use, marking the first practical implementation of system virtualization on commercial hardware and enabling up to hundreds of concurrent users on a single host without compromising isolation. These early systems emphasized resource partitioning, where the Control Program (CP) managed hardware allocation and the Conversational Monitor System (CMS) provided a lightweight, interactive OS for each virtual machine, laying the groundwork for concepts like virtual storage that abstracted physical memory limitations. The 1970s saw the formal commercialization and expansion of these ideas with the release of VM/370 in August 1972 for the IBM System/370 architecture, which introduced standardized virtual memory hardware to all models, supporting full OS isolation and 4 KB pages for efficient memory management.
VM/370 became IBM's first official virtual machine product, widely adopted in enterprise mainframes for running multiple guest operating systems like OS/360 or MVS in isolated partitions, thereby enhancing system utilization and stability in high-volume data processing environments. This era solidified virtualization as a tool for mainframe efficiency, though its adoption remained largely confined to IBM ecosystems due to the high cost of compatible hardware. Refinements in the 1980s focused on performance and architectural compatibility, with VM/SP (System Product) released in December 1980 to integrate advanced features like the XEDIT editor and improved I/O handling for larger-scale deployments. In 1982, the VM/SP High Performance Option (HPO) enhanced throughput for interactive workloads, while VM/XA (Extended Architecture) in 1988 extended support to 31-bit addressing on System/370-XA hardware, allowing virtual machines to utilize up to 2 GB of contiguous virtual storage, a significant leap from the prior 24-bit limit of 16 MB. These developments maintained VM's role as a foundational platform for resource partitioning and time-sharing, prioritizing strong isolation over the commodity-hardware abstractions that would emerge later, and ensuring backward compatibility with earlier System/360 and System/370 software.

Modern advancements (1990s–present)

The 1990s marked a pivotal shift in system virtual machines (SVMs) from proprietary mainframe environments to accessible solutions on commodity x86 hardware, driven by the need for cost-effective server consolidation amid growing demands during the dot-com boom. In 1998, VMware was founded, and in 1999, it released VMware Workstation, the first product to enable full x86 virtualization through dynamic binary translation, allowing multiple operating systems to run on standard PCs without specialized hardware. This innovation democratized SVMs, addressing the inefficiencies of physical servers in expanding data centers by enabling resource sharing and workload isolation on affordable hardware.

The 2000s saw accelerated adoption through open-source initiatives and commercial integrations, fostering an ecosystem that responded to surging requirements for scalability and efficiency. The Xen hypervisor, released in 2003 by the University of Cambridge, introduced paravirtualization as an open-source Type 1 hypervisor under the GNU General Public License (GPL), optimizing performance by modifying guest OSes for better cooperation while enabling precise resource metering. In 2006, Amazon Web Services (AWS) launched EC2 in beta, initially leveraging Xen to provide on-demand virtualized compute resources, which spurred cloud-based SVM deployments. KVM, integrated into the Linux kernel in 2007 as a GPL-licensed module, combined hardware-assisted virtualization with QEMU for device emulation, broadening open-source accessibility. Microsoft followed with Hyper-V in 2008, embedded in Windows Server 2008, offering paravirtualization options alongside full virtualization to support enterprise Windows environments. This era's open-source shift, exemplified by GPL licensing for Xen and KVM, reduced barriers to entry and fueled widespread adoption. From the 2010s onward, SVMs evolved into cloud-native paradigms, integrating with container orchestration and advancing security features to meet hyperscale demands. AWS transitioned EC2 instances to KVM-based hypervisors by the late 2010s, enhancing performance for massive-scale deployments, while platforms like Kubernetes (launched 2014) enabled seamless orchestration of containers atop SVMs, blending lightweight isolation with virtualized infrastructure for hybrid workloads.
Advancements in nested virtualization, supported by hardware extensions like Intel VT-x with EPT since 2010, allowed hypervisors to run within VMs, facilitating hypervisor development and cloud migration without refactoring. In security, AMD introduced Secure Encrypted Virtualization (SEV) in 2017 with EPYC processors, encrypting VM memory to protect against hypervisor and host attacks, bolstering multi-tenant cloud security. In 2025, advancements include AI-driven VM scheduling for better resource optimization and increased focus on hypervisor diversification to mitigate vendor lock-in risks. These developments propelled the virtualization market to reach approximately $86 billion in 2024 and $99 billion in 2025, underscoring SVMs' role in efficient, secure cloud operations.

Core Techniques

Hardware-assisted virtualization

Hardware-assisted virtualization leverages specialized CPU extensions to enable efficient execution of virtual machines by directly supporting the trapping and emulation of sensitive instructions, thereby avoiding the need for complete software-based emulation of the underlying hardware. These extensions, such as Intel's Virtualization Technology (VT-x) and AMD's Secure Virtual Machine (SVM), allow the guest operating system to run in a dedicated non-root mode where most instructions execute natively on the host CPU, while privileged operations automatically trigger exits to the hypervisor for controlled handling. This approach facilitates trap-and-emulate handling with direct execution of unmodified guest code, maintaining isolation without guest modifications. The technique addresses fundamental challenges in the x86 architecture, where the traditional design compresses privilege rings, making it difficult for a guest kernel to operate in the most privileged ring (ring 0) without risking host security or requiring extensive software workarounds. Hardware extensions introduce mechanisms like the Virtual Machine Control Structure (VMCS) in VT-x, which manages guest state and controls VM entries and exits, effectively deprivileging the guest to run in a simulated ring 0 that traps sensitive instructions to the hypervisor in true ring 0. This ring deprivileging ensures that the guest perceives full privilege while the hypervisor intercepts operations needing host resources, such as page table modifications or I/O accesses. Central to this model is the trap-and-emulate paradigm, where hardware automatically detects and traps privileged or sensitive operations, such as attempts to access control registers or execute I/O instructions, routing them to the hypervisor for emulation on behalf of the guest. For memory virtualization, shadow page tables serve as a hypervisor-maintained mapping layer that translates guest virtual addresses directly to host physical addresses, trapping guest writes to page tables to update the shadows synchronously and preserve consistency.
To further optimize memory virtualization, features like Intel's Extended Page Tables (EPT) introduce hardware-accelerated second-level address translation, combining guest and host page walks into a single process to eliminate many traps associated with shadow tables. Similarly, Intel VT-d and AMD's equivalent IOMMU extensions support I/O virtualization by enabling direct device assignment to guests, handling DMA remapping and interrupt virtualization in hardware to minimize hypervisor involvement in data transfers. These hardware capabilities, first commercialized around 2005-2006, dramatically lower virtualization overhead by reducing the frequency and cost of hypervisor interventions compared to pure software methods, often achieving under 5% performance degradation in CPU-bound workloads relative to native execution, in contrast to 10-30% or higher in software emulation scenarios. For instance, VMware ESXi employs Intel VT-x and EPT to deliver near-native guest performance across diverse applications, leveraging these extensions for efficient trap handling and memory management.
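The shadow page table scheme described above can be sketched with flat, page-granular dictionaries standing in for the real multi-level x86 structures (an illustrative model, not actual hypervisor code):

```python
# Sketch of shadow page tables: the guest maintains guest-virtual to
# guest-physical (GVA -> GPA) mappings; the hypervisor owns the GPA ->
# host-physical (HPA) map; the hardware-visible shadow table holds the
# composed GVA -> HPA mapping, refreshed when a guest page-table write
# traps to the VMM.

gpa_to_hpa = {0x1000: 0x9000, 0x2000: 0xA000}  # hypervisor's private map
guest_pt = {}                                   # guest's view: GVA -> GPA
shadow_pt = {}                                  # what the MMU actually walks

def guest_map(gva, gpa):
    """A guest page-table write: traps, and the VMM syncs the shadow."""
    guest_pt[gva] = gpa                 # the guest believes this is the PT
    shadow_pt[gva] = gpa_to_hpa[gpa]    # VMM composes GVA -> HPA

guest_map(0x4000, 0x1000)
guest_map(0x5000, 0x2000)
print(hex(shadow_pt[0x4000]), hex(shadow_pt[0x5000]))
```

Every guest page-table update costs a trap in this scheme, which is exactly the overhead that EPT/NPT hardware walks were introduced to remove.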

Full emulation

Full emulation in system virtual machines involves software-based simulation of an entire, possibly non-native, architecture, enabling the execution of guest operating systems without requiring similarity between the host and guest instruction sets. In this approach, an emulator or hypervisor, such as QEMU, dynamically translates guest instructions into equivalent host instructions, providing complete abstraction from the underlying physical hardware. This technique allows unmodified guest operating systems to run in full isolation, treating the emulated environment as a real machine. Unlike methods that leverage host hardware features, full emulation operates purely in software, offering high portability across architectures but at the expense of performance due to the intensive translation process. Key mechanisms in full emulation include dynamic binary translation, where the emulator decodes guest instructions on-the-fly and generates optimized host code for execution. For instance, QEMU employs its Tiny Code Generator (TCG), a portable dynamic translator that breaks down guest instructions into micro-operations, which are then compiled into host-specific code blocks using TCG's internal code generation backend. These translation blocks are cached (with a default cache size of 32 MB) to avoid redundant translation during repeated execution, improving efficiency through direct block chaining. Instruction decoding simulates the guest CPU's behavior cycle by cycle, while device emulation relies on software models that mimic components such as disks, network interfaces, and peripherals without direct hardware access. This process ensures faithful replication of the guest system's behavior, including interrupts and I/O operations, all handled in user space on the host. Full emulation provides exceptional flexibility for running legacy or foreign architectures, such as emulating ARM-based systems on an x86 host, which is particularly useful for development, testing, and migration scenarios where hardware diversity is a barrier.
Tools like Bochs exemplify this by offering a portable x86 emulator that simulates every instruction and PC device, supporting operating systems like Linux and Windows without host hardware dependencies. Early versions of VMware Workstation also incorporated full emulation elements, using dynamic binary translation for CPU virtualization and software emulation for I/O devices to achieve compatibility on non-virtualizable x86 hardware. Performance overhead arises primarily from the translation and simulation layers; for example, QEMU's full system emulation incurs approximately a 2x slowdown from software memory management on top of user-mode translation costs, resulting in overall speeds that can be 5-10 times slower than native execution for integer workloads, though optimizations like caching mitigate this to varying degrees. The core concept of full emulation emphasizes complete hardware abstraction, creating a self-contained environment isolated from the host's specifics, which makes it ideal for scenarios requiring precise control or cross-platform portability, such as legacy software preservation or prototyping on dissimilar architectures. However, its software-only nature limits its suitability for performance-sensitive workloads, where the emulation overhead can hinder performance, though hardware-acceleration techniques can partially alleviate this in hybrid setups.
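The translate-once, cache, and re-execute loop at the heart of dynamic binary translation can be sketched as follows. The three-instruction "guest ISA" and the block layout here are invented for illustration and are vastly simpler than TCG's real pipeline:

```python
# Minimal sketch of a dynamic-binary-translation loop with a
# translation cache: a guest basic block is "compiled" into a host
# callable on first use, then reused on every later execution.

cache = {}  # guest block address -> compiled host function

def translate(block_addr, guest_code):
    """'Compile' a guest basic block into a host-callable function."""
    ops = guest_code[block_addr]
    def compiled(regs):
        for op, dst, val in ops:
            if op == "mov":
                regs[dst] = val
            elif op == "add":
                regs[dst] += val
        return regs
    return compiled

def execute(block_addr, guest_code, regs):
    """Look up the cache; translate only on a miss, then run natively."""
    fn = cache.get(block_addr)
    if fn is None:                            # translation-cache miss
        fn = cache[block_addr] = translate(block_addr, guest_code)
    return fn(regs)

code = {0x100: [("mov", "r0", 5), ("add", "r0", 3)]}
regs = execute(0x100, code, {"r0": 0})
regs = execute(0x100, code, regs)             # second run hits the cache
print(regs["r0"], len(cache))
```

The second call pays no translation cost, which is why translation caches (and block chaining between cached blocks) dominate the performance profile of real emulators.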

Paravirtualization

Paravirtualization is a technique that enhances efficiency in system virtual machines by modifying the guest operating system to interact directly with the hypervisor, replacing hardware-sensitive instructions with explicit hypercalls. This approach avoids the traps and emulations inherent in full virtualization, thereby reducing context switches, emulation overhead, and overall performance penalties. In paravirtualization, the guest operates in a cooperative manner, aware of its virtualized environment, which allows the hypervisor to validate and execute operations more efficiently without simulating underlying hardware. The technique was pioneered by the Xen project in 2003, where guest OSes such as Linux (modified with approximately 3,000 lines of code) and NetBSD were adapted to run on an idealized virtual architecture. Key mechanisms include paravirtualized operations (paravirt ops), which provide kernel-level interfaces for issuing hypercalls to the hypervisor for tasks like page table updates and privilege level changes. For I/O virtualization, Xen implements a split driver model: front-end drivers in the guest domain communicate with back-end drivers in a privileged driver domain using asynchronous shared-memory rings and event channels, enabling high-throughput device access without emulation. Additionally, dynamic memory allocation is handled via ballooning, where a guest driver adjusts its memory footprint by inflating a pseudo-device "balloon" to relinquish or reclaim pages from the hypervisor, supporting efficient resource sharing across domains. Paravirtualization delivers performance close to native execution, with benchmarks showing negligible overhead; for instance, Xen achieved SPEC WEB99 throughput within 1% of native Linux, and TCP bandwidth of 897 Mb/s compared to 291 Mb/s in VMware Workstation's full virtualization. This cooperative model trades the complete transparency of unmodified guest OS support for substantial speed gains, but it necessitates open-source or modifiable guests, such as Linux kernels with built-in PV drivers, limiting adoption for closed-source systems.
With the advancement of hardware-assisted virtualization features in modern processors, paravirtualization's role has diminished, as these hardware extensions enable efficient operation of unmodified guests without requiring OS modifications.
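The ballooning mechanism described above reduces to simple bookkeeping between the guest's balloon driver and the hypervisor. This is a toy model with illustrative page counts, not a real driver interface:

```python
# Toy model of memory ballooning: the hypervisor asks a guest's balloon
# driver to "inflate", pinning pages the guest then stops using so the
# host can hand them to another VM; "deflating" returns them.

class BalloonDriver:
    def __init__(self, total_pages):
        self.total = total_pages
        self.balloon = 0                 # pages surrendered to the host

    def inflate(self, pages):
        """Hypervisor requests pages back; guest pins and releases them."""
        self.balloon += pages

    def deflate(self, pages):
        """Hypervisor returns pages; the guest may use them again."""
        self.balloon -= min(pages, self.balloon)

    @property
    def usable(self):
        return self.total - self.balloon

guest = BalloonDriver(total_pages=1024)
guest.inflate(256)        # host reclaims 256 pages for another domain
print(guest.usable)       # 768 pages left for the guest
guest.deflate(128)
print(guest.usable)       # 896 pages
```

The point of the design is that memory pressure is communicated through the guest's own allocator: the balloon competes for pages like any in-guest consumer, so the guest decides which pages to give up.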

Hardware Support

Processor-level features

Processor-level features encompass specialized extensions in central processing unit (CPU) architectures that support virtualization by enabling controlled execution of guest operating systems, managing state transitions, and minimizing overhead through targeted trapping of sensitive instructions. These features expand the traditional x86 privilege ring model, typically limited to rings 0 through 3, by introducing virtualized modes that allow the hypervisor to operate at a higher effective privilege level, conceptually akin to ring -1, thereby reducing the frequency of interventions for non-privileged guest code. This design traps only virtualization-sensitive instructions, allowing the majority of guest operations to execute directly on hardware without hypervisor involvement. Intel's Virtualization Technology (VT-x), introduced in 2005 with the Pentium 4 processors (models 662 and 672), provides core support through the Virtual Machine Extensions (VMX) instruction set, which includes VMXON for entering VMX operation and VMLAUNCH/VMRESUME for VM entry, with VM exits returning control to the hypervisor. The VMX architecture defines two operational modes: VMX root mode for the hypervisor and VMX non-root mode for guests, with the Virtual Machine Control Structure (VMCS) serving as a configurable data structure to manage processor state, including registers and controls for VM entry and exit. VT-x evolved through versions: the basic implementation in 2005 supported fundamental trapping and mode switches; second-generation enhancements in 2008 with the Nehalem microarchitecture incorporated Extended Page Tables (EPT) for efficient address translation; and nested virtualization support arrived in 2010 with Westmere processors, allowing VMs to host further VMs. Subsequent developments include 5-level paging support in EPT since 2017 for larger virtual address spaces.
By the 2020s, VT-x and similar features are integrated into nearly all modern Intel CPUs, enabling capabilities like live migration of VMs across hosts with compatible hardware. AMD's counterpart, AMD-V (initially termed Secure Virtual Machine or SVM), debuted in 2006 and employs similar mechanisms via SVM instructions for VM entry (VMRUN) and hardware-induced exits, with the Virtual Machine Control Block (VMCB) analogous to Intel's VMCS for state management. AMD-V includes Rapid Virtualization Indexing (RVI), introduced in 2007 with the K10 (Barcelona) architecture, which functions like EPT by using nested page tables to directly translate guest physical addresses to host physical addresses, eliminating the need for shadow page tables and reducing overhead. These extensions similarly expand privilege handling, running guests in a less-privileged mode while the hypervisor maintains control, thereby minimizing traps for benign operations. In ARM architectures, the Virtualization Extensions, first introduced in the Armv7-A profile around 2011, add support for a dedicated hypervisor mode (EL2 in Armv8 and later, equivalent to Hyp mode in Armv7), which sits above the EL1 and EL0 privilege levels to manage guest execution and traps. EL2 enables the hypervisor to intercept and emulate privileged instructions from guests running in non-secure EL1, while allowing direct hardware access for non-sensitive code, thus reducing hypervisor intervention similar to x86 approaches. These processor-level aids are foundational to hypervisors like KVM and Xen, where they streamline VM management without delving into memory or I/O details. Armv9-A, introduced in 2021, further enhances virtualization with features like enhanced nested virtualization and memory tagging for improved security.
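On Linux, the presence of these processor extensions can be checked from user space by scanning the flags line of /proc/cpuinfo for `vmx` (Intel VT-x) or `svm` (AMD-V). This sketch is Linux-specific by assumption; on other systems it simply reports nothing:

```python
# Detect hardware virtualization support on Linux by scanning
# /proc/cpuinfo for the "vmx" (Intel VT-x) or "svm" (AMD-V) CPU flags.

def virtualization_flags(path="/proc/cpuinfo"):
    """Return the subset of {'vmx', 'svm'} advertised by the CPU."""
    found = set()
    try:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    found |= {"vmx", "svm"} & set(line.split())
    except OSError:   # not Linux, or the file is unreadable
        pass
    return found

flags = virtualization_flags()
print(flags or "no VT-x/AMD-V flags visible")
```

Note that a missing flag can also mean virtualization is disabled in firmware rather than absent from the silicon, so hypervisors typically confirm support with a CPUID probe as well.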

Memory and I/O virtualization aids

Hardware features for memory virtualization, known as second-level address translation (SLAT), provide efficient mapping between guest-physical and host-physical addresses, alleviating the need for software-managed shadow page tables in the hypervisor. Intel's Extended Page Tables (EPT), introduced in 2008 with the Nehalem microarchitecture, implement SLAT through a secondary page table hierarchy that hardware walks on TLB misses, reducing hypervisor overhead and VM exits for page faults. This mechanism supports up to 48-bit guest-physical addressing in its four-level paging mode, enabling large memory virtualization without excessive translation latency. EPT has demonstrated performance gains of up to 48% in MMU-intensive benchmarks like Apache compilation and up to 600% in microbenchmarks, primarily by minimizing TLB miss handling in software. AMD's Nested Page Tables (NPT), introduced in 2007 as part of AMD-V enhancements, offer analogous SLAT functionality with a two-dimensional paging structure that maps guest-physical pages directly to host-physical pages, further reducing translation overhead and improving scalability for memory-intensive virtual machines. Both EPT and NPT employ nested paging to isolate guest memory spaces while allowing the hypervisor to manage host resources efficiently, with hardware caching of translations to mitigate the increased TLB miss latency from the additional paging level. For I/O virtualization, Intel's Virtualization Technology for Directed I/O (VT-d), specified in 2006 and integrated into platforms starting with Nehalem in 2009, incorporates an I/O memory management unit (IOMMU) for DMA remapping and interrupt virtualization. VT-d translates device-initiated DMA requests from guest-physical to host-physical addresses using context-entry tables, enabling secure direct device assignment where peripherals like GPUs or network devices bypass the hypervisor for near-native performance.
This is particularly vital for high-IOPS workloads in storage and networking, as it isolates I/O traffic per virtual machine and supports interrupt remapping to reduce latency in posted interrupt modes. AMD's I/O virtualization technology (AMD-Vi), also known as the AMD IOMMU and introduced in 2007, provides similar capabilities with DMA address translation and device isolation through domain-based protection tables, facilitating secure device and GPU passthrough in virtualized environments. Complementing these, the PCI-SIG's Single Root I/O Virtualization (SR-IOV) standard, revised to version 1.1 in 2010, allows a single physical device, such as a network interface or storage controller, to appear as multiple virtual functions (VFs) assignable to different guests. SR-IOV, when paired with VT-d or AMD-Vi, enables direct device assignment without full emulation, delivering low-overhead I/O sharing essential for high-throughput applications like virtualized databases and network function virtualization. These aids have been standard in server processors since Nehalem, enhancing overall efficiency for I/O-bound scenarios.
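The two-dimensional walk performed by EPT/NPT hardware can be modeled with flat, page-granular dictionaries standing in for the real multi-level tables (an illustrative sketch, not hypervisor code):

```python
# Sketch of second-level address translation (EPT/NPT style): the guest
# page table maps guest-virtual to guest-physical, and the nested table
# maps guest-physical to host-physical; hardware composes both walks.

PAGE = 4096

guest_pt = {0x0: 0x1, 0x1: 0x3}       # GVA page -> GPA page (guest-owned)
nested_pt = {0x1: 0x20, 0x3: 0x21}    # GPA page -> HPA page (hypervisor-owned)

def translate(gva):
    """Two-dimensional walk: GVA -> GPA -> HPA, faulting on a miss."""
    page, offset = divmod(gva, PAGE)
    gpa_page = guest_pt.get(page)
    if gpa_page is None:
        raise MemoryError("guest page fault")   # handled by the guest OS
    hpa_page = nested_pt.get(gpa_page)
    if hpa_page is None:
        raise MemoryError("EPT violation")      # VM exit to the hypervisor
    return hpa_page * PAGE + offset

print(hex(translate(0x0010)))   # GVA 0x10 resolves through both levels
```

The model also shows why the two fault types differ: a miss in the first table is the guest's business, while a miss in the nested table exits to the hypervisor, which can fault in or remap host memory transparently.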

Implementations

Type 1 hypervisors

Type 1 hypervisors, also known as bare-metal or native hypervisors, run directly on the physical hardware of a host system, replacing the role of a traditional host operating system and managing guest virtual machines (VMs) by partitioning underlying resources such as processors, memory, and I/O devices. This architecture enables multiple isolated guest operating systems to share the same physical hardware efficiently, without the intermediary layer of a host OS. Examples of Type 1 hypervisors include Xen, KVM, Microsoft Hyper-V, and VMware ESXi, each designed to support robust virtualization in production settings. These hypervisors offer key advantages in performance and security compared to hosted alternatives. By accessing hardware directly, they minimize overhead and deliver near-native execution speeds for workloads. The absence of a host OS reduces the potential attack surface, enhancing isolation and protection against vulnerabilities that could compromise the entire system. Management typically involves a privileged control domain, such as Domain 0 (dom0) in Xen for overseeing operations, or integrated tools for configuration and monitoring across other implementations.

Xen, an open-source Type 1 hypervisor developed initially at the University of Cambridge, achieved its first public release in 2003 and has since become a foundation for secure, multi-tenant virtualization. It excels in resource pooling, allowing multiple VMs to share physical hardware while supporting both fully virtualized and paravirtualized guest modes for optimized performance in cloud and server environments. KVM (Kernel-based Virtual Machine), announced in 2006 and merged into the mainline Linux kernel in 2007, enables any modern Linux installation to function as a Type 1 hypervisor by treating VMs as ordinary processes. This integration leverages Linux's mature scheduler and memory management for efficient VM handling, with broad hardware compatibility including x86, ARM, and POWER architectures, making it suitable for scalable deployments. Microsoft Hyper-V, introduced in 2008 as part of Windows Server 2008, operates as a Type 1 hypervisor to virtualize processors and memory directly on hardware, providing robust isolation for guest OSes like Windows, Linux, and FreeBSD.
It supports advanced enterprise functionalities, including shielded VMs for enhanced security and integration with Azure for hybrid cloud scenarios. VMware ESXi, the core Type 1 hypervisor in VMware's ecosystem, traces its origins to the ESX platform released in 2001 and transitioned to the ESXi form factor with version 3.5 in 2007, eliminating the need for a separate service console. As the foundation of vSphere, it enables clustering for high availability, dynamic resource scheduling, and seamless integration in virtualized data centers. In enterprise and data center contexts, Type 1 hypervisors dominate due to their reliability for consolidating servers and running mission-critical applications at scale. They facilitate live migration of running VMs between hosts without interruption, via features like VMware vMotion, Hyper-V Live Migration, and KVM's libvirt-based migration, supporting maintenance, load balancing, and fault tolerance. As of the early 2020s, VMware ESXi, Microsoft Hyper-V, and KVM were among the leading solutions in server virtualization for large organizations. However, following Broadcom's acquisition of VMware in November 2023, significant pricing changes and licensing shifts have prompted many organizations to migrate to alternatives such as KVM-based solutions and Proxmox, with VMware's market share projected to decline from approximately 70% in 2024 to 40% by 2029.

Type 2 hypervisors

Type 2 hypervisors, also known as hosted hypervisors, operate as software applications installed on top of an existing host operating system, such as Windows, macOS, or Linux, rather than directly on the hardware. This architecture allows them to leverage the host OS's drivers and resources for hardware access, including processors, memory, and peripherals, simplifying integration with the host environment. Unlike bare-metal hypervisors, which provide direct hardware control for superior efficiency, Type 2 hypervisors introduce mediation through the host OS, resulting in higher latency and overhead. Key attributes of Type 2 hypervisors include their ease of installation and user-friendly interfaces, making them accessible without specialized hardware or dedicated setups. They offer strong portability, enabling the same software to run across different operating systems with minimal adjustments, which supports flexible testing and multi-OS environments on personal desktops. However, this hosted model incurs overhead due to the additional layer of indirection, typically ranging from 5% to 10% for CPU-bound workloads and higher for I/O-intensive tasks, as the host OS handles resource scheduling and device interactions. These hypervisors are particularly suited for non-production scenarios, such as desktop virtualization, software development, and educational purposes, where simplicity and quick setup outweigh the need for maximal performance. Prominent examples include Oracle VirtualBox, first released in January 2007 as an open-source solution for x86 virtualization. VirtualBox supports features like VM snapshots for state preservation and USB passthrough for direct device access within guests, enhancing its utility for isolated testing. VMware Workstation, introduced in 1999 as one of the earliest commercial desktop hypervisors, runs on Windows and Linux hosts and includes advanced capabilities such as snapshots for rollback and USB 3.1 passthrough for peripheral connectivity.
For macOS users, VMware Fusion provides similar hosted functionality, while Parallels Desktop, launched in June 2006, specializes in running Windows and Linux guests on Apple hardware with seamless integration features like shared folders and Coherence mode. In practice, Type 2 hypervisors like VirtualBox and VMware Workstation are widely adopted in education for teaching operating systems and networking concepts, as well as in development workflows for local environment simulation and application testing without disrupting the host system. Their open-source editions and free tiers further contribute to broad accessibility, with VirtualBox alone attracting hundreds of thousands of monthly users for such non-enterprise applications.

Applications and Implications

Primary use cases

System virtual machines enable server consolidation by allowing multiple isolated operating environments to run on a single physical host, thereby optimizing resource utilization and reducing hardware requirements in data centers. This approach can increase utilization rates from typical levels of 5-15% to over 70%, leading to substantial savings on physical infrastructure. For instance, organizations using Hyper-V have reported reductions in server counts by up to 36%, equivalent to millions in annual savings.

In software development and testing, system virtual machines provide isolated environments that facilitate quality assurance, debugging, and compatibility checks across different operating systems without interfering with the host system. Developers commonly deploy virtual machines to simulate diverse runtime conditions, such as cross-platform builds for applications targeting Windows, Linux, or macOS, ensuring reliable testing workflows. Disaster recovery represents another key application, where virtual machine snapshots and replication technologies enable rapid failover and restoration of entire systems in the event of hardware failure or site outages. Tools like VMware Site Recovery Manager automate the orchestration of recovery plans, minimizing downtime through coordinated replication and non-disruptive testing of failover scenarios.

System virtual machines form the backbone of cloud infrastructure, with around 80% of x86 workloads virtualized across enterprise environments as of 2024, including major providers like AWS, where EC2 instances predominantly rely on this technology. In edge computing deployments, virtual machines on devices like Azure Stack Edge support localized processing of data, reducing latency for real-time applications such as industrial automation. Additionally, they facilitate the preservation of legacy applications by encapsulating outdated software in virtualized environments, allowing continued operation on modern hardware without refactoring.
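The consolidation arithmetic above can be sketched directly: raising average utilization from roughly 10% to a 70% target implies about a 7:1 reduction in hosts. The calculation below assumes uniform, freely packable workloads; real capacity planning must also account for memory, I/O, peak load, and failover headroom.

```python
import math

def hosts_needed(n_servers: int, avg_util: float, target_util: float) -> int:
    """Physical hosts required after consolidating n_servers, each running at
    avg_util, onto hosts driven to target_util (uniform-workload assumption)."""
    total_demand = n_servers * avg_util           # aggregate compute demand
    return math.ceil(total_demand / target_util)  # hosts at the target level

if __name__ == "__main__":
    before = 100  # physical servers idling at ~10% utilization
    after = hosts_needed(before, avg_util=0.10, target_util=0.70)
    print(f"{before} servers -> {after} hosts "
          f"({1 - after / before:.0%} fewer machines)")
```

With these illustrative inputs, 100 lightly loaded servers collapse onto 15 well-utilized hosts, which is the mechanism behind the infrastructure savings described above.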
Recent applications include artificial intelligence and machine learning workloads, where virtual machines enable GPU passthrough and isolated training environments to support scalable model development as of 2025.

Advantages and limitations

System virtual machines offer significant advantages in resource efficiency by enabling overcommitment of CPU and memory across multiple virtualized environments on a single physical host, allowing higher utilization rates than dedicated hardware setups. This efficiency stems from techniques like memory ballooning and page sharing, which dynamically allocate resources based on demand, reducing idle capacity in data centers. Additionally, they provide strong isolation between virtual machines, ensuring that failures or compromises in one VM do not propagate to others or to the host, thereby enhancing overall security through sandboxed execution. Scalability is another key benefit, as system virtual machines facilitate rapid provisioning and migration of workloads without physical hardware changes, supporting dynamic scaling in large-scale environments like cloud clusters. Consolidating multiple workloads onto fewer servers also yields cost savings by minimizing the hardware acquisition, maintenance, and energy expenses associated with underutilized physical machines.

Despite these benefits, system virtual machines introduce performance overhead, typically ranging from 5% to 20% for CPU and I/O operations even with hardware-assisted virtualization, due to the abstraction layers required for isolation and scheduling. Management complexity arises from issues like VM sprawl, where uncontrolled proliferation of virtual machines leads to resource inefficiency and administrative burdens. Resource contention can occur during overcommitment, causing unpredictable performance when multiple VMs compete for shared hardware. Furthermore, the hypervisor serves as a single point of failure: if compromised, it can expose all hosted VMs to risk, as demonstrated by the 2015 VENOM vulnerability (CVE-2015-3456) in QEMU, which allowed guest-to-host escapes via a buffer overflow in the floppy disk controller emulation. Features like Single Root I/O Virtualization (SR-IOV) mitigate I/O-related overhead by enabling direct device access for virtual machines, significantly reducing CPU utilization in high-throughput scenarios.
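Memory overcommitment with ballooning, mentioned above, can be illustrated with a toy model: each VM is configured with more RAM than it actively touches, and the balloon driver reclaims the idle portion so the aggregate fits in physical memory. All sizes and the even-headroom policy here are invented for the example; real hypervisors such as VMware ESX combine ballooning with page sharing, an idle-memory tax, and swapping.

```python
# Toy model of memory overcommitment with ballooning. Configured RAM totals
# 24 GiB on a 16 GiB host; ballooning lets every VM keep its working set
# plus an even share of the leftover physical memory.

from dataclasses import dataclass

@dataclass
class VM:
    name: str
    configured_gib: float  # RAM the guest believes it has
    active_gib: float      # working set the guest actually touches

def balloon_to_fit(vms: list[VM], host_gib: float) -> dict[str, float]:
    """Per-VM physical grants: working set plus an even split of headroom."""
    active = sum(vm.active_gib for vm in vms)
    if active > host_gib:
        raise MemoryError("working sets alone exceed physical RAM")
    headroom = (host_gib - active) / len(vms)
    return {vm.name: vm.active_gib + headroom for vm in vms}

if __name__ == "__main__":
    vms = [VM("web", 8, 3), VM("db", 8, 5), VM("ci", 8, 2)]
    for name, gib in balloon_to_fit(vms, host_gib=16).items():
        print(f"{name}: {gib:.1f} GiB granted")
```

The model also shows the failure mode described in the text: if the combined working sets grow past physical RAM, no reclamation policy can satisfy every guest, and performance collapses into contention (here modeled bluntly as an error).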
Overall, system virtual machines trade some native performance for flexibility, a balance that continues to evolve with advancements such as confidential virtual machines using Intel Trust Domain Extensions (TDX), introduced in 2022, which enhance guest isolation against privileged attacks without relying on the hypervisor for all security enforcement.
