
Kernel-based Virtual Machine

The Kernel-based Virtual Machine (KVM) is an open-source virtualization module integrated into the Linux kernel that enables the kernel to operate as a type-1 hypervisor, allowing multiple isolated virtual machines (VMs) to run on a single physical host with near-native performance. It provides full hardware-assisted virtualization, primarily for x86 architectures equipped with virtualization extensions such as Intel VT-x or AMD-V, treating each VM as a standard Linux process while allocating virtualized resources like CPU, memory, and I/O devices. KVM was originally developed by Avi Kivity at Qumranet and announced in 2006, with its core patch set merged into the mainline Linux kernel as part of version 2.6.20 in February 2007. This integration marked a significant advancement in open-source virtualization, building on the growing availability of hardware virtualization support in processors from Intel and AMD. Over the years, KVM has evolved through contributions from a global community of over 1,000 developers, celebrating its 10-year anniversary in 2016 and continuing to receive updates in subsequent kernel releases.

At its core, KVM consists of kernel modules—including the generic kvm.ko and architecture-specific ones like kvm-intel.ko or kvm-amd.ko—that leverage the host kernel's existing components, such as the memory manager and process scheduler, to handle VM operations efficiently. It is typically paired with userspace tools like QEMU for device emulation and VM management, enabling the launch of unmodified guest operating systems, including Linux and Windows, as isolated processes on the host. This architecture supports a range of hardware platforms beyond x86, including ARM and IBM Power and Z systems, and facilitates features like resource pooling across VMs.

Key advantages of KVM include its high performance through hardware-assisted virtualization, which minimizes overhead and supports low-latency workloads; enhanced security via integrations like SELinux and sVirt for isolating VMs and protecting host resources; and cost efficiency as a mature, free technology backed by an active open-source community. It enables practical use cases such as scaling cloud infrastructure, live VM migration without downtime, rapid deployment in data centers, and running existing applications on modern hardware. Widely adopted in enterprise environments by providers such as Google Cloud, AWS, and Red Hat, KVM powers much of today's virtualized computing landscape.

Introduction and Background

Overview

The Kernel-based Virtual Machine (KVM) is an open-source virtualization module integrated into the Linux kernel, enabling it to function as a type-1 hypervisor for creating and managing virtual machines (VMs). First merged into the mainline Linux kernel in version 2.6.20, released on February 4, 2007, KVM leverages hardware virtualization extensions such as Intel VT-x or AMD-V to support full virtualization on x86 processors, allowing unmodified guest operating systems to run with near-native performance. This integration transforms the host Linux kernel into a bare-metal hypervisor, providing efficient resource isolation and management without requiring a separate hosted hypervisor layer. KVM operates by loading as a kernel module, which exposes a character device at /dev/kvm to facilitate interaction between the kernel and user-space applications. In this architecture, KVM manages core tasks in kernel space, including CPU virtualization, memory management, and VM scheduling, while user-space components—typically QEMU—handle peripheral emulation, I/O operations, and the execution of guest operating systems. This division ensures that performance-critical operations remain in the kernel for low overhead, with user-space tools providing flexibility for device modeling and VM configuration. KVM supports multiple processor architectures, including x86, ARM64, PowerPC, IBM z/Architecture (s390), and RISC-V, allowing deployment across diverse hardware platforms. For enhanced I/O performance, it incorporates paravirtualization through the VirtIO framework, which provides semi-virtualized drivers that reduce overhead by enabling direct communication between guests and the host. As of 2025, KVM remains a cornerstone of enterprise virtualization, powering server environments in distributions such as Red Hat Enterprise Linux and Ubuntu Server, where it supports scalable VM deployments for cloud and data center applications.
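As a minimal illustration of this kernel/user-space split, the following C sketch (not part of any official tool; error handling abbreviated) opens the /dev/kvm character device and queries the stable KVM API version, which the kernel reports as 12:

    /* Probe the /dev/kvm character device exposed by the KVM module.
     * Illustrative sketch only; VMMs such as QEMU perform this check internally. */
    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0) {
            perror("open /dev/kvm");   /* module not loaded or insufficient permissions */
            return 1;
        }

        /* The stable KVM ioctl API identifies itself as version 12. */
        int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
        printf("KVM API version: %d\n", version);

        close(kvm);
        return 0;
    }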

Historical Development

The Kernel-based Virtual Machine (KVM) originated from work begun in mid-2006 by Avi Kivity at Qumranet, an Israeli virtualization company, leveraging Intel VT and AMD-V hardware extensions to enable Linux kernel-based virtualization. Kivity announced the initial version of KVM on October 19, 2006, via a post to the Linux kernel mailing list, marking its first public release as an out-of-tree module. KVM's code was merged into the mainline Linux kernel as part of version 2.6.20, released on February 4, 2007, transitioning it from an external module to a core kernel component. In September 2008, Red Hat acquired Qumranet for $107 million, integrating KVM further into its virtualization ecosystem and accelerating its development under open-source governance. Avi Kivity served as the primary maintainer initially, with Paolo Bonzini taking over as the lead maintainer around 2012, guiding KVM's evolution through contributions from Red Hat and the broader Linux community. Key early milestones included the introduction of live migration capabilities in 2007, allowing seamless transfer of running virtual machines between hosts to minimize downtime. Around 2008–2010, VirtIO emerged as a paravirtualized I/O standard for KVM, with its drivers merged into the mainline Linux kernel in 2008 and formalized through the OASIS VirtIO specification by 2016, enhancing device performance in virtual environments. Support for ARM64 architectures arrived in Linux kernel 3.9, released in April 2013, enabling KVM on mobile and embedded systems. RISC-V support followed in Linux kernel 5.16, released in January 2022, broadening KVM's applicability to open hardware architectures. In recent years, KVM has seen enhancements focused on security and performance. The Linux 6.18 kernel, with its stable release expected in late November 2025, introduced support for AMD Secure Encrypted Virtualization with Secure Nested Paging (SEV-SNP) features like CipherText Hiding, bolstering confidential-computing protections against host-side attacks. Throughout 2025, Ubuntu 22.04 LTS received multiple security patches for KVM, addressing vulnerabilities in subsystems such as the x86 architecture and block layer via updates like linux-kvm 5.15.0-1077.86. Discussions at KVM Forum 2025 in Milan explored techniques for applying kernel updates without requiring VM migration, using mechanisms like kexec to reboot the host kernel while preserving running guests. The KVM Forum, an annual community event, has convened developers since 2009 to advance KVM and virtualization topics; the 2024 edition in Brno, Czech Republic, emphasized performance optimizations and security hardening through sessions on automated testing, nested virtualization, and migration efficiency.

Technical Architecture

Core Components and Internals

KVM operates as a loadable kernel module named kvm.ko, which provides the core virtualization infrastructure for the Linux kernel, enabling it to function as a type-1 hypervisor. This module is supplemented by architecture-specific variants, such as kvm-intel.ko for Intel processors supporting VT-x and kvm-amd.ko for AMD processors supporting Secure Virtual Machine (SVM), which handle hardware-assisted virtualization extensions. These modules are dynamically loaded and integrate seamlessly with the Linux kernel, leveraging its process scheduling and memory management subsystems without requiring a separate hypervisor kernel. The interface between user-space applications and the KVM kernel module is exposed through the /dev/kvm character device file, which supports a range of ioctls for managing virtual machines (VMs) and virtual CPUs (vCPUs). Key ioctls include KVM_CREATE_VM, invoked on /dev/kvm to create a new VM and return a VM-specific file descriptor; KVM_CREATE_VCPU, applied to the VM file descriptor to add a vCPU and obtain a vCPU file descriptor; KVM_SET_USER_MEMORY_REGION, used on the VM file descriptor to define memory slots mapping guest physical addresses to host user-space memory; and KVM_RUN, executed on the vCPU file descriptor to start or resume guest code execution. These ioctls form a hierarchical structure: system-level operations on /dev/kvm, VM-level operations on the VM descriptor, and vCPU-level operations on individual vCPU descriptors, ensuring isolated control over resources. At its core, KVM employs a trap-and-emulate model for virtualization, where guest code executes directly on the host CPU in a restricted mode—VMX non-root mode for Intel VT-x or guest mode for AMD SVM—allowing near-native performance for non-privileged instructions. When the guest attempts a privileged instruction, access violation, or other sensitive operation, it triggers a VM exit, trapping control to the host in VMX root mode (Intel) or host mode (AMD), where KVM emulates the operation or forwards it to user space as needed. This mechanism relies on hardware virtualization extensions to minimize overhead, with VM exits handled efficiently so that guest execution resumes promptly. KVM integrates closely with user-space components like QEMU for comprehensive VM management, where QEMU handles device emulation and I/O while KVM focuses on CPU and memory virtualization. Specifically, KVM implements virtual CPUs as regular Linux threads, enabling the host's Completely Fair Scheduler (CFS) to manage vCPU scheduling alongside native processes, which simplifies resource allocation and ensures fair CPU time distribution. For memory virtualization, KVM supports both shadow paging—where the host maintains a shadow copy of the guest's page tables—and hardware-accelerated two-dimensional paging via Extended Page Tables (EPT) on Intel or Nested Page Tables (NPT) on AMD, which map guest physical addresses directly to host physical addresses to reduce translation overhead. Address space management in KVM involves mapping guest physical addresses (GPA) to host virtual addresses (HVA), typically through memory slots configured via KVM_SET_USER_MEMORY_REGION, where user space allocates host memory and registers it with KVM for pinning and access control. This mapping supports features like dirty logging, enabled by the KVM_MEM_LOG_DIRTY_PAGES flag, which tracks modified pages via a bitmap retrieved through KVM_GET_DIRTY_LOG, facilitating live migration and memory inspection without full emulation overhead. Additionally, KVM accommodates memory ballooning through protocols like virtio-balloon, allowing dynamic adjustment of guest memory allocation by inflating or deflating a balloon device to reclaim or return host memory pages, optimizing overcommitment scenarios.
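The ioctl hierarchy described above can be sketched in a few lines of C. The fragment below is illustrative only: it creates a VM, backs it with a small anonymous memory slot, adds one vCPU, and issues a single KVM_RUN; a real virtual machine monitor would also load guest code into the memory region, initialize vCPU registers, and handle each exit reason in a loop.

    /* Minimal sketch of the /dev/kvm ioctl flow: system -> VM -> vCPU.
     * Error handling is abbreviated and no guest code is loaded, so the
     * single KVM_RUN below will simply return with some exit reason. */
    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);   /* system-level handle */
        int vm  = ioctl(kvm, KVM_CREATE_VM, 0);           /* VM-level descriptor */

        /* Back 64 KiB of guest physical memory with anonymous host memory. */
        size_t mem_size = 0x10000;
        void *mem = mmap(NULL, mem_size, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        struct kvm_userspace_memory_region region = {
            .slot            = 0,
            .guest_phys_addr = 0x0,
            .memory_size     = mem_size,
            .userspace_addr  = (unsigned long)mem,
        };
        ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);         /* vCPU-level descriptor */

        /* The shared kvm_run structure communicates exit reasons to user space. */
        int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
        struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, vcpu, 0);

        /* One KVM_RUN call enters the guest; it returns on the next VM exit. */
        if (ioctl(vcpu, KVM_RUN, 0) < 0)
            perror("KVM_RUN");
        printf("exit_reason = %u\n", run->exit_reason);

        munmap(run, run_size);
        munmap(mem, mem_size);
        close(vcpu);
        close(vm);
        close(kvm);
        return 0;
    }

User-space VMMs such as QEMU, crosvm, and Firecracker perform essentially this sequence before entering their main exit-handling loop.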
In Linux kernel 6.18, KVM introduced optimizations for AMD hardware, including enabling Secure Advanced Virtual Interrupt Controller (Secure AVIC) support, which accelerates interrupt delivery and reduces VM entry/exit overhead for workloads with frequent interrupts. These enhancements build on prior SVM improvements, improving overall VM performance in nested virtualization and interrupt-heavy environments.

Hardware Support and Emulation

KVM relies on specific hardware virtualization extensions provided by the host CPU to enable efficient guest execution. Mandatory requirements include Intel VT-x for basic hardware virtualization on x86 processors, augmented by Extended Page Tables (EPT) for accelerated memory virtualization, or the equivalent AMD-V (Secure Virtual Machine) with Rapid Virtualization Indexing (RVI), also known as Nested Page Tables (NPT), on AMD platforms. These extensions allow the hypervisor to trap and emulate sensitive instructions while minimizing overhead. Optional but recommended features include Intel VT-d or AMD-Vi for Input-Output Memory Management Unit (IOMMU) support, which facilitates secure direct device assignment (PCI passthrough) by isolating device traffic. Without these core extensions, KVM cannot operate in hardware-accelerated mode, and virtualization falls back to software emulation, which is significantly slower. The primary supported architecture for KVM is x86-64, where it leverages mature virtualization capabilities for broad compatibility. Support extends to ARM64 (AArch64) processors with virtualization extensions (EL2), including emulation of the Generic Interrupt Controller (GIC) versions 2 and 3 to handle guest interrupts efficiently. PowerPC platforms, particularly those based on IBM's pSeries, are supported via KVM on distributions such as those from Red Hat. The s390 architecture, used in IBM mainframes, integrates KVM for z/VM-like virtualization with architecture-specific optimizations. RISC-V support was introduced in Linux kernel 5.16 in early 2022, enabling KVM on 64-bit RISC-V (RV64) implementations with the hypervisor extension (H-extension), initially focusing on basic CPU and memory virtualization. KVM's emulation strategy emphasizes minimal intervention in the kernel, confining most device and peripheral emulation to user-space processes for modularity and security. The kernel module handles the core VM lifecycle, CPU scheduling, and memory management, while offloading I/O emulation to tools like QEMU, which provides comprehensive device models through dynamic binary translation or hardware-assisted virtualization. For lighter-weight scenarios, alternatives such as crosvm (a Rust-based VMM from Google for Chrome OS) or Firecracker (Amazon's microVM monitor for serverless workloads) integrate with KVM to emulate only essential peripherals, reducing attack surface and boot times to under 125 milliseconds. Firmware emulation is managed via SeaBIOS for legacy BIOS compatibility or OVMF (an open-source UEFI implementation) for modern boot processes, ensuring guests can initialize hardware as if on physical systems. This hybrid approach allows KVM to scale from full-system emulation to paravirtualized environments. In terms of emulated hardware, KVM supports virtual CPUs (vCPUs) scaled up to and beyond the host's physical core count, enabling multi-threaded guest workloads with features like CPU hotplug. Memory allocation mirrors host RAM limits, with virtio-balloon drivers allowing dynamic resizing to optimize resource sharing across VMs. Basic I/O subsystems, including PCI buses for expansion cards and USB controllers (e.g., UHCI or EHCI models), are emulated primarily through QEMU's device backends, providing guests with standardized interfaces for peripherals like keyboards, storage, and graphics. These emulations ensure compatibility but introduce latency compared to native hardware. To mitigate emulation overhead, KVM supports paravirtualization techniques, particularly through the VirtIO standard, which exposes semi-virtualized devices to guests via simple ring buffers in shared memory. VirtIO drivers for block storage (virtio-blk), networking (virtio-net), and serial consoles (virtio-console) bypass full device emulation by allowing more direct guest-to-host communication, achieving near-native I/O throughput—up to 10 Gbps for networking on modern hosts. Guests must install these drivers (available for Linux and Windows) to benefit, reducing the CPU time spent on trap-and-emulate transitions by orders of magnitude. Recent advancements include RISC-V vector extension emulation in Linux kernel 6.10 (released July 2024), enabling KVM to support the RVA22 profile's scalable vector processing for guests on hosts lacking native hardware vectors, through software fallback mechanisms. On ARM64, improvements in 2025 introduced support for the Arm Confidential Compute Architecture (CCA) in KVM, allowing protected VMs (realms) with features like granule protection faults, realm entry/exit handling, and enhanced VGIC/timer support, based on the RMM v1.0 specification for end-to-end memory encryption and attestation. These updates expand KVM's applicability to emerging secure and vector-accelerated workloads.
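The hardware and feature capabilities discussed in this section can be probed from user space with KVM_CHECK_EXTENSION. The following C sketch is a minimal, non-exhaustive probe; which capabilities are reported depends on the host CPU (VT-x/AMD-V, IOMMU) and the kernel build.

    /* Query a few KVM capabilities; availability depends on the host CPU
     * and the kernel configuration. Illustrative sketch only. */
    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0) {
            perror("open /dev/kvm");  /* hardware virtualization unavailable or module not loaded */
            return 1;
        }

        struct { const char *name; int cap; } caps[] = {
            { "KVM_CAP_USER_MEMORY", KVM_CAP_USER_MEMORY },  /* memory-slot API */
            { "KVM_CAP_NR_VCPUS",    KVM_CAP_NR_VCPUS },     /* recommended vCPU count */
            { "KVM_CAP_MAX_VCPUS",   KVM_CAP_MAX_VCPUS },    /* maximum vCPUs per VM */
            { "KVM_CAP_IRQCHIP",     KVM_CAP_IRQCHIP },      /* in-kernel interrupt controller */
        };

        for (unsigned i = 0; i < sizeof(caps) / sizeof(caps[0]); i++) {
            int val = ioctl(kvm, KVM_CHECK_EXTENSION, caps[i].cap);
            printf("%-20s = %d\n", caps[i].name, val);
        }

        close(kvm);
        return 0;
    }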

Features and Capabilities

Key Virtualization Features

KVM provides robust CPU virtualization capabilities, allowing the creation of multiple virtual CPUs (vCPUs) to emulate symmetric multiprocessing (SMP) environments for guest operating systems. This enables guests to leverage multi-core processing for enhanced performance in parallel workloads. KVM also supports vCPU overcommitment, where the total number of vCPUs across all guests can exceed the host's physical CPU cores, with the Linux scheduler managing time-sharing to maintain efficiency. Dynamic vCPU hotplug, which permits adding or removing vCPUs during guest runtime without rebooting, was introduced around Linux kernel version 3.10, released in 2013. Memory management in KVM emphasizes flexibility and efficiency through dynamic allocation, where host memory can be adjusted for guests on demand to optimize resource utilization. A key mechanism is memory ballooning, implemented via a paravirtualized balloon driver in the guest that inflates to reclaim memory for the host or deflates to return it, facilitating sharing without significant performance degradation. Additionally, KVM supports huge pages (typically 2 MB or 1 GB), which reduce Translation Lookaside Buffer (TLB) misses and improve memory access speeds in I/O-intensive or large-memory scenarios. Live migration enables seamless transfer of running virtual machines between hosts with minimal interruption, using pre-copy and post-copy methods to handle memory transfer. In pre-copy, iteratively copied pages are tracked for changes (dirty pages) until the working set stabilizes, a technique introduced alongside early KVM development in 2007; post-copy, which suspends the guest briefly, resumes execution on the destination host, and fetches remaining pages on fault, was proposed for KVM in 2009 and enhanced with better fault handling and recovery in 2015. These techniques achieve downtimes of under a second for typical workloads, supporting high-availability environments. KVM includes in-kernel mechanisms for snapshotting and checkpointing, allowing the saving and restoring of complete VM states, including CPU registers, memory, and device contexts, through ioctls like KVM_SET/GET_VCPU_EVENTS and memory slot management. This facilitates backups, debugging, and rapid recovery without full guest reboots. Guest OS compatibility is broad, with native support for Linux distributions, Windows (optimized via VirtIO drivers for storage and networking), BSD variants, and others such as Solaris, ensuring near-native performance across diverse environments. Nested virtualization, enabling hypervisors to run inside guest VMs, has long been supported, enabling advanced testing and development scenarios. As of Linux kernel 6.13 (released January 2025), KVM added support for Arm's Confidential Compute Architecture (CCA), enabling protected virtual machines with enhanced security isolation. Further enhancements in kernel 6.14 (released March 2025) include improved RISC-V guest support for extensions like Zabha, Svvptc, and Ziccrse.
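To make the pre-copy dirty-page tracking concrete, the following C fragment (illustrative only; it assumes a VM file descriptor and backing memory already set up as in the earlier /dev/kvm sketch) registers a memory slot with dirty logging enabled and retrieves its dirty bitmap once; a migration loop would repeat the retrieval while copying the pages reported as dirty.

    /* Sketch of dirty-page tracking for pre-copy migration. Assumes `vm` is a
     * VM file descriptor and `mem`/`mem_size` back guest memory at address 0,
     * as in the earlier example. Fragment for illustration, not a full VMM. */
    #include <linux/kvm.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>

    #define GUEST_PAGE_SIZE 4096UL

    static int track_dirty_pages(int vm, void *mem, size_t mem_size)
    {
        /* Re-register slot 0 with dirty logging turned on. */
        struct kvm_userspace_memory_region region = {
            .slot            = 0,
            .flags           = KVM_MEM_LOG_DIRTY_PAGES,
            .guest_phys_addr = 0x0,
            .memory_size     = mem_size,
            .userspace_addr  = (unsigned long)mem,
        };
        if (ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region) < 0)
            return -1;

        /* One bit per guest page, rounded up to 64-bit words. */
        size_t pages = mem_size / GUEST_PAGE_SIZE;
        size_t bitmap_bytes = ((pages + 63) / 64) * 8;
        void *bitmap = calloc(1, bitmap_bytes);

        struct kvm_dirty_log log = {
            .slot = 0,
            .dirty_bitmap = bitmap,
        };
        /* A migration loop would call this repeatedly and copy dirty pages. */
        int ret = ioctl(vm, KVM_GET_DIRTY_LOG, &log);

        free(bitmap);
        return ret;
    }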

Device Emulation and Paravirtualization

KVM handles guest device input/output primarily through full device emulation provided by QEMU, which simulates hardware components in user space to support legacy devices incompatible with modern paravirtualization techniques. For instance, QEMU emulates IDE disk controllers for compatibility with older operating systems, standard VGA graphics adapters for basic display output, and sound devices such as AC97 or SB16 for audio playback. This approach involves trapping guest I/O instructions into the host kernel via KVM, where QEMU interprets and emulates the operations, resulting in significant performance overhead from frequent VM exits and context switches. To mitigate the limitations of full emulation, KVM employs paravirtualization via the VirtIO standard, introduced in 2008 as a de facto standard for efficient virtual I/O across hypervisors. VirtIO devices present a semi-virtualized interface to the guest, requiring paravirtualized drivers in the guest OS that communicate with the host using shared ring buffers known as vrings, which enable batched data transfers and reduce VM-exit frequency. Key examples include virtio-blk for block storage, which uses a single queue for read/write operations with sector-based addressing; virtio-net for networking, supporting transmit/receive queues with offload features like checksum offloading and TSO; and virtio-gpu for accelerated graphics, providing 2D/3D acceleration via the VirGL renderer. These drivers minimize emulation needs by handling device-specific logic in the guest, achieving performance close to native I/O while maintaining broad compatibility. For scenarios demanding near-native performance, KVM supports device passthrough, allowing direct assignment of host devices to guests via the VFIO framework, which provides IOMMU-protected access without emulation overhead. Introduced in Linux kernel 3.6 in 2012, VFIO enables safe userspace binding of devices like GPUs or NICs, isolating them in IOMMU groups to prevent unauthorized DMA and deliver bare-metal driver performance to the guest. USB and input device handling in KVM relies on QEMU's emulation of USB controllers, supporting USB 2.0 via EHCI and USB 3.0 via XHCI, alongside virtual USB devices for peripherals like keyboards and mice. These emulations allow guests to interact with virtual or passed-through USB hardware, with PS/2 or USB tablet models ensuring seamless input capture. For remote access, the SPICE protocol integrates with QEMU to stream display output and relay input events over dedicated channels, supporting keyboard, mouse, and multi-monitor setups with low-latency client-side rendering. Recent advancements enhance VirtIO efficiency in KVM, including full support for the VirtIO 1.1 specification in Linux kernel 5.15, released in 2021, which introduces features like packed virtqueues for reduced descriptor overhead and improved live migration compatibility.
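The vring layout underlying these drivers can be summarized with the split-virtqueue structures from the VirtIO specification. The C declarations below are a simplified rendering for illustration, not a working driver: the guest places buffer descriptors in the descriptor table, offers chain heads through the available ring, and the device returns completed chains through the used ring.

    /* Simplified rendering of the VirtIO "split virtqueue" ring structures
     * (per the VirtIO specification); illustrative only, not a device driver. */
    #include <stdint.h>

    #define VIRTQ_DESC_F_NEXT   1   /* descriptor chains to the one in `next` */
    #define VIRTQ_DESC_F_WRITE  2   /* buffer is write-only for the device */

    /* One guest buffer visible to the host: guest-physical address + length. */
    struct virtq_desc {
        uint64_t addr;   /* guest-physical address of the buffer */
        uint32_t len;    /* buffer length in bytes */
        uint16_t flags;  /* VIRTQ_DESC_F_* */
        uint16_t next;   /* index of the next descriptor in a chain */
    };

    /* Ring of descriptor-chain heads the guest driver offers to the device. */
    struct virtq_avail {
        uint16_t flags;
        uint16_t idx;      /* where the driver will write its next entry */
        uint16_t ring[];   /* queue-size entries follow */
    };

    /* Entry describing a buffer the device has finished processing. */
    struct virtq_used_elem {
        uint32_t id;   /* head index of the completed descriptor chain */
        uint32_t len;  /* number of bytes the device wrote into the buffer */
    };

    /* Ring of completed buffers returned by the device to the driver. */
    struct virtq_used {
        uint16_t flags;
        uint16_t idx;
        struct virtq_used_elem ring[];
    };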

Management and Tools

Command-Line and API Interfaces

The primary command-line interface for managing KVM virtual machines is virsh, a shell provided by the libvirt library, which enables administrators to handle the full lifecycle of guest domains. Common operations include listing active domains with virsh list, starting a domain via virsh start <domain>, stopping it with virsh shutdown <domain>, and pausing or resuming execution as needed. This tool abstracts interactions with the underlying hypervisor, allowing domain creation from XML definitions and configuration edits without direct kernel access. For lower-level control, the qemu-kvm binary serves as the direct invocation point, integrating KVM acceleration when specified. It is typically launched with options such as -enable-kvm to activate hardware-assisted virtualization and -m <size> to allocate guest memory, for example, qemu-kvm -enable-kvm -m 2048 -drive file=disk.img. This approach bypasses higher-level management layers for custom or debugging scenarios, though it requires manual handling of device emulation and networking. Libvirt provides a stable API for programmatic access, controlling KVM through the QEMU processes it manages on top of /dev/kvm. Developers can use functions like virConnectOpen to establish a connection and virDomainCreate to launch domains, enabling embedded integration in applications. For scripting, the libvirt-python bindings offer Pythonic wrappers around this API, supporting tasks such as domain monitoring and resource allocation via modules like libvirt and libvirt.qemu. These bindings, available through PyPI, allow scripts to interact with KVM guests in a cross-platform manner. Disk management in KVM environments often leverages qemu-img, a utility for creating, resizing, and converting virtual disk images in formats like qcow2 or raw. For instance, qemu-img create -f qcow2 disk.img 20G initializes a 20 GB sparse image, while qemu-img resize disk.img +10G expands an existing one for guest use. Automated VM provisioning is streamlined with virt-install, which defines and deploys guests from command-line arguments, including ISO installation media and network bridges, as in virt-install --name guest --ram 1024 --disk path=disk.img --cdrom install.iso --os-variant rhel8. KVM interfaces integrate with orchestration platforms through libvirt, such as in OpenStack compute nodes, where Nova's libvirt driver provisions and migrates VMs via KVM. Similarly, KubeVirt extends Kubernetes to run container-native VMs on KVM, encapsulating QEMU processes in pods for unified workload management. Libvirt 10.0, released in January 2024, includes improvements such as postcopy-preempt migration for faster VM migrations. Subsequent releases, like 10.5.0 in July 2024, introduced support for AMD SEV-SNP guests. As of November 2025, libvirt 11.9.0 adds further refinements around host-model CPU mode, enhancing cross-architecture compatibility and security isolation in API-driven deployments.
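A short C example can illustrate the libvirt API calls named above. The sketch below is illustrative only: it connects to the local QEMU/KVM driver, lists running domains, and starts a defined domain by name; the URI qemu:///system and the domain name "guest" are placeholder assumptions, and the program is built against libvirt (e.g., linked with -lvirt).

    /* Minimal libvirt C sketch: connect, list active domains, start one by name.
     * "qemu:///system" and the domain name "guest" are illustrative placeholders. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        if (conn == NULL) {
            fprintf(stderr, "Failed to connect to the hypervisor\n");
            return EXIT_FAILURE;
        }

        /* Enumerate currently running (active) domains. */
        virDomainPtr *domains = NULL;
        int n = virConnectListAllDomains(conn, &domains,
                                         VIR_CONNECT_LIST_DOMAINS_ACTIVE);
        for (int i = 0; i < n; i++) {
            printf("running: %s\n", virDomainGetName(domains[i]));
            virDomainFree(domains[i]);
        }
        free(domains);

        /* Start a defined-but-inactive domain, analogous to `virsh start guest`. */
        virDomainPtr dom = virDomainLookupByName(conn, "guest");
        if (dom != NULL) {
            if (virDomainCreate(dom) < 0)
                fprintf(stderr, "Failed to start domain\n");
            virDomainFree(dom);
        }

        virConnectClose(conn);
        return EXIT_SUCCESS;
    }

The equivalent operations are available interactively from the virsh shell (virsh list, virsh start guest) and from scripts through the libvirt-python bindings mentioned above.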

Graphical and Enterprise Tools

Standalone graphical user interfaces for KVM simplify virtualization tasks for desktop users by providing intuitive tools for creating, accessing, and monitoring virtual machines. virt-manager, built on the libvirt library, offers a comprehensive GUI for VM lifecycle management, including creation via wizards, VNC/SPICE console access, and real-time performance monitoring such as CPU, memory, and disk usage metrics. GNOME Boxes serves as a lightweight alternative, leveraging libvirt and QEMU to enable quick VM setup and remote system access with minimal configuration, ideal for end users testing operating systems or connecting to distant machines without advanced administrative overhead. Enterprise platforms extend KVM management to large-scale data centers by integrating resource pooling and clustering capabilities. oVirt, an open-source project backed by Red Hat, provides a centralized web-based portal for managing KVM-based virtual infrastructures, supporting features like shared storage pools, network configuration, and high-availability clustering to streamline operations across multiple hosts. Proxmox VE delivers a hyper-converged solution that combines KVM virtualization with software-defined storage and networking, allowing seamless VM migration, backup orchestration, and cluster management through its dedicated web console. Web-based tools further enhance accessibility by enabling browser-driven administration without dedicated client software. Kimchi offers an HTML5 interface for libvirt-managed KVM environments, facilitating straightforward VM provisioning, template-based deployments, and storage configuration directly from a web browser. Cockpit, with its cockpit-machines module, integrates KVM oversight into a broader system management dashboard, supporting VM creation, runtime controls, and performance dashboards via secure web sessions. In cloud environments, KVM integrates with orchestration platforms for automated scaling and deployment. OpenStack's Nova compute service employs the KVM driver as its default hypervisor, enabling elastic compute provisioning with support for live migration and volume attachment in multi-tenant setups. AWX, the open-source upstream of Ansible Automation Platform, automates KVM workflows through playbooks and the community.libvirt collection, handling tasks like VM instantiation and inventory synchronization across distributed infrastructures. As of November 2025, recent advancements in these tools continue to improve usability and compatibility. Proxmox VE 8.4, released in April 2025, builds on prior versions with further optimizations for storage and networking in hyper-converged environments. GNOME Boxes, as part of GNOME 48 released in March 2025, incorporates additional Wayland protocol enhancements for improved graphical console rendering and input handling on modern Linux desktops.

Adoption and Advanced Topics

Use Cases and Ecosystem Integration

Kernel-based Virtual Machine (KVM) serves as a foundational hypervisor for server virtualization, enabling efficient workload consolidation in data centers by allowing multiple virtual machines to run on a single physical host. This capability supports resource optimization and scalability for enterprise environments. Red Hat Virtualization, an enterprise platform, leverages KVM to manage large-scale deployments of virtualized workloads. Similarly, Oracle Virtualization utilizes KVM as its core hypervisor for consolidation and high-availability configurations in enterprise settings. In cloud computing, KVM underpins major platforms as a reliable backend for infrastructure-as-a-service (IaaS) offerings. OpenStack, an open-source cloud operating system, employs KVM as one of its primary hypervisors to orchestrate virtual machine lifecycles and resource allocation across distributed environments. Google Cloud's Compute Engine relies in part on a hardened KVM implementation to provision and isolate virtual machines, ensuring secure multi-tenant operations. For private clouds, KVM integrates seamlessly to support customized IaaS stacks. Additionally, KubeVirt extends KVM's functionality by enabling hybrid workloads on Kubernetes clusters, allowing virtual machines to coexist with containers for unified management; this integration has been available since 2018. On desktops and in development workflows, KVM facilitates running Windows guests on Linux hosts, providing near-native performance through hardware-accelerated virtualization for tasks like testing and cross-platform development. Tools such as Vagrant, when paired with the KVM provider, streamline the creation of disposable testing environments, enabling developers to spin up consistent virtual machines for CI/CD pipelines and prototyping. For edge and embedded systems, KVM's support for ARM architectures enables virtualization in resource-constrained and embedded scenarios, where it virtualizes workloads on low-power devices for improved isolation and efficiency. Emerging RISC-V support further extends KVM to open-hardware platforms, facilitating secure execution of embedded applications in diverse ecosystems. Firecracker, a KVM-based microVM technology inspired by crosvm, optimizes serverless computing at the edge by launching lightweight, secure instances in milliseconds for high-density deployments. Recent trends from 2023 to 2025 highlight KVM's growing adoption as a cost-effective alternative to proprietary solutions following Broadcom's 2023 acquisition of VMware, which prompted many organizations to migrate toward open-source hypervisors for reduced licensing expenses and vendor lock-in risks. Oracle has emphasized KVM in its 2024 updates, promoting it for both on-premises and hybrid cloud deployments to enhance interoperability and cost efficiency. Red Hat continues to drive innovations in KVM for hybrid cloud environments, integrating it with container orchestration to support seamless transitions between virtualized and cloud-native architectures.

Security and Performance Enhancements

KVM incorporates several security features to protect virtual machines from common threats. Input-Output Memory Management Unit (IOMMU) support enables protection against direct memory access (DMA) attacks by isolating device access to guest memory, preventing malicious peripherals from compromising the host or other guests. sVirt, introduced in 2009, integrates SELinux with libvirt to enforce mandatory access control (MAC) for fine-grained VM isolation, labeling files and processes to prevent cross-VM data leaks. For confidential computing, KVM supports AMD Secure Encrypted Virtualization, with basic SEV since Linux kernel 4.17 (2018), SEV-ES since 5.11 (2021), and SEV-SNP since 6.11 (2024), which encrypt guest memory to shield against hypervisor-level attacks, as well as Intel Trust Domain Extensions (TDX), with guest support since kernel 5.19 in 2022, providing hardware-enforced memory integrity and attestation. To mitigate hardware vulnerabilities, KVM includes ongoing patches for Spectre and Meltdown variants, with updates through 2025 enhancing side-channel defenses via kernel mitigations like retpoline and page table isolation. Secure boot is facilitated through the Open Virtual Machine Firmware (OVMF), a UEFI implementation that verifies VM integrity during startup, reducing risks from tampered images. On the performance side, KVM offers optimizations for efficient resource utilization. The dirty ring mechanism, added in kernel 5.13 in 2021, accelerates live migration by tracking dirty memory pages more efficiently than bitmap-based methods, reducing downtime in data centers. Low-latency I/O is achieved via irqfd for direct interrupt injection to guests and ioeventfd for eventfd-based device notifications, minimizing overhead in high-throughput scenarios like networking. Memory efficiency is improved with hugepages and Transparent Huge Pages (THP), which reduce TLB misses and page-table overhead by using larger page sizes, boosting VM throughput by up to 20% in memory-intensive workloads. Tuning techniques further enhance performance: CPU pinning assigns specific host cores to vCPUs to minimize context switching, NUMA awareness optimizes memory allocation across nodes to cut latency, and overcommit ratios allow safe oversubscription of CPU and memory based on workload profiles. Recent enhancements include Linux kernel 6.18's improvements for AMD processors in 2025, such as optimized SEV handling for better performance. Discussions at KVM Forum 2025 highlighted live kernel updates without VM migration, enabling zero-downtime patching via techniques such as kexec-based host kernel handover. Additionally, Ubuntu's 2024 security updates bolstered KVM environments with enhanced AppArmor profiles and IOMMU defaults for cloud deployments.
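Because KVM vCPUs are ordinary host threads, CPU pinning ultimately reduces to setting a thread's CPU affinity. The C sketch below illustrates only that underlying mechanism (it is not how libvirt's vcpupin is implemented verbatim) by pinning the calling thread to one host core; the core index is an arbitrary example.

    /* Pin the calling thread to a single host CPU core, illustrating the
     * affinity mechanism behind vCPU pinning (e.g., `virsh vcpupin`). Sketch only. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int pin_to_core(int core)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);

        /* pid 0 means "the calling thread"; a VMM would target each vCPU thread. */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        if (pin_to_core(2) == 0)   /* core index 2 chosen arbitrarily */
            printf("pinned to core 2\n");
        return 0;
    }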

Licensing and Community

License Terms

The Kernel-based Virtual Machine (KVM) kernel module and core code are licensed under the GNU General Public License version 2 (GPLv2), which is the same license governing the broader Linux kernel into which KVM is integrated. This licensing ensures users' freedom to study, modify, and redistribute the software, fostering an open-source ecosystem for virtualization development. Historically, KVM employed dual licensing, with some userspace components initially released under the GNU Lesser General Public License version 2 (LGPLv2) to facilitate integration with non-GPL projects, but the licensing has since unified under GPLv2 for consistency across the kernel and primary components. The integration with QEMU, a common userspace companion for KVM, maintains GPLv2 for most elements, while specific subsystems like the Tiny Code Generator (TCG) operate under the permissive BSD license, enabling flexible reuse in diverse environments. Under GPLv2, KVM permits extensive modification and redistribution without royalties, making it suitable for both individual and enterprise deployments; this compatibility extends to running proprietary guest operating systems, such as Windows, on a GPL-licensed host without licensing conflicts for the guests themselves. For commercial applications, there are no inherent restrictions on use—companies like Red Hat provide enterprise support for KVM-based solutions—but any derivative works must comply with GPLv2 by disclosing modifications. As of 2025, KVM's licensing terms remain unchanged, fully aligned with the Linux kernel's GPLv2 framework and supporting ongoing innovation in open virtualization.

Maintenance and Contributions

The maintenance of the Kernel-based Virtual Machine (KVM) is primarily led by Paolo Bonzini, a Distinguished Engineer at Red Hat, who serves as the main maintainer for the core KVM subsystem. Coordination occurs through the KVM mailing list at kvm@vger.kernel.org, where developers discuss patches, bugs, and features, and via the official git tree hosted at git.kernel.org/pub/scm/virt/kvm/kvm.git, which tracks changes aligned with the Linux kernel development process. Contributions to KVM follow the standard Linux kernel patch submission process, with developers sending patches to the mailing list for review before integration into the upstream tree, as outlined in the kernel's MAINTAINERS file. The annual KVM Forum facilitates planning and collaboration, with the 2025 event held on September 4–5 in Milan, Italy, featuring keynotes and sessions on ongoing development priorities. Major contributors include Red Hat, which provides the bulk of the maintenance effort, particularly for x86 support; Intel and AMD, focusing on hardware-specific enhancements like virtualization extensions; and Google, contributing to scalability and cloud integrations. For architecture-specific work, sub-maintainers handle ports such as ARM64, where Arm Ltd. leads development with contributions from engineers like Suzuki K. Poulose on features including Confidential Compute Architecture support. The development roadmap for 2024–2025 emphasizes advancements in confidential computing, with KVM integrating support for technologies like Intel TDX, AMD SEV-SNP, and Arm's CCA to enable secure enclaves for sensitive workloads. Efforts toward RISC-V maturity include MMU improvements and hypervisor-mode enhancements to broaden KVM's applicability on open hardware architectures. Integration with emerging technologies like eBPF is progressing for monitoring and optimization, such as tracing hypercalls and enhancing paravirtualized I/O efficiency in KVM environments. KVM releases are synchronized with Linux kernel cycles, with major updates in version 6.18 (released in late 2025) delivering performance enhancements, including optimized SEV-SNP handling, improved nested virtualization on x86 platforms, and broader guest support, establishing greater scale for enterprise deployments.
