
Virtual device

A virtual device is a software-based emulation or simulation of a physical hardware component, such as a processor, network interface, or storage drive, that allows an operating system or virtual machine to interact with it as if it were actual hardware. These devices abstract underlying physical resources, enabling efficient sharing and management within virtualized environments without requiring dedicated hardware. Virtual devices play a central role in virtualization technologies, where hypervisors allocate portions of host hardware—such as CPU cycles, memory, and I/O bandwidth—to create isolated computing instances. They come in several forms: fully emulated devices that mimic legacy hardware for broad compatibility (e.g., emulated disks or PS/2 mice); paravirtualized devices that use optimized interfaces like virtio for higher-performance I/O (e.g., virtio-net for networking or virtio-blk for block storage); and passthrough devices that grant direct access to physical hardware for low-latency needs (e.g., via VFIO or SR-IOV). This architecture supports up to 4096 virtual CPUs per guest in modern systems (as of 9.6 and later) and facilitates features like device hotplugging and migration.

The use of virtual devices enhances scalability, cost-efficiency, and flexibility across applications, including server consolidation in data centers, mobile app testing without physical prototypes via emulators (e.g., Android Virtual Devices), and device simulations for IoT ecosystems. By decoupling software from specific hardware, they reduce dependency on proprietary components and enable seamless resource pooling in cloud environments.

Fundamentals

Definition

A virtual device is a software-based emulation or simulation that replicates the functionality, interfaces, and behaviors of a physical device, allowing systems to interact with it as if the hardware were present without the need for actual physical components. This approach enables the handling of device operations entirely through software, where the host system's processor and memory perform the underlying computations to mimic hardware responses.

Key characteristics of virtual devices include their operation via intermediary software layers, such as device drivers, hypervisors, or emulation engines, which translate guest requests into host-executable instructions. These layers provide standardized interfaces—often adhering to protocols such as virtio—for applications and operating systems to access the device seamlessly, while supporting features like resource sharing among multiple virtual environments and isolation to prevent interference between them. For instance, software can simulate device registers, interrupt signaling, and data flows, making the device indistinguishable from physical hardware at the software level.

Unlike physical hardware, virtual devices involve no tangible components and depend solely on the host's computational resources, such as CPU cycles for emulated operations or allocated memory for buffering simulated data transfers. This distinction allows for flexible provisioning and deployment in environments like virtual machines, where multiple virtual devices can coexist without hardware constraints.

History

The concept of virtual devices emerged in the 1960s with the development of mainframe virtualization systems, particularly IBM's CP/CMS, which simulated hardware peripherals to enable multiple users to share a single physical machine efficiently. CP-67/CMS, introduced in 1967 for the IBM System/360 Model 67, was the first widely available virtual machine architecture that abstracted physical devices into virtual ones, allowing isolated execution environments without direct hardware access. This approach laid the groundwork for resource sharing in multi-user computing. In the 1970s and 1980s, Unix systems advanced device abstraction through the /dev directory, where special files represented devices as part of the file system, enabling uniform I/O operations via software interfaces. This model, present from the earliest Unix versions like the 1971 First Edition, treated devices as files to simplify programming and system management. Concurrently, Microsoft introduced Virtual Device Drivers (VxDs) with Windows 3.0 in 1990, providing protected-mode drivers for devices in 386 enhanced mode to support multitasking without compromising stability. The 1990s marked a boom in virtualization software, exemplified by VMware's founding in 1998 and the release of VMware Workstation in 1999, which extended virtual device support to x86 platforms through paravirtualized interfaces for improved performance in guest operating systems. From the 2000s onward, virtual devices integrated deeply into cloud and mobile ecosystems; Intel's VT-x technology, launched in 2005, provided hardware-assisted virtualization to accelerate device emulation on x86 processors. Amazon Web Services (AWS) followed with EC2 in 2006, offering instances with virtual network interface cards (NICs) for scalable cloud computing. In 2007, the PCI-SIG introduced Single Root I/O Virtualization (SR-IOV), enabling direct assignment of virtual functions from physical devices like NICs to virtual machines for near-native performance.
Mobile emulation advanced with the Android SDK's emulator in 2008, simulating devices for app development. More recently, containerization evolved with Docker's launch in 2013, using kernel namespaces and cgroups to virtualize device access in lightweight, isolated environments.

Types

Virtual Storage Devices

Virtual storage devices are software-based emulations of physical storage hardware, such as hard disk drives (HDDs), solid-state drives (SSDs), or tape drives, that provide virtual machines (VMs) with the appearance of dedicated storage resources. These devices abstract underlying host storage by mapping virtual block or file-based interfaces to physical storage pools, enabling efficient resource sharing and management without direct hardware access. Common formats include Virtual Hard Disk (VHD) for fixed or dynamically expanding disks and Virtual Machine Disk (VMDK) for VMware environments, both of which encapsulate disk images as files on the host file system. Key technologies in virtual storage devices include thin provisioning, which dynamically allocates storage space rather than pre-allocating the full capacity, allowing over-commitment of host resources to multiple VMs while minimizing wasted space. Snapshotting enables the creation of point-in-time copies of virtual disks, facilitating backups, testing, and rollback without duplicating entire volumes, often implemented through layered image formats that store changes differentially. Software RAID further enhances reliability by combining multiple virtual or physical disks into redundant arrays (e.g., RAID 1 mirroring or RAID 5 striping with parity) managed entirely in software, providing fault tolerance without dedicated hardware controllers. Prominent examples of virtual storage devices include the VHD and VHDX formats used in Microsoft Hyper-V, which support both fixed-size and dynamically expanding disks for VM boot and data volumes. In KVM-based virtualization, the QCOW2 (QEMU Copy-On-Write version 2) format serves as a versatile virtual disk image that integrates thin provisioning, snapshots, and compression for efficient storage. iSCSI initiators, such as those in guest or host-level configurations, simulate remote block storage by presenting networked iSCSI targets as local virtual devices, allowing VMs to access shared storage arrays over IP networks.
Performance in virtual storage devices incurs overhead from translation layers between guest I/O requests and host storage, potentially reducing throughput due to emulation and buffering. This is often mitigated through caching mechanisms, such as I/O filters that use local flash devices to accelerate reads and writes, or direct I/O passthrough techniques that bypass hypervisor mediation for near-native speeds on supported hardware.
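The thin-provisioning idea above can be sketched with a sparse file, which is how file-backed virtual disks typically defer allocation on Linux hosts. This is a minimal illustration of the allocation behavior, not a disk-image implementation; the file path and sizes are arbitrary:

```python
import os
import tempfile

def create_thin_disk(path: str, virtual_size: int) -> None:
    """Create a sparse backing file: the guest sees `virtual_size` bytes,
    but the host allocates blocks only as data is actually written."""
    with open(path, "wb") as f:
        f.truncate(virtual_size)  # extends the file without writing data

def disk_usage(path: str) -> tuple:
    """Return (apparent size, bytes physically allocated on the host)."""
    st = os.stat(path)
    return st.st_size, st.st_blocks * 512  # st_blocks counts 512-byte units

path = os.path.join(tempfile.gettempdir(), "thin.img")
create_thin_disk(path, 10 * 1024**3)  # advertise a "10 GiB" disk
apparent, allocated = disk_usage(path)
print(apparent)               # 10737418240
print(allocated < apparent)   # True: almost nothing is allocated yet
os.remove(path)
```

Real disk-image tools layer metadata on top of the same idea; QCOW2, for example, adds a cluster mapping table so snapshots can share unwritten regions.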

Virtual Network Devices

Virtual network devices emulate networking hardware to enable isolated or shared connectivity among virtual machines (VMs), allowing them to transmit and receive data without direct reliance on physical network interface cards (NICs). These devices abstract the underlying physical infrastructure, facilitating scalable and flexible network topologies in hypervisors such as KVM. By simulating components such as interfaces and switches, they support features like traffic isolation, migration of VMs across hosts, and integration with software-defined networking (SDN) paradigms. Core components of virtual network devices include emulated network cards (vNICs), which serve as virtual endpoints for VMs and map to the host's physical NICs for external connectivity. vNICs handle packet encapsulation and decapsulation, providing each VM with a dedicated logical interface that appears as a standard Ethernet adapter to the guest operating system. Virtual switches and bridges complement vNICs by managing internal traffic flow; virtual switches operate as software-based Layer 2 devices that forward packets between vNICs using flow tables, while software bridges join disparate virtual segments at Layer 2 without physical hardware involvement. For example, in multi-VM setups, these components ensure efficient routing within a single host, minimizing overhead through kernel-level packet forwarding. Key technologies underpinning virtual network devices include VLAN tagging, which segments traffic by embedding tags into Ethernet frames, allowing multiple isolated virtual LANs to coexist on the same physical uplink. SDN integration enhances programmability, with Open vSwitch (OVS) serving as a prominent example of a multilayer virtual switch that supports OpenFlow protocols for controller-driven flow management and dynamic reconfiguration in virtualized environments. Paravirtualization via virtio-net further optimizes performance by deploying lightweight drivers in the guest OS that communicate directly with the host's backend, reducing I/O overhead for guest-host data exchange compared to fully emulated alternatives.
These technologies collectively enable features like overlay networks (e.g., VXLAN) and automated policy enforcement. Practical examples illustrate the versatility of virtual network devices. In Linux-based systems, TUN/TAP interfaces provide user-space networking capabilities, where TUN devices carry Layer 3 IP packets and TAP devices handle full Ethernet frames, enabling applications to inject or extract traffic as if connected to a virtual point-to-point link or bridge. VMware NSX environments employ virtual routers, such as the Distributed Logical Router (DLR), which distribute routing logic across hosts to process inter-VM and VM-to-physical traffic at line rate using logical interfaces, obviating the need for centralized physical routers. Security features are integral, with MAC spoofing prevention achieved through host-level filters that validate and restrict MAC address changes on vNICs, blocking unauthorized impersonation attempts. Additionally, virtual firewall rules operate at the switch layer, enforcing policies like access control lists (ACLs) via flow rules in OVS to inspect and drop packets based on predefined criteria, enhancing isolation without hardware dependencies.
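The VLAN-tagging scheme described above can be made concrete by constructing an 802.1Q-tagged Ethernet frame directly. This is a byte-layout sketch only (the MAC addresses and payload are invented), not how a vSwitch is actually driven:

```python
import struct

def vlan_tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag after the 12 bytes of destination and source
    MACs. The 4-byte tag is TPID 0x8100 followed by a 16-bit TCI:
    priority (3 bits) | drop-eligible (1 bit, left 0) | VLAN ID (12 bits)."""
    assert 0 <= vlan_id < 4096 and 0 <= priority < 8
    tci = (priority << 13) | vlan_id
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]

# Minimal untagged frame: broadcast dst MAC, a locally administered src MAC,
# EtherType 0x0800 (IPv4), dummy payload.
frame = (bytes.fromhex("ffffffffffff020000000001")
         + struct.pack("!H", 0x0800) + b"data")
tagged = vlan_tag_frame(frame, vlan_id=42, priority=3)
print(tagged[12:14].hex())       # 8100, the 802.1Q TPID
print(len(tagged) - len(frame))  # 4, one tag inserted
```

A virtual switch applies exactly this transformation (or strips it) at its uplink port, which is what lets multiple isolated virtual LANs share one physical link.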

Virtual Input/Output Devices

Virtual input/output (I/O) devices emulate physical peripherals such as keyboards, mice, displays, USB ports, and printers through software interfaces, enabling input capture and output rendering within virtualized environments without dedicated hardware. These devices facilitate user interaction in virtual machines (VMs) by translating host system events into guest-operating-system-compatible signals, supporting seamless operation in hypervisors such as KVM. For instance, virtual keyboards and mice simulate human interface device (HID) protocols to inject keystrokes and cursor movements directly into the guest OS. Input mechanisms primarily rely on event injection via virtual HID devices, where software on the host captures actions—such as key presses or mouse movements—and forwards them as standardized HID reports to the VM. This approach ensures low-latency interaction without physical passthrough, commonly implemented in frameworks like Microsoft's Virtual HID Framework, which allows drivers to report synthetic input data mimicking real devices. For output, display emulation captures the guest's graphical buffer and streams it to the host or a remote client; protocols like VNC provide basic pixel-based remote access, while SPICE offers enhanced features including multi-monitor support, audio, and USB redirection for richer experiences. Specific examples include QEMU's virtual USB support, which emulates USB hubs and controllers to connect peripherals such as printers, allowing device passthrough where a host USB device is directly assigned to the VM for native performance. Emulated GPUs, such as virtio-gpu, accelerate rendering by paravirtualizing graphics operations between guest and host, supporting 2D/3D acceleration without full hardware emulation.
In accessibility contexts, virtual I/O devices enable support for braille displays through QEMU's BrlAPI integration, which creates a virtual USB braille device to relay output from guest screen readers to a host-connected physical braille display, and for speech synthesizers via screen-reader software running within the VM, outputting audio through virtual sound devices. These implementations prioritize compatibility and efficiency, often leveraging virtio standards for standardized virtual I/O across hypervisors.
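The HID report mechanism described above can be illustrated by decoding the 8-byte boot-protocol keyboard report that a virtual keyboard injects. The small usage-ID table here is a hypothetical subset of the full HID keyboard usage page:

```python
# Hypothetical subset of the HID keyboard/keypad usage page (0x07).
HID_USAGE = {0x04: "a", 0x05: "b", 0x06: "c", 0x28: "Enter"}
MODIFIERS = ["LCtrl", "LShift", "LAlt", "LGui",
             "RCtrl", "RShift", "RAlt", "RGui"]

def decode_boot_keyboard_report(report: bytes):
    """Decode an 8-byte HID boot-protocol keyboard report:
    byte 0 is a modifier bitmap, byte 1 is reserved, and bytes 2-7 hold
    up to six concurrently pressed key usage IDs (0 means no key)."""
    if len(report) != 8:
        raise ValueError("boot keyboard reports are exactly 8 bytes")
    mods = [name for bit, name in enumerate(MODIFIERS)
            if report[0] & (1 << bit)]
    keys = [HID_USAGE.get(u, f"0x{u:02x}") for u in report[2:] if u]
    return mods, keys

# The report a virtual HID keyboard might forward for Shift+A:
mods, keys = decode_boot_keyboard_report(bytes([0x02, 0, 0x04, 0, 0, 0, 0, 0]))
print(mods, keys)  # ['LShift'] ['a']
```

A virtual HID device produces exactly these fixed-size reports in software, which is why the guest OS cannot distinguish injected input from a physical keyboard.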

Implementation

In Operating Systems

In Unix-like operating systems such as Linux, virtual devices are abstracted at the kernel level through device files located in the /dev directory, which provide a uniform interface for user-space processes to interact with device drivers. These special files, created dynamically by the kernel or tools like udev, represent both physical and virtual hardware, allowing operations like read, write, and control via standard system calls. For instance, virtual storage devices may appear as block device files (e.g., /dev/vda), while some virtual network interfaces, such as TUN/TAP, use character device files (e.g., /dev/net/tun) for user-space access, though primary virtual NICs are managed via the networking stack. Kernel drivers for virtual devices, such as those implementing the virtio standard, are typically loaded dynamically as loadable modules to extend functionality without recompiling the kernel. The virtio-pci module, for example, detects virtual devices during PCI bus enumeration (using vendor ID 0x1af4 and specific device IDs) and exposes them via virtqueues—shared memory ring buffers that facilitate efficient communication between the guest and host. This modular approach enables on-demand loading for paravirtualized devices like virtual disks or consoles, improving system flexibility. User-space implementations further enhance virtual device management by allowing non-privileged processes to create and handle abstractions without kernel modifications. Filesystem in Userspace (FUSE) enables the development of file systems in user space, where a kernel module (fuse.ko) forwards operations to a user-space daemon, effectively presenting storage devices or overlaid file systems as mountable entities at /dev nodes or mount points. Similarly, extended Berkeley Packet Filter (eBPF) supports programmable networking by loading sandboxed bytecode into the kernel from user space, allowing custom packet processing for interfaces without altering kernel code. Examples include eBPF programs for traffic filtering on NICs, executed via hooks like XDP or tc.
Management tools in Linux facilitate the integration and configuration of virtual devices. udev, a userspace daemon, handles hotplugging events for virtual devices by monitoring kernel uevents and creating or removing device files in /dev accordingly, often triggered by module loading or bus changes. For configuration exposure, sysfs (/sys) provides a hierarchical virtual filesystem that exports kernel object attributes, including those for virtual devices under /sys/devices or /sys/class, allowing read/write access to parameters like device state or queue sizes via simple file operations. Cross-operating-system comparisons highlight variations in virtual device handling. In Windows, the Windows Management Instrumentation (WMI) framework enumerates and manages virtual devices through a query-based interface, using classes like Win32_PnPEntity to enumerate devices and providers for operations, often integrated with tools like PowerShell for scripting. In contrast, Linux relies on ioctl system calls for fine-grained control of virtual devices, where applications pass commands to drivers via file descriptors on /dev nodes, with request numbers defined by subsystem-specific macros for type-safe interactions.

In Virtualization and Emulation

In virtualization environments, hypervisors play a central role in provisioning virtual devices to virtual machines (VMs) by abstracting and allocating physical hardware resources such as CPUs, memory, and I/O devices. Type 1 hypervisors, which run directly on bare-metal hardware without an underlying host operating system, enable efficient device provisioning by creating logical resource pools and directly mapping physical devices to VMs, as seen in examples like VMware ESXi and Xen. In contrast, Type 2 hypervisors operate on top of a host OS, leveraging its services for I/O and memory management to provision virtual devices, which introduces an additional layer but offers greater flexibility; VMware Workstation exemplifies this hosted approach. This distinction affects performance, with Type 1 hypervisors providing closer-to-native device access due to minimal overhead. Device models in hypervisors often employ a backend-frontend architecture to separate guest-facing emulation from host resource management, particularly in frameworks like QEMU. The frontend, visible to the guest OS, emulates device interfaces such as VirtIO network cards or SCSI disks, while the backend handles actual host interactions, such as block storage via LVM or user-mode networking with port forwarding. This model allows dynamic addition or removal of devices during VM runtime via protocols like the QEMU Monitor Protocol (QMP). Emulation techniques further differentiate approaches: full emulation simulates hardware behavior completely in software, enabling unmodified guest OSes to run on emulated devices without awareness of virtualization, as in VMware ESXi's emulated legacy devices. Paravirtualization, however, optimizes performance by using modified guest drivers that communicate directly with the hypervisor through hypercalls, reducing emulation overhead; VirtIO devices in Xen or KVM exemplify this, though it requires OS adaptations.
Hardware acceleration techniques like GPU passthrough enhance this by assigning physical GPUs directly to VMs via mechanisms such as Hyper-V's Discrete Device Assignment (DDA), bypassing emulation for near-native performance in graphics-intensive tasks while requiring IOMMU support for isolation. Specific frameworks integrate these concepts deeply with host systems. KVM, embedded as a module in the Linux kernel, turns the kernel into a hypervisor by utilizing its memory manager, scheduler, and device drivers to virtualize hardware like network cards and disks, treating each VM as a kernel-scheduled process for efficient resource sharing. Similarly, the Android Emulator uses Android Virtual Devices (AVDs) to simulate mobile hardware, defining profiles for components like screens, RAM, sensors (e.g., GPS), and storage, which run on a virtualized Android OS image to mimic real device behavior for app testing. Performance optimizations ensure virtual devices operate scalably in dynamic environments. Memory ballooning, for instance, employs a virtio-balloon device in the guest OS to dynamically inflate or deflate allocated memory, allowing the host to reclaim idle pages from overcommitted VMs without downtime, provided balloon drivers are installed. For mobility, live migration transfers VM state—including memory and network connections—between hosts while preserving access to persistent virtual devices through shared storage systems, ensuring continuity for disks and other stateful components during the process. Guest OS drivers interface with these virtual devices throughout, maintaining compatibility across the virtualization stack.
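Device hotplug over QMP, mentioned above, is a JSON exchange on a control socket. The helper below only serializes the command objects; it does not talk to a live QEMU instance, and the `drive0`/`disk1` identifiers are illustrative rather than fixed QEMU names:

```python
import json
from typing import Optional

def qmp_command(execute: str, arguments: Optional[dict] = None) -> str:
    """Serialize one QMP command. QMP messages are JSON objects with an
    "execute" key naming the command and an optional "arguments" object."""
    msg = {"execute": execute}
    if arguments:
        msg["arguments"] = arguments
    return json.dumps(msg)

# A plausible hotplug sequence a management tool might send:
session = [
    qmp_command("qmp_capabilities"),  # leave capabilities-negotiation mode
    qmp_command("blockdev-add", {
        "driver": "qcow2",
        "node-name": "drive0",
        "file": {"driver": "file", "filename": "/var/lib/vm/disk.qcow2"},
    }),
    qmp_command("device_add", {
        "driver": "virtio-blk-pci",  # paravirtualized block frontend
        "drive": "drive0",
        "id": "disk1",
    }),
]
for line in session:
    print(line)
```

In a real session each command would be written to QEMU's QMP socket and the matching `{"return": ...}` or `{"error": ...}` reply read back before sending the next.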

Applications

In Software Development and Testing

In software development and testing, virtual devices enable the emulation of hardware environments, allowing developers to assess application compatibility across diverse configurations without procuring physical hardware. This approach is essential for simulating rare or discontinued devices that may still be in use by end-users. For example, the Android Emulator, through Android Virtual Devices (AVDs), permits testing of mobile applications on a variety of virtualized Android devices, including different screen sizes, hardware sensors, and API levels. Virtual devices also support load simulation by replicating network conditions, battery levels, and resource constraints to evaluate application performance under stress. Tools such as Genymotion provide high-performance emulators tailored for mobile testing, offering features like dynamic rooting and parallel execution to accelerate development cycles. QEMU serves as a versatile tool for cross-platform binary testing, emulating various architectures and devices to verify software behavior on non-native hardware. Its QTest framework facilitates device model testing and control over virtualized components, ensuring portability across Linux and Windows hosts. In continuous integration and continuous delivery (CI/CD) pipelines, virtual devices allow for the automated provisioning of testing environments, enabling efficient test execution without manual setup. Genymotion integrates seamlessly with CI servers and testing frameworks based on ADB, supporting parallel runs of test scripts across multiple virtual instances for consistent validation. Debugging is enhanced through device traces and logs generated by these emulators, providing detailed insights into application execution that mirror physical device behavior. For Android development, the emulator's integration with Android Studio allows inspection of method traces and system events via the CPU Profiler.
Specific scenarios include Internet of Things (IoT) development, where virtual sensors simulate environmental data using models trained on physical inputs, reducing the need for extensive hardware deployments during prototyping and testing. One such approach involves deploying ML-based virtual sensors on edge devices to estimate temperature and humidity in unmonitored areas, achieving high accuracy with models like random forests (MAE ~0.4–1.3°C). In game development, emulated controllers provide a means to test input handling without physical peripherals. Apple's GCVirtualController, for instance, allows developers to create and customize on-screen gamepads that integrate with the Game Controller framework, facilitating input testing and compatibility verification.
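As a toy version of the virtual-sensor idea, the sketch below estimates a reading at an unmonitored spot as a weighted combination of nearby physical sensors. The weights stand in for the trained regression models described above, and every name and value here is invented for illustration:

```python
def virtual_sensor(readings: dict, weights: dict) -> float:
    """Estimate a quantity at an uninstrumented location as a weighted
    average of physical sensor readings. In a real deployment the weights
    would come from fitting against a temporarily installed reference sensor."""
    total = sum(weights.values())
    if total == 0:
        raise ValueError("weights must not sum to zero")
    return sum(readings[name] * w for name, w in weights.items()) / total

# Invented example: estimate the temperature mid-room from three sensors.
readings = {"window": 18.0, "ceiling": 24.0, "door": 20.0}  # degrees C
weights = {"window": 0.5, "ceiling": 0.2, "door": 0.3}      # learned offline
print(round(virtual_sensor(readings, weights), 1))  # 19.8
```

The random-forest variants cited above replace this linear combination with a learned nonlinear model but keep the same structure: physical inputs in, a synthetic reading out.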

In Cloud and Enterprise Computing

In cloud and enterprise computing, virtual devices enable scalable infrastructure by abstracting physical hardware into software-managed resources, supporting large-scale deployments on Infrastructure-as-a-Service (IaaS) platforms. Amazon Elastic Block Store (EBS) provides block storage devices that attach directly to Elastic Compute Cloud (EC2) instances, delivering persistent, high-performance storage volumes for databases and applications with low-latency access and snapshot capabilities for data protection. Similarly, Elastic Network Interfaces (ENIs) act as virtual network interface cards (NICs) within Amazon Virtual Private Cloud (VPC), allowing EC2 instances to be configured with multiple private IP addresses, security groups, and elastic IPs for flexible networking without hardware reconfiguration. In Microsoft Azure, Virtual Network (VNet) isolates workloads using virtual NICs attached to virtual machines, providing scalable connectivity with features like subnet segmentation and integration with Azure Load Balancer for traffic distribution. Virtual network devices, as key components, ensure reliable connectivity by mimicking physical interfaces in software. Auto-scaling mechanisms further enhance virtual device utilization in cloud environments, particularly for compute-intensive tasks. Cloud providers like Google Cloud support virtual GPUs (vGPUs) on machine types such as A3 and A4, where GPUs are virtualized and provisioned dynamically to instances, enabling automatic scaling based on demand for AI training and graphics workloads. This allows enterprises to adjust GPU resources elastically without over-provisioning physical hardware, optimizing costs through pay-as-you-go models integrated with orchestration tools like Kubernetes. Enterprise features leverage virtual devices for advanced network and storage management. Network Function Virtualization (NFV) decouples network functions like virtual routers and firewalls from dedicated appliances, running them as virtual network functions (VNFs) on commodity servers to improve scalability and reduce operational costs through resource sharing.
VMware vSAN implements software-defined storage by pooling local disks across cluster hosts into a shared virtual datastore, supporting policy-based management for virtual machines with features such as replication, and achieving up to 300,000 IOPS per node for enterprise-grade performance. Real-world deployments highlight virtual devices' role in operational efficiency. In one case, a large enterprise migrated over 2,200 applications from on-premises data centers to AWS using virtual block and network devices, completing the process in 24 months and enabling seamless workload portability across environments. Hybrid cloud setups often employ device pooling to unify resources, as demonstrated in virtual cloud pool architectures that integrate on-premises virtual devices with public cloud services, allowing dynamic allocation for bursty workloads while maintaining compliance and low latency. Standards ensure interoperability and reliability in these environments. The Open Virtualization Format (OVF), developed by the Distributed Management Task Force (DMTF), standardizes portable virtual device configurations through XML-based packaging of virtual systems, including hardware descriptions and disk images, facilitating migration across platforms without vendor lock-in. Compliance with PCI Special Interest Group (PCI-SIG) specifications, notably Single Root I/O Virtualization (SR-IOV), enables a single physical PCIe device to present itself as multiple virtual functions, bypassing hypervisor overhead for direct I/O access in virtualized data centers.

Advantages and Limitations

Advantages

Virtual devices offer significant cost efficiency compared to physical counterparts by reducing the need for dedicated hardware purchases and ongoing maintenance expenses. Through server consolidation and resource pooling, organizations can increase utilization from a typical 5-15% to 60-80%, lowering capital expenditures (CapEx) and the operational costs associated with physical device proliferation. For instance, virtual storage devices abstract physical arrays into a unified pool, eliminating the costs of underutilized silos and enabling pay-as-you-grow models. The flexibility and scalability of virtual devices allow for rapid provisioning, cloning, and resizing without physical reconfiguration, supporting dynamic workloads in multi-tenant environments. Virtual network devices, such as software-defined virtual switches (vSwitches), enable seamless resource reallocation and tenant isolation, preventing interference between tenants while scaling network capacity on demand. Similarly, virtual I/O devices facilitate load balancing across multiple physical channels, aggregating resources like disks into larger logical units to adapt to fluctuating demands efficiently. Portability is a key advantage, as virtual devices support live migration across hosts without downtime or reconfiguration, enhancing disaster recovery through snapshots and replication. This decoupling of logical devices from physical hardware allows virtual machines (VMs) to move between heterogeneous systems while maintaining consistent configurations and connectivity. In storage virtualization, data migration occurs in the background while preserving logical addresses, ensuring business continuity during failures or upgrades. Enhanced management is achieved via centralized control and automated orchestration, optimizing resource utilization through techniques like overcommitment, where allocated virtual resources exceed physical capacity without performance degradation.
Virtual devices simplify administration with uniform interfaces, such as hypervisor-managed virtual NICs (vNICs), reducing complexity in large-scale deployments and enabling features like transparent encryption or intrusion detection. This streamlined approach lowers administrative overhead and improves overall efficiency in cloud environments.

Limitations and Challenges

Virtual devices introduce significant performance overhead due to the abstraction layers required to virtualize physical hardware, often resulting in increased I/O latency and CPU utilization. For instance, I/O operations must traverse both guest and host software stacks, leading to bottlenecks in high-throughput scenarios such as disk or network access, where early systems exhibited penalties exceeding a factor of two compared to native performance. As reported in a 2018 study, implementations like those in KVM showed disk I/O throughput overheads of up to 69% and latency increases of 256% for sequential workloads, alongside CPU taxes of approximately 7% at full utilization and higher in multi-VM environments. These inefficiencies are exacerbated in dense deployments, where shared resources amplify contention. Security risks associated with virtual devices stem primarily from hypervisor vulnerabilities and shared resource exposures, potentially enabling attacks that compromise isolation. VM escape attacks, for example, allow malicious code within a guest to break out and access the host or other VMs via flaws in virtual device emulation, such as buffer overflows in I/O handlers. Additionally, direct memory access (DMA) capabilities in virtualized I/O devices pose risks by permitting arbitrary memory access if not properly isolated, with side-channel leaks possible through shared caches or timing variations in emulated interfaces. Hypervisor bugs further widen the attack surface, as a single flaw can expose all virtual devices across the host. Compatibility issues arise because not all hardware features can be faithfully reproduced in virtual devices, particularly proprietary drivers and complex interfaces designed without virtualization in mind. Full device emulation sacrifices performance to maintain compatibility, but it often fails to replicate specialized functions like advanced graphics acceleration or custom firmware behaviors, leading to incomplete support.
Paravirtualization addresses some inefficiencies by using simplified interfaces but requires guest modifications, reducing broad compatibility and contributing to lock-in with specific ecosystems. Management complexities in virtual device environments include faults that span abstraction layers and contention handling in multi-tenant setups. Virtual device faults, such as I/O errors, are harder to trace due to the abstraction, often requiring specialized tools to correlate guest and host logs. Resource contention, particularly for storage I/O, causes unpredictable performance degradation when multiple tenants exceed limits like queue capacity, demanding ongoing monitoring and QoS policies to mitigate scheduling delays. Historical advancements in hardware-assisted virtualization, such as the IOMMU for secure device passthrough, have helped alleviate some DMA-related risks but do not eliminate broader management overheads.

  16. [16]
    Google Releases Android SDK and Emulator - OSnews
    Nov 12, 2007 · The SDK is available for Linux, Mac and Win and it includes an emulator. Video here. Update: The WebKit browser failed to render the desktop ...
  17. [17]
    What is a Container? - Docker
    Docker container technology was launched in 2013 as an open source Docker Engine. It leveraged existing computing concepts around containers and ...
  18. [18]
  19. [19]
    Disk Image Formats (VHD, VHDX, AVHDX, and VMDK) | XenCenter®
    Format, Description. Virtual Hard Disk (VHD), VHD is a group of virtual disk image formats specified by Microsoft as part of their Open Specification Promise.
  20. [20]
    Chapter 13. Managing Storage for Virtual Machines | 7
    Virtual storage is abstracted from the physical storage allocated to a virtual machine connection. The storage is attached to the virtual machine using ...
  21. [21]
    The Guide to VMware Thin Provisioning | CloudBolt Software
    Thin provisioning is a method of efficiently allocating storage resources by presenting a right-sized virtual storage device to a host.
  22. [22]
    Disk Images — QEMU documentation
    In order to use VM snapshots, you must have at least one non removable and writable block device using the qcow2 disk image format. Normally this device is the ...
  23. [23]
    3 Working With Software RAID - Oracle Linux
    Oracle Linux kernel uses the multidisk (MD) driver to support software RAID by creating virtual devices from two or more physical storage devices.
  24. [24]
    Running virtual machines with qemu-system-ARCH | SLES 15 SP7
    This allows QEMU to access iSCSI resources directly and use them as virtual machine block devices. This feature does not require any host iSCSI initiator ...
  25. [25]
    Hyper-V storage I/O performance - Microsoft Learn
    Dec 14, 2023 · This article explores different options and considerations for tuning storage input/output (I/O) performance in a virtual machine (VM).
  26. [26]
    Using Flash Storage Devices with Cache I/O Filters - TechDocs
    Apr 22, 2025 · A cache I/O filter can use a local flash device to cache virtual machine data. If your caching I/O filter uses local flash devices, you need to configure a ...
  27. [27]
    [PDF] Performance Best Practices for VMware vSphere 8.0
    This feature (strictly speaking, a function of the chipset, rather than the CPU) can allow virtual machines to have direct access to hardware I/O devices, such ...
  28. [28]
    3.8. Virtual Network Interface Cards - Red Hat Documentation
    Virtual network interface cards (vNICs) are virtual network interfaces that are based on the physical NICs of a host. Each host can have multiple NICs, and each ...Missing: definition | Show results with:definition
  29. [29]
    1.5. Network | Technical Reference | Red Hat Virtualization | 4.3
    Virtual NICs (VNICs) are logical NICs that operate using the host's physical NICs. They provide network connectivity to virtual machines. Bonds bind multiple ...Missing: vNIC | Show results with:vNIC
  30. [30]
    [PDF] The Design and Implementation of Open vSwitch - USENIX
    May 4, 2015 · We describe the design and implementation of Open. vSwitch, a multi-layer, open source virtual switch for all major hypervisor platforms.
  31. [31]
    Chapter 4. Configuring VLAN tagging | Red Hat Enterprise Linux | 10
    VLAN tagging can be configured using `nmcli`, the RHEL web console, `nmtui`, `nmstatectl`, or RHEL system roles. `nmcli` can be used via command line.
  32. [32]
    Chapter 5. KVM Paravirtualized (virtio) Drivers
    Virtio drivers are KVM's paravirtualized device drivers that enhance guest performance by decreasing I/O latency and increasing throughput.<|separator|>
  33. [33]
    Universal TUN/TAP device driver - The Linux Kernel documentation
    TUN/TAP provides packet reception/transmission for user space programs, acting as a point-to-point device. TUN works with IP frames, and TAP with Ethernet ...
  34. [34]
  35. [35]
    17.14. Applying Network Filtering | Red Hat Enterprise Linux | 7
    CTRL_IP_LEARNING=dhcp (DHCP snooping) provides additional anti-spoofing security ... prevents a guest virtual machine's interface from MAC, IP, and ARP spoofing.
  36. [36]
    Write a HID Source Driver by Using Virtual HID Framework (VHF)
    Apr 22, 2025 · Learn about writing a HID source driver that reports HID data to the operating system. A HID input device, such as – a keyboard, mouse, pen ...
  37. [37]
    Invocation — QEMU documentation
    Jun 17, 2006 · If this option is set, VNC client may receive lossy framebuffer updates depending on its encoding settings. Enabling this option can save a ...Standard Options · Block Device Options · Network Options
  38. [38]
    Spice User Manual
    Spice is an open remote computing solution providing client access to remote displays and devices, mainly for virtual machines.
  39. [39]
    devices.txt - The Linux Kernel Archives
    This device is used by the user-mode virtual kernel port. 99 char Raw parallel ports 0 = /dev/parport0 First parallel port 1 = /dev/parport1 Second parallel ...
  40. [40]
    Virtio on Linux - The Linux Kernel documentation
    Virtio is an open standard for communication between drivers and devices, exposed as physical devices using shared memory and virtqueues.Missing: loading | Show results with:loading
  41. [41]
    404 Not Found
    Insufficient relevant content.
  42. [42]
  43. [43]
    [PDF] udev – A Userspace Implementation of devfs
    Starting with the 2.5 kernel, all physical and virtual devices in a system are visible to userspace in a hierarchal fashion through sysfs. /sbin/hotplug ...
  44. [44]
    sysfs - _The_ filesystem for exporting kernel objects
    Aug 16, 2011 · sysfs is a RAM-based filesystem initially based on ramfs. It provides a means to export kernel data structures, their attributes, and the linkages between them ...Missing: exposure | Show results with:exposure
  45. [45]
    Windows Management Instrumentation - Win32 apps
    ### Summary: WMI for Virtual Device Enumeration
  46. [46]
    ioctl based interfaces — The Linux Kernel documentation
    ioctl() is the most common way for applications to interface with device drivers, using a command number as the second argument.
  47. [47]
    Hypervisors and virtualization in a Cloud environment - IBM Developer
    May 19, 2024 · Type 1 hypervisors run directly on the system hardware. Type 2 hypervisors run on a host operating system that provides virtualization services, ...
  48. [48]
    Full virtualization vs. paravirtualization: Key differences - TechTarget
    Mar 11, 2024 · The difference between full virtualization and paravirtualization is the level of isolation between the OS and hypervisor.
  49. [49]
    Hyper-V GPU Passthrough: A Beginner's Guide - NAKIVO
    Jul 16, 2024 · GPU Passthrough is a feature that allows you to connect a physical video card installed on a physical host to a virtual machine without emulation.
  50. [50]
    What is KVM? - Red Hat
    Nov 1, 2024 · Kernel-based virtual machines (KVM) are an open source virtualization technology that turns Linux into a hypervisor.What does a hypervisor do? · Benefits of virtualization · Features of KVM
  51. [51]
    Create and manage virtual devices | Android Studio
    An Android Virtual Device (AVD) is a configuration that defines the characteristics of an Android phone, tablet, Wear OS, Android TV, or Automotive OS device ...
  52. [52]
    Chapter 17. Optimizing virtual machine performance | 8
    Optional: If you adjusted the current VM memory, you can obtain the memory balloon statistics of the VM to evaluate how effectively it regulates its memory use.
  53. [53]
    What is live migration? - Red Hat
    Oct 22, 2024 · Live migration is the process of moving a virtual machine (VM) from one host to another without interrupting access to the VM.
  54. [54]
    Run apps on the Android Emulator | Android Studio
    Sep 19, 2024 · The Android Emulator lets you test your app on many different devices virtually. The emulator comes with Android Studio, so you don't need to install it ...Create and manage virtual... · Emulator · Configure hardware acceleration
  55. [55]
    Mobile Testing and Development - Genymotion Android Emulator
    Genymotion Cloud solutions let you run any number of virtual devices in parallel, for a fixed cost per minute and per device.
  56. [56]
    Testing in QEMU
    QTest is a device emulation testing framework. It can be very useful to test device models; it could also control certain aspects of QEMU (such as virtual clock ...
  57. [57]
    Inspect traces | Android Studio
    Jan 3, 2024 · The trace view in the CPU Profiler provides several ways to view information from recorded traces. For method traces and function traces, ...
  58. [58]
    Enabling Artificial Intelligent Virtual Sensors in an IoT Environment
    This paper suggests replacing physical sensors with machine learning (ML) models. These software-based artificial intelligence models are called virtual ...
  59. [59]
    GCVirtualController | Apple Developer Documentation
    Use a virtual controller to display software controls that you can customize over your game. You create a virtual controller from a configuration.
  60. [60]
    5 Benefits of Virtualization - IBM
    Simply put, one of the main advantages of virtualization is that it's a more efficient use of the physical computer hardware; this, in turn, provides a greater ...
  61. [61]
    What is Storage Virtualization?
    Jan 7, 2020 · Benefits of Storage Virtualization · Enables dynamic storage utilization and virtual scalability of attached storage resources, both block and ...
  62. [62]
    Network Virtualization and Software Defined Networking for Cloud ...
    Virtual devices are easier to manage because they are soft- ware-based and expose a uniform interface through standard abstractions. VIRTUALIZATION IN COMPUTING.
  63. [63]
    I/O Virtualization - ACM Queue
    Nov 22, 2011 · Benefits. Many of the benefits of virtualized systems depend on the decoupling of a VM's logical I/O devices from its physical implementation. ...
  64. [64]
    [PDF] I/O virtualization - Waldspurger.org
    Although pass-through mode can remove I/O virtualization overheads, it introduces several limitations and implementation challenges that have slowed its ...Missing: risks | Show results with:risks
  65. [65]
    [PDF] Performance, Resource, and Power Usage Overheads in Clouds
    Apr 9, 2018 · As shown in Table 4, the extra CPU overhead imposed by virtual- ization with XEN (46%) was greater than that for the other tested platforms due ...
  66. [66]
    A Review of Virtualization, Hypervisor and VM Allocation Security: Threats, Vulnerabilities, and Countermeasures
    **Summary of Security Threats, Vulnerabilities, and Countermeasures (IEEE Document 8947629)**
  67. [67]
    How resource contention affects VM storage performance - TechTarget
    May 28, 2019 · VM performance issues often occur when resource contention is present. Admins can check the number of IOPS requests and storage array connections to address ...