
Parallel Virtual Machine

The Parallel Virtual Machine (PVM) is a message-passing interface and runtime environment for parallel and distributed computing that enables a collection of Unix, Windows, or other computers to function as a single, cohesive parallel computational resource. Developed to leverage existing hardware for solving large-scale scientific and engineering problems, PVM allows programmers to create, deploy, and manage parallel tasks across distributed systems using a library of functions for process spawning, communication, and synchronization. Its core design supports dynamic reconfiguration of the virtual machine, making it adaptable to varying network topologies and computational loads.

PVM originated from a collaborative effort initiated in the summer of 1989 at Oak Ridge National Laboratory, with key contributions from researchers including Al Geist and Vaidy Sunderam, in partnership with the University of Tennessee, Emory University, and Carnegie Mellon University. The first internal prototype (Version 1) emerged shortly thereafter, followed by the public release of Version 2 in March 1991 and Version 3 in February 1993, which introduced enhanced support for fault tolerance and group communications. By the mid-1990s, PVM had become a de facto standard for heterogeneous network computing, influencing subsequent systems like the Message Passing Interface (MPI).

Key features of PVM include a portable message-passing model supporting asynchronous and synchronous communication via functions like pvm_send and pvm_recv, dynamic task creation with pvm_spawn, and built-in data packing/unpacking for heterogeneous architectures. It provides fault detection and recovery mechanisms, such as pvm_notify for host failure alerts, along with tools like XPVM for monitoring and debugging of parallel executions. The system supports languages including C, C++, and Fortran through libraries like libpvm, and enables advanced operations such as multicasting, broadcasting, and collective reductions. Although the last major release, Version 3.4.6, occurred in 2009, PVM remains a foundational tool in legacy applications and educational contexts.

History

Origins and Development

The Parallel Virtual Machine (PVM) project originated in the summer of 1989 at Oak Ridge National Laboratory (ORNL), where it was conceived as an experimental prototype to enable concurrent computing on heterogeneous Unix systems. The initial prototype, known as PVM 1.0, was developed by Vaidy Sunderam from Emory University and Al Geist from ORNL, focusing on creating a portable environment for parallel programming across diverse hardware. This effort addressed the growing need in the late 1980s for a unified system to harness computational resources from workstations, multiprocessors, and supercomputers that lacked a single common architecture.

The primary motivations for PVM stemmed from the limitations of existing parallel computing tools, which were often tied to specific hardware or homogeneous clusters, making them unsuitable for the increasingly networked and varied computing landscapes in research settings. Developers aimed to provide a message-passing interface that allowed programmers to treat a network of heterogeneous machines as a single virtual parallel processor, promoting portability and ease of use for scientific applications. Key challenges targeted included architectural differences, varying operating systems, incompatible network protocols, and discrepancies in data formats and computational speeds, all of which hindered seamless task distribution and communication.

In 1991, the collaboration expanded to include the University of Tennessee alongside ORNL and Emory University, forming the core team for the Heterogeneous Network Computing project and accelerating PVM's refinement. This partnership led to the public release of PVM version 2 in March 1991, marking the system's first widespread availability. By 1990, early versions had already seen initial demonstrations through academic publications and internal testing, fostering adoption in research environments for distributed simulations and computations. These foundational efforts laid the groundwork for PVM's evolution into subsequent versions that broadened its applicability.

Major Releases

The Parallel Virtual Machine (PVM) project saw its first major public release with version 2.0 in March 1991, developed by researchers at the University of Tennessee. This version introduced core functionalities such as basic message passing between processes and task spawning to create parallel tasks across networked hosts, marking a shift from the earlier experimental prototype at Oak Ridge National Laboratory (ORNL).

Version 3.0 followed in February 1993, representing a comprehensive redesign focused on enhancing scalability and robustness. Key additions included fault tolerance mechanisms to handle host failures without system-wide crashes, dynamic process management for load balancing across machines, and improved portability to support various Unix variants and early non-Unix systems. These changes enabled PVM to manage virtual machines with hundreds of heterogeneous hosts more effectively. Licensing also evolved during this period, transitioning from initial proprietary elements in earlier prototypes to open distribution under BSD and GNU GPL terms by version 3, facilitating broader adoption in academic and research environments.

Subsequent development emphasized maintenance and platform expansion through minor releases. Version 3.4, first released in 1997, brought significant enhancements for Windows support, including integration with Windows networking protocols and compatibility tweaks for mixed Unix-Windows environments, broadening PVM's applicability beyond Unix-dominant clusters. Further platform adaptations followed in 2003. The final stable release, 3.4.6, arrived on February 2, 2009, primarily addressing bug fixes, improved compatibility with evolving Linux distributions and compilers, and refinements for 64-bit architectures to ensure ongoing usability on contemporary hardware. Active development of PVM ceased after the 3.4.6 release, with no further updates issued since; the project was subsequently archived at ORNL, preserving its codebase and documentation for legacy use.

System Architecture

Core Components

The Parallel Virtual Machine (PVM) system relies on several fundamental software components to enable distributed computing across heterogeneous networks. At its core is the pvmd (PVM daemon), a process that runs on each participating host to orchestrate the virtual machine's operations. The pvmd manages task creation and termination, routes messages between processes, performs data format conversions for heterogeneous architectures, and monitors host status across the network. It acts as a coordinator, abstracting underlying hardware differences by maintaining a dynamic host table and supporting fault detection and recovery mechanisms, such as notifying other daemons of failures. In a typical setup, one pvmd serves as the master daemon, while others operate as slaves, communicating via UDP sockets to form a scalable, decentralized structure capable of handling up to hundreds of hosts.

Complementing the pvmd is libpvm, a user-level library that provides the primary application programming interface (API) for integrating parallel programs with the PVM environment. This library includes functions for querying the virtual machine (e.g., pvm_config to inspect the current configuration), dynamically adding or removing hosts, and managing task execution contexts. Written primarily in C, with bindings for Fortran and C++, libpvm ensures portability by separating machine-independent logic from platform-specific implementations, allowing applications to interact seamlessly with the pvmd for resource allocation and coordination. It supports non-blocking operations through wait contexts and enables multiple message buffers for efficient data handling.

For visualization and monitoring, PVM incorporates xpvm, a graphical user interface tool introduced in version 3 to aid in debugging and performance analysis. Xpvm displays real-time views of the virtual machine, including network topologies, task execution timelines (space-time graphs), message flows, and resource utilization metrics, helping users track dynamic behaviors across distributed hosts. Built using X Windows and the Tcl/Tk toolkit, it collects trace data generated by libpvm routines and pvmd events, presenting them in an intuitive format without interrupting application execution. This tool enhances usability by allowing interactive spawning of tasks and addition of hosts directly from the interface.

Supporting these primary elements are various utilities that facilitate setup and maintenance. Startup scripts in the PVM distribution automate pvmd initialization, handling master-slave configurations and parsing host files to launch daemons across the network via commands like rsh or rexec. Trace analyzers process event logs, enabling detailed post-execution analysis of communication patterns and bottlenecks. Configuration files, such as .pvmrc in the user's home directory, store host lists, environment variables (e.g., PVM_ROOT and PVM_ARCH), and default options to customize the virtual machine's behavior and ensure consistent operation across sessions. Together, these components interact through a daemon-centric model where pvmd serves as the coordinator, libpvm bridges applications to the daemons, xpvm provides visual oversight, and utilities streamline administrative tasks, collectively abstracting the complexities of parallel execution on diverse hardware.

Virtual Machine Configuration

The configuration of a Parallel Virtual Machine (PVM) involves assembling a collection of heterogeneous hosts into a unified computational resource, managed primarily through the PVM daemon (pvmd) on each machine. Hosts are added to the virtual machine either statically via a hostfile, which lists machine names and optional parameters such as architecture or working directories, or dynamically using the pvm_addhosts library routine or console command. This routine accepts an array of host identifiers and spawns pvmd instances on remote machines via remote shell mechanisms like rsh or rexec, enabling integration across diverse network environments. PVM supports heterogeneous networks, including TCP/IP over Ethernet, with UDP for inter-daemon communication and TCP for task-to-task messaging, while External Data Representation (XDR) encoding ensures data portability across differing architectures such as SPARC or Alpha.

Resource allocation in PVM is handled by the pvmd, which acts as a local resource manager, balancing loads during task spawning by querying CPU utilization and other host metrics before assigning processes. Dynamic addition and removal of hosts occur at runtime without halting the virtual machine; pvm_addhosts incorporates new nodes through a multi-phase commit to synchronize the host table across all pvmds, while pvm_delhosts removes them similarly, updating configurations in phases to maintain consistency. For fault tolerance, a pvmd detects node failures via communication timeouts in the pvmd-pvmd protocol and invokes its host-failure handling (hostfailentry) to terminate affected tasks, with applications notified through pvm_notify calls for events like host deletion or task exits.

PVM's scalability extends to large configurations by employing a decentralized master-slave organization among pvmd instances, where a master pvmd coordinates initial slave startups and host table updates, reducing overhead in large clusters through efficient, non-centralized management. In this setup, pvmd processes assist in starting remote daemons, and hierarchical message routing optimizes inter-pvmd communication for expansive configurations. Security is enforced through basic host permissions, relying on trusted remote access via .rhosts files for remote startup, and the PVM_ROOT environment variable, which specifies the installation directory and restricts access to PVM binaries and temporary authentication files in /tmp.

At runtime, PVM abstracts the distributed physical hosts as a single virtual machine, providing a unified illusion for parallel tasks through task identifiers (TIDs) that uniquely address processes, pvmds, and groups across the network. This model allows tasks to spawn, communicate, and terminate as if operating within a cohesive system, with the pvmd handling the underlying distribution and heterogeneity transparently.
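The static hostfile mentioned above has a simple line-oriented format. The sketch below uses hypothetical hostnames; the lo=, sp=, dx=, and ep= options set an alternate login name, a relative speed rating, the path to the pvmd3 daemon, and the executable search path, and a leading & marks a host that is listed in the configuration but not started until added manually from the console.

```
# PVM hostfile: one host per line, '#' starts a comment.
node1.example.org
node2.example.org  lo=alice sp=2000            # alternate login, speed rating
node3.example.org  dx=/opt/pvm3/lib/pvmd3      # explicit daemon location
& node4.example.org ep=$HOME/pvm3/bin/$PVM_ARCH # listed but not auto-started
```

The console is started with this file as an argument (pvm hostfile), after which add and delete commands adjust the configuration at runtime.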

Programming Model

Message Passing

The message-passing model in the Parallel Virtual Machine (PVM) enables communication between tasks across heterogeneous environments, supporting both point-to-point and collective operations to facilitate data exchange in distributed parallel programs. PVM's messaging routines are designed for portability, using strongly typed buffering to handle data across different architectures, and rely on daemon processes (pvmds) for routing messages between hosts.

Point-to-point communication forms the foundation of PVM's messaging, with pvm_send dispatching an asynchronous message from the sender's active send buffer to a specific task identified by its task identifier (TID), accompanied by a user-defined message tag for matching. The receiving task employs pvm_recv, a blocking call that waits for a message matching the specified TID and tag (the wildcard value -1 matches any TID or any tag), returning a buffer ID upon success; unmatched messages are queued at the destination host. This design ensures reliable delivery via sockets for task-to-task transfers, though it can lead to memory buildup if outstanding messages accumulate without prompt reception.

For group-based coordination, PVM provides collective operations, including pvm_bcast to asynchronously broadcast a message to all members of a predefined group (excluding the sender), routed efficiently through the daemons. pvm_mcast extends this to multicasting a message to a user-specified list of TIDs, preserving order via daemon-mediated 1:N distribution, which is particularly useful for subsets of tasks without full group membership. Synchronization is achieved with pvm_barrier, which blocks all calling tasks in a group until a specified count of members invoke it, enabling coordinated progress in parallel computations.

Data handling in PVM addresses heterogeneity through explicit packing and unpacking routines, initiated by pvm_initsend to prepare the active send buffer with encoding options like XDR for cross-platform compatibility (handling differences in byte order and data representation). The pvm_pk* family of functions (e.g., pvm_pkint for integers, pvm_pkfloat for floats, pvm_pkstr for strings) serializes application data into the buffer, supporting arrays and strides for efficient transfer of complex structures like multidimensional arrays. On the receiving end, pvm_upk* functions (e.g., pvm_upkint, pvm_upkfloat) extract data from the active receive buffer in the exact packing order, ensuring type-safe deserialization across diverse hardware.

To support overlap of communication and computation, PVM includes non-blocking variants such as pvm_nrecv, which probes for a matching message without blocking and returns a buffer ID if available (or zero otherwise), allowing the task to proceed with other work until pvm_probe or a repeated pvm_nrecv polls for completion. This asynchronous receive mechanism integrates with native system calls where possible, reducing idle time in latency-bound applications.

Performance in PVM's message passing is influenced by its user-space implementation, where pvmd daemons mediate routing, introducing overhead from context switches, data copying, and socket management, typically adding 100-500 microseconds of latency per message on Ethernet-based systems. Direct task-to-task routing, enabled via pvm_setopt, can halve this overhead by bypassing daemons for local or connected hosts, but scalability is constrained by Unix file descriptor limits (e.g., around 64 simultaneous connections). While optimized for local area networks (LANs) like Ethernet or FDDI, where low-latency TCP/UDP transports yield bandwidths up to 10 Mbps with minimal contention, wide area networks (WANs) suffer from amplified daemon routing delays and packet loss retries, making PVM less efficient for geographically distributed setups beyond basic synchronization.
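The send side of this model can be sketched in a few lines of C. The example below packs an integer array and ships it to a spawned peer; it assumes a working PVM installation (pvm3.h and libpvm3 available) and a "worker" binary built for the local architecture, so the executable name and tag value are illustrative, not part of PVM itself.

```c
/* Sketch of PVM point-to-point messaging: pack an integer array into the
 * active send buffer and send it to a spawned peer.  Assumes a running
 * PVM virtual machine; "worker" and MSG_DATA are illustrative. */
#include <stdio.h>
#include "pvm3.h"

#define MSG_DATA 42                   /* user-chosen message tag */

int main(void)
{
    int n = 4, data[4] = {1, 2, 3, 5};
    int peer;

    pvm_mytid();                      /* enroll this process in PVM */

    if (pvm_spawn("worker", NULL, PvmTaskDefault, "", 1, &peer) == 1) {
        pvm_initsend(PvmDataDefault); /* XDR encoding: safe across archs */
        pvm_pkint(&n, 1, 1);          /* pack the element count first */
        pvm_pkint(data, n, 1);        /* then the array, stride 1 */
        pvm_send(peer, MSG_DATA);     /* asynchronous send to the worker */
    } else {
        fprintf(stderr, "spawn failed\n");
    }

    pvm_exit();                       /* leave the virtual machine */
    return 0;
}
```

The spawned worker would mirror this with pvm_recv(-1, MSG_DATA), then pvm_upkint(&n, 1, 1) and pvm_upkint(data, n, 1), unpacking in exactly the order the sender packed.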

Task and Resource Management

In the Parallel Virtual Machine (PVM), task and resource management enables users to create, oversee, and control parallel tasks across a network of hosts, forming the backbone of distributed operations. The system provides a suite of application programming interface (API) functions for dynamic task spawning, status monitoring, and termination, ensuring efficient utilization of the virtual machine's components. These mechanisms are handled primarily through interactions with the PVM daemon (pvmd) on each host and an optional resource manager task that coordinates task placement and configuration updates.

Task spawning is initiated via the pvm_spawn routine, which launches multiple instances of an executable program on specified or automatically selected hosts within the virtual machine. The signature is int pvm_spawn(char *task, char **argv, int flag, char *where, int ntask, int *tids), where task specifies the executable name, argv provides arguments, flag controls options such as PvmTaskDefault for automatic host selection, PvmTaskHost for targeting a specific host, or PvmTaskArch for matching an architecture (e.g., to ensure binary compatibility across heterogeneous systems), where indicates the target host or architecture string, ntask sets the number of tasks to spawn, and tids returns an array of task identifiers (TIDs) for the spawned tasks or error codes for failures. Spawned tasks inherit the parent's environment variables, which can be explicitly exported using the PVM_EXPORT mechanism to pass custom settings like library paths. This process involves the local pvmd forwarding requests to remote daemons, which execute the tasks and assign TIDs for subsequent management. For example, spawning with the default flag and an empty where string distributes tasks across available machines, supporting load balancing in heterogeneous environments.

Monitoring and control of tasks are facilitated by functions that query active processes and allow termination. The pvm_tasks routine, with signature int pvm_tasks(int which, int *ntask, struct pvmtaskinfo **taskp), retrieves a list of active tasks: setting which to 0 lists all tasks in the virtual machine, to a pvmd TID for host-specific tasks, or to a specific task TID for details on one task; it returns the count in ntask and a structure array with details like TID, parent TID, host, and status. Termination uses pvm_kill(int tid), which sends a SIGTERM signal to the task identified by tid; a task should not kill itself (pvm_exit is used instead for graceful departure). For complete shutdown, pvm_halt(void) terminates all tasks, kills the pvmds on every host, and dismantles the virtual machine. Resource queries complement these by providing configuration insights: pvm_config(int *nhost, int *narch, struct pvmhostinfo **hostp) returns the number of hosts (nhost), the number of data formats in use (narch), and a structure array with details including each pvmd's TID, host name, architecture, and relative speed; meanwhile, pvm_bufinfo(int bufid, int *bytes, int *msgtag, int *tid) inspects a message buffer for its size, tag, and source TID, aiding in resource tracking during communication.

Fault tolerance in PVM relies on event notification to handle task or host failures, with limited support for recovery mechanisms. The pvm_notify(int what, int msgtag, int cnt, int *tids) function registers for event messages: what specifies types like PvmTaskExit for task termination, PvmHostDelete for host removal due to failure, or PvmHostAdd for dynamic joins; msgtag assigns a user-defined tag for the notification messages, cnt limits the number of monitored entities, and tids lists the specific task or pvmd TIDs to watch (ignored for host-add events). Upon detection, via periodic pvmd scans or message timeouts, PVM delivers a notification message to the registering task, enabling user-level responses such as respawning failed tasks on surviving hosts. Task migration is not natively supported and requires custom implementation or extensions. This approach ensures applications can adapt to network volatility without full system crashes.

Dynamic load balancing is integrated into the spawning process to distribute tasks evenly across hosts, minimizing idle time and optimizing performance. When pvm_spawn is used with default flags and no specific where, PVM employs a resource manager or daemon-level heuristics, initially cycling round-robin through available hosts and later refined with load metrics from host speed values (set in configuration files and queried via pvm_config), to select a suitable, lightly loaded host. Users can influence placement by specifying priorities or host lists; for instance, in a heterogeneous cluster, architecture matching avoids spawning incompatible binaries, while speed-relative allocation favors higher-performance nodes. This transparent mechanism supports scalable parallelism without explicit user intervention in host selection.
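The spawning, notification, and configuration-query calls described above fit together roughly as in this master-side C sketch. It assumes a running virtual machine and a "worker" binary; the worker count and tag are illustrative choices, not PVM constants.

```c
/* Master-side sketch combining pvm_spawn, pvm_notify, and pvm_config.
 * Assumes a running PVM virtual machine and a "worker" executable. */
#include <stdio.h>
#include "pvm3.h"

#define TAG_EXIT 99                     /* tag carried by exit notifications */
#define NWORK    8                      /* illustrative worker count */

int main(void)
{
    int tids[NWORK], nhost, narch, i;
    struct pvmhostinfo *hosts;

    pvm_mytid();                        /* enroll in the virtual machine */

    int started = pvm_spawn("worker", NULL, PvmTaskDefault,
                            "", NWORK, tids);
    printf("spawned %d of %d workers\n", started, NWORK);

    /* Ask PVM to deliver a TAG_EXIT message when any spawned task exits. */
    if (started > 0)
        pvm_notify(PvmTaskExit, TAG_EXIT, started, tids);

    /* Snapshot the current virtual machine configuration. */
    pvm_config(&nhost, &narch, &hosts);
    for (i = 0; i < nhost; i++)
        printf("host %-20s arch %-8s speed %d\n",
               hosts[i].hi_name, hosts[i].hi_arch, hosts[i].hi_speed);

    pvm_exit();
    return 0;
}
```

A real master would then loop on pvm_recv(-1, TAG_EXIT) to learn of failures and respawn work as needed.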

Implementation Details

Installation and Setup

Installing and setting up the Parallel Virtual Machine (PVM) is a manual process tailored to its design for heterogeneous networks, emphasizing user-level deployment without administrative privileges. Alternatively, on supported Linux distributions like Ubuntu or Fedora, PVM can be installed with the system package manager (e.g., sudo apt install pvm on Ubuntu), which provides pre-built binaries and simplifies setup for homogeneous environments. PVM primarily supports Unix-like operating systems such as Linux, SunOS, AIX, and OSF/1, with limited support for Windows via a Win32 port in later versions such as 3.4. Essential prerequisites include full TCP/IP networking using sockets for UDP and TCP communication between hosts, as well as standard build tools such as make and a C compiler such as gcc to build the software from source. No special system privileges are needed, so any user with a valid login can install PVM on the target machines.

The build process begins with obtaining the PVM source code by downloading the tarball from http://www.netlib.org/pvm3/pvm3.4.6.tgz. Traditional email requests to the netlib server are no longer supported for binary files, though FTP may still be available; web download is recommended. Once the archive is extracted to a directory such as $HOME/pvm3, set the PVM_ROOT environment variable (e.g., setenv PVM_ROOT $HOME/pvm3 in csh, or export PVM_ROOT=$HOME/pvm3 in sh) and append $PVM_ROOT/lib/cshrc.stub or $PVM_ROOT/lib/shrc.stub to the user's shell configuration file (.cshrc or .profile) so that the PVM_ARCH variable is detected and set automatically based on the host architecture (e.g., LINUX, SUN4). Navigate to $PVM_ROOT and execute make to compile the core components, including the pvmd3 daemon binary and the libpvm3.a library, which are placed in $PVM_ROOT/lib/$PVM_ARCH; this typically takes a few minutes on a standard Unix system and supports cross-compilation for specific architectures such as the Intel Paragon by setting PVM_ARCH=PGON before building.

For multi-host setups across a network, identical PVM versions must be installed on all participating machines to ensure compatibility, with binaries built for each host's architecture and stored in architecture-specific directories. Update the PATH environment variable to include $PVM_ROOT/bin/$PVM_ARCH on every host (e.g., setenv PATH $PVM_ROOT/bin/$PVM_ARCH:$PATH); a shared filesystem like NFS is convenient but not required for distributing executables. The PVM daemon (pvmd) must be started manually on each host using $PVM_ROOT/bin/$PVM_ARCH/pvmd3 (or simply pvmd if PATH is set), optionally with flags like -n hostname to specify the host name or -d debugmask for debugging; for automated startup on remote hosts, PVM can use rsh or rexec if enabled, but firewalls often necessitate manual invocation to bypass remote execution. To configure the virtual machine, launch the PVM console on the master host with pvm (or pvm hostfile, where hostfile lists remote hostnames one per line), then use console commands like add hostname to incorporate additional hosts.

Common troubleshooting issues in PVM deployment include firewall restrictions that block rsh, rexec, or the dynamic ports used by pvmd for inter-host communication; these can be mitigated by manually starting pvmd on each host and ensuring connectivity without remote login dependencies. Clock synchronization across hosts is advisable for accurate timing in distributed tasks, achievable via network time protocols like NTP, and PVM provides the hostsync command in the console to detect and report clock differences exceeding 10 seconds, which may cause task failures if unaddressed. Handling heterogeneous binaries requires building architecture-specific executables and relying on PVM's External Data Representation (XDR) encoding for data marshaling to ensure portability; users must avoid the raw encoding mode (PvmDataRaw) in mixed environments to prevent byte-order mismatches. Log files in /tmp/pvml.<uid> on each host offer diagnostic details for resolving startup errors.

Post-installation verification involves launching the PVM console with pvm on the master host and executing conf to display the configuration, confirming added hosts and their architectures; mstat can then check host status, while ps lists any running tasks. A basic spawn test, such as compiling and running the hello example from $PVM_ROOT/examples (e.g., running make in that directory, then executing hello), verifies operation by spawning tasks across hosts and observing output, ensuring the full setup functions for parallel execution.
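Assuming the source layout described above, a typical build-and-startup session might look like the following transcript; the hostname node2 is illustrative, and console commands are shown as comments because they are typed at the pvm> prompt rather than the shell.

```shell
# Build PVM from source and bring up a two-host virtual machine.
export PVM_ROOT=$HOME/pvm3
export PATH=$PVM_ROOT/bin/$PVM_ARCH:$PATH
cd $PVM_ROOT && make        # builds pvmd3 and libpvm3.a for this architecture

pvm                         # starts the local pvmd and opens the console
# pvm> add node2            (start a pvmd on a second host)
# pvm> conf                 (list hosts, architectures, relative speeds)
# pvm> halt                 (shut down every pvmd and dissolve the VM)
```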

Supported Platforms and Languages

The Parallel Virtual Machine (PVM) primarily supports Unix variants as its core operating systems, including Linux, SunOS, HP-UX (supported through PVM version 3.4), Solaris, and IRIX 5.x on SGI systems. It also extends to shared-memory multiprocessor environments such as SUNMP and SGIMP, as well as distributed-memory systems like the Intel iPSC/860, Intel Paragon, Thinking Machines CM-5, and the Cray CS6400. Limited integration with Windows NT is available through native Win32 ports or emulation via Cygwin, enabling heterogeneous clusters that include Windows 95/98/NT machines alongside Unix hosts. However, PVM lacks native support for modern macOS versions or ARM architectures, restricting its deployment on contemporary Apple hardware or mobile and embedded systems.

In terms of hardware architectures, PVM is designed for heterogeneity, supporting x86, SPARC (on Sun systems), MIPS (on SGI and others), DEC Alpha, and 64-bit Cray systems. It transparently manages byte-order differences between big-endian and little-endian formats through XDR (External Data Representation) encoding during message passing, ensuring portability across mixed-architecture clusters without requiring user intervention.

PVM provides native application programming interfaces (APIs) in C and Fortran, allowing developers to integrate message-passing and process-control routines directly into programs. C++ support is available through wrappers that link to the underlying C library, facilitating object-oriented extensions while maintaining compatibility. Third-party extensions offer bindings for higher-level languages, including Python via the pypvm module, which enables Python scripts to interact with PVM daemons and tasks over networks, and Java through JPVM, a message-passing library that provides PVM-style functionality within Java applications for distributed MIMD computing. These bindings are not part of the official PVM distribution and may require additional configuration.

Key limitations include the absence of GPU acceleration or cloud-native features, as PVM predates widespread adoption of these technologies and focuses on CPU-based heterogeneous networks. The system was last officially tested and released as version 3.4.6 in 2009, so patches may be needed for compatibility with modern compilers such as GCC versions beyond 4.x, owing to deprecated features and changes in system libraries. Portability is aided by the distribution's architecture-detection build machinery (the aimk script and per-architecture configuration files), which automates configuration and cross-compilation for new Unix workstations and architectures, minimizing manual adjustments during deployment.

Applications and Legacy

Typical Use Cases

Parallel Virtual Machine (PVM) has been extensively utilized in educational settings to teach parallel programming concepts, particularly the single-program multiple-data (SPMD) model, in university courses since the 1990s. It provides an accessible framework for students to experiment with message passing on heterogeneous networks of workstations, facilitating hands-on learning of message-passing paradigms without requiring specialized hardware. Many academic programs integrated PVM into curricula for courses on parallel and distributed computing, emphasizing its role in demonstrating load balancing and process coordination in real-world scenarios.

In scientific computing, PVM enables the solution of large-scale problems such as physics simulations and matrix computations on clusters of workstations. For instance, it has been applied to molecular dynamics simulations, where tasks are distributed across nodes to model atomic interactions in complex systems. In physics, PVM supports simulations of heat diffusion through materials, dividing computational domains among processes to accelerate finite-difference calculations on heterogeneous setups. Image processing applications, including filtering algorithms, leverage PVM to partition large datasets across workstation clusters, improving efficiency for such data-parallel tasks.

A representative workflow in PVM involves spawning tasks for matrix multiplication across multiple hosts, where a master process uses broadcast operations to distribute submatrices to worker tasks and reduce operations to aggregate partial results into the final product. This approach, exemplified by Cannon's algorithm, initializes data packing with functions like pvm_initsend and employs group communication primitives for synchronization, ensuring efficient data flow in distributed environments.

Case studies highlight PVM's early adoption at Oak Ridge National Laboratory (ORNL) for distributed data analysis in heterogeneous network computing projects, where it facilitated collaborative simulations across diverse UNIX systems. Academic benchmarks have demonstrated PVM's scalability, achieving effective performance on over 100 nodes in workstation clusters for parallel scientific applications. PVM's advantages in heterogeneous environments stem from its ability to integrate legacy supercomputers with desktop machines, enabling cost-effective parallelism by transparently managing architectural differences through External Data Representation (XDR) encoding and dynamic host addition. This portability allows users to exploit idle resources in mixed setups, such as combining workstations with massively parallel processors, without rewriting applications for each platform.
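The group-based phases of the matrix workflow described above (join a group, synchronize, reduce partial results) can be sketched in C using PVM's group library. The group name, task count, and tag below are illustrative, and the actual submatrix computation is elided.

```c
/* Group-communication sketch: each task joins a named group, waits at a
 * barrier, and contributes its partial result to a sum reduction rooted
 * at group instance 0.  Assumes a running PVM and its group server. */
#include "pvm3.h"

#define NPROC   4                           /* illustrative group size */
#define TAG_RED 7                           /* illustrative reduce tag */

int main(void)
{
    double partial = 0.0;                   /* this task's partial result */
    int inum = pvm_joingroup("mmult");      /* instance number in group */

    /* ... compute 'partial' from this task's submatrix block ... */

    pvm_barrier("mmult", NPROC);            /* wait for all NPROC members */

    /* Sum every member's 'partial' onto the task with instance number 0. */
    pvm_reduce(PvmSum, &partial, 1, PVM_DOUBLE, TAG_RED, "mmult", 0);

    if (inum == 0) {
        /* 'partial' now holds the global sum on the root instance. */
    }

    pvm_lvgroup("mmult");                   /* leave the group */
    pvm_exit();
    return 0;
}
```

PvmSum is one of PVM's predefined reduction functions (alongside PvmMax, PvmMin, and PvmProduct); a user-supplied function can be passed instead for custom reductions.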

Decline and Successors

The decline of the Parallel Virtual Machine (PVM) began in the mid-1990s with the emergence of the Message Passing Interface (MPI) standard, which offered superior performance, broader portability, and a more modular design suited to high-performance computing environments. PVM's daemon-based architecture, while enabling dynamic process management across heterogeneous systems, introduced significant communication overhead compared to MPI's direct, lightweight implementations, making PVM less efficient for large-scale applications. Additionally, the lack of ongoing updates, with the final release, PVM 3.4.6, arriving in 2009, contributed to its obsolescence as hardware and operating systems evolved.

Today, PVM is an archived project with no active development or support, maintained solely through static distributions available via repositories like Netlib. It sees limited use in legacy scientific applications and educational settings, but compatibility issues with modern systems, such as stricter security policies and deprecated Unix features, often require significant modifications for deployment.

PVM's concepts heavily influenced MPI, which standardized key features like collective operations (e.g., broadcast and reduce) and point-to-point messaging originally popularized by PVM, leading to MPI's dominance in high-performance computing. Implementations like Open MPI further evolved these ideas for modern environments, incorporating PVM-inspired dynamic process spawning while addressing its performance limitations through optimized runtime systems. PVM's legacy endures in its pioneering support for heterogeneous networked computing, concepts that underpin modern cloud-based parallel frameworks capable of aggregating diverse resources across distributed environments. The framework has been cited in over 4,000 research publications, reflecting its foundational impact on parallel programming methodologies.
Migrating PVM applications to MPI typically involves direct substitutions, such as replacing pvm_spawn with MPI_Comm_spawn and pvm_send/pvm_recv with MPI_Send/MPI_Recv, often requiring only modest code changes for basic master-worker paradigms. Tools like the Unify system support this by layering a subset of the MPI interface on top of PVM, allowing legacy codebases to migrate incrementally.
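The master-worker substitution mentioned above can be sketched as a single MPI program in which rank 0 plays the role of the former PVM master and the remaining ranks replace spawned PVM tasks. This is a minimal illustration, not a full port: it assumes an MPI installation (compile with mpicc, launch with mpiexec), and the tag values and workload are placeholders.

```c
/* Sketch of a PVM-style master-worker exchange ported to MPI.
 * Where PVM would pvm_spawn() workers and address them by task ID,
 * MPI launches all processes together and addresses them by rank.
 * Tags 0 and 1 are arbitrary illustrative choices. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                        /* master: formerly pvm_send(tid, tag) */
        int work = 42;
        for (int w = 1; w < size; w++)
            MPI_Send(&work, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
        for (int w = 1; w < size; w++) {
            int result;
            MPI_Recv(&result, 1, MPI_INT, w, 1, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("result from worker %d: %d\n", w, result);
        }
    } else {                                /* worker: formerly pvm_recv(ptid, tag) */
        int work, result;
        MPI_Recv(&work, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        result = work * rank;               /* placeholder computation */
        MPI_Send(&result, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
```

Note that MPI's typed buffers (here MPI_INT) subsume PVM's explicit pack/unpack calls, so pvm_pkint/pvm_upkint pairs usually disappear entirely during such a port.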
