
Runtime system

A runtime system, also known as a runtime environment, is a software framework that supports the execution of computer programs by providing essential services such as memory allocation, task scheduling, synchronization, and exception handling during program execution. It acts as an intermediary between the application code and the underlying hardware or operating system, handling dynamic aspects of execution that compilers cannot fully resolve at compile time. Runtime systems are fundamental to modern programming, enabling portability across different platforms and simplifying development by abstracting low-level details like thread management and garbage collection. Key functions typically include monitoring program behavior for optimization and orchestrating concurrent activities. In high-performance computing contexts, they adapt dynamically to system status and application needs, mitigating issues like starvation, latency, contention, and overhead to improve performance and efficiency. Runtime systems vary by purpose and scope, with prominent types including language-specific runtimes that interpret or execute high-level code (e.g., the Java Runtime Environment, which manages memory, exceptions, and native method linking for Java applications), parallel runtimes like Cilk that handle multithreading and load balancing, and frameworks that monitor and orchestrate execution. Examples also encompass the Python runtime, which supports dynamic typing, and OpenMP runtimes that enable shared-memory parallelism on multicore systems. These systems have evolved significantly since the 1950s, driven by advances in parallel architectures and the need for energy-efficient execution in modern computing environments.

Fundamentals

Definition and Purpose

A runtime system (RTS), also known as a runtime environment, is a software layer that implements key aspects of a programming language's execution model, delivering essential services to programs during their execution. These services include memory allocation, scheduling, exception management, and dynamic linking, enabling the program to interact with underlying computing resources without direct exposure to hardware specifics. The primary purposes of an RTS are to facilitate portability across diverse hardware and operating systems by abstracting low-level implementation details, and to support language-specific constructs such as dynamic typing, where type information is resolved and enforced at execution time rather than during compilation. By handling these responsibilities, the RTS allows developers to focus on high-level logic while ensuring reliable and efficient program behavior in varied environments. In contrast to compile-time processes, which translate source code into executable form and resolve static elements like syntax and fixed dependencies, the RTS operates post-compilation to manage dynamic aspects of execution. For instance, it resolves unresolved symbols through mechanisms like dynamic loading of libraries and accommodates runtime behaviors such as polymorphic dispatch or conditional resource needs that cannot be predetermined statically. At a high level, the architecture of an RTS positions it as an intermediary bridge between application code and the host operating system or hardware, orchestrating resource access, error recovery, and execution to maintain program integrity and performance. Runtime systems often incorporate or cooperate with virtual machines to simulate standardized execution contexts.
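As an illustrative sketch in Python (whose interpreter doubles as a runtime system), the following shows two dynamic aspects an RTS resolves at execution time: polymorphic dispatch, where the meaning of an operation depends on runtime types, and on-demand loading of a module by name, analogous to a dynamic linker resolving a library. The function names here are illustrative, not part of any specific runtime's API.

```python
import importlib

# Polymorphic dispatch: the runtime decides what "+" means per call,
# based on the operand types it observes at execution time.
def combine(a, b):
    return a + b

print(combine(2, 3))            # integer addition -> 5
print(combine("run", "time"))   # string concatenation -> 'runtime'

# Dynamic loading: a module is resolved by name during execution,
# much as a runtime linker loads a shared library on demand.
math_mod = importlib.import_module("math")
print(math_mod.sqrt(9.0))       # symbol resolved only at run time -> 3.0
```

The same source line thus produces different behavior depending on what the runtime discovers at the moment of execution, which is precisely the class of decisions that cannot be settled statically.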

Core Components

A runtime system's core components form the foundational modules that enable the loading, execution, and management of programs during runtime. The loader is responsible for reading code from storage, resolving dependencies, and placing it into memory for execution, ensuring that the program and its libraries are properly initialized before control is transferred to the application's entry point. The scheduler manages the allocation of computational resources to threads or processes, determining the order and duration of their execution to optimize concurrency and responsiveness while coordinating with the underlying operating system. The allocator handles dynamic memory requests from the program, providing mechanisms to request, allocate, and deallocate space as needed during execution, often integrating with garbage collection to prevent fragmentation and leaks. The exception handler detects errors, propagates them up the call stack through unwinding, and invokes appropriate recovery or termination routines to maintain program integrity. These components interact seamlessly to support continuous execution; for instance, the scheduler may invoke the allocator when creating new threads to secure necessary memory, while the loader collaborates with the scheduler to sequence the startup of multiple execution units. In error scenarios, the exception handler coordinates with the allocator to release resources during stack unwinding, preventing leaks, and signals the scheduler to pause or terminate affected threads. Such collaborations ensure that resource management and error recovery occur without disrupting the overall execution flow. Runtime systems expose standard interfaces through APIs or hooks that allow applications to interact with these components, such as initialization entry points like main() or runtime-specific startup functions that configure the loader and scheduler before program logic begins. These interfaces provide hooks for custom extensions, enabling developers to register callbacks for events like memory allocation failures or scheduling adjustments.
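A small Python sketch of such lifecycle hooks: atexit registers a callback that the runtime invokes during shutdown, mirroring the C runtime's atexit(). A child interpreter is used so that the ordering imposed by the runtime is observable from the outside.

```python
import subprocess
import sys
import textwrap

# Run a tiny program in a child interpreter so we can observe the runtime
# invoking a registered shutdown hook after the application logic finishes.
program = textwrap.dedent("""
    import atexit
    atexit.register(lambda: print("hook: cleanup after main logic"))
    print("main: application logic")
""")

result = subprocess.run([sys.executable, "-c", program],
                        capture_output=True, text=True)
print(result.stdout)
```

The hook line appears after the main line even though it was registered first: the runtime, not the application, decides when shutdown callbacks fire.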
Minimal runtime systems, common in embedded environments, consist of basic components focused on essential execution support with limited overhead, such as a simple loader for bare-metal code and a lightweight scheduler for real-time constraints, often running without an underlying operating system. In contrast, full-featured runtime systems in high-level languages incorporate comprehensive implementations of all core components, supporting advanced memory management and error handling to accommodate complex, portable applications across diverse hardware.

Conceptual Relations

Runtime Environment

The runtime environment constitutes the comprehensive execution context for a program, encompassing the runtime system (RTS), associated libraries, and the dedicated execution state that collectively isolate and sustain program operation. This setup provides an abstract, application-centric habitat where code runs independently of underlying hardware variations, ensuring portability and controlled resource access. In managed languages, for instance, the Java Runtime Environment (JRE) integrates the Java Virtual Machine (JVM), class libraries, and supporting tools to form this isolated context, enabling execution without direct hardware interaction. Key features of the environment include sandboxing mechanisms for security, enforcement of resource limits, and the incorporation of environment variables to modulate behavior. Sandboxing creates a protected boundary around the program's execution, restricting access to sensitive operations like file system modifications or network calls to mitigate risks from untrusted code, as seen in virtual machine-based environments where bytecode verification prevents malicious actions. Resource limits, such as configurable stack sizes and heap quotas, prevent excessive consumption and ensure fair allocation. Environment variables, passed at startup, influence runtime decisions, such as selecting garbage collection algorithms or logging levels, thereby tailoring the execution without altering the source code. Distinct from the RTS itself, which primarily handles dynamic execution tasks like memory allocation and exception management, the runtime environment serves as the overarching habitat that embeds and extends the RTS, facilitating cross-platform consistency through standardized interfaces and virtualized execution. For example, the .NET runtime environment leverages the Common Language Runtime (CLR) within a broader environment that includes base class libraries and configuration settings, allowing applications to run uniformly across diverse hosts by abstracting platform-specific details. Virtual machine implementations commonly host this environment to enforce uniformity.
Configuration of the runtime environment occurs through mechanisms like command-line flags for immediate adjustments (e.g., setting the maximum heap size via JVM options like -Xmx) or configuration files that define persistent parameters, such as resource quotas or library paths, enabling developers to optimize for specific deployment scenarios.
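The environment-variable mechanism can be sketched in Python: a child interpreter reads a hypothetical APP_LOG_LEVEL variable (the name is invented for this example) to pick its behavior, defaulting to "info" when the host sets nothing, so the same code behaves differently under different environments without any source change.

```python
import os
import subprocess
import sys

# APP_LOG_LEVEL is a hypothetical variable used only for this sketch.
program = 'import os; print(os.environ.get("APP_LOG_LEVEL", "info"))'

# Base environment with the variable guaranteed absent.
env = {k: v for k, v in os.environ.items() if k != "APP_LOG_LEVEL"}

default = subprocess.run([sys.executable, "-c", program],
                         capture_output=True, text=True, env=env)
tuned = subprocess.run([sys.executable, "-c", program],
                       capture_output=True, text=True,
                       env={**env, "APP_LOG_LEVEL": "debug"})
print(default.stdout.strip(), tuned.stdout.strip())   # info debug
```

Real runtimes read analogous variables at startup (for example, several JVM and .NET settings are environment-driven) before any application code runs.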

Operating System Integration

Runtime systems integrate with operating systems primarily through system calls, which serve as the primary interface for requesting services such as input/output (I/O) operations, memory access, and signaling mechanisms. These system calls allow the runtime to intercept or wrap low-level OS interactions on behalf of applications, providing a layer of abstraction that simplifies development while ensuring security and portability. For instance, when an application requires I/O, the runtime intercepts the request and translates it into appropriate OS-specific invocations, handling details like buffering and error propagation to maintain consistency across executions. Runtime systems exhibit significant dependencies on the OS for fundamental operations, including process creation, inter-process communication (IPC), and hardware abstraction. The OS manages process lifecycle events, such as forking or terminating processes, which the runtime relies upon to initialize execution contexts without direct hardware access. IPC primitives, like pipes or shared memory, enable coordination between runtime-managed components and external processes, while hardware abstraction layers (HALs) shield the runtime from platform-specific details, allowing it to operate uniformly over diverse hardware. These dependencies ensure that the runtime can leverage the OS's robust handling of concurrency and resource management, such as in multi-threaded environments where kernel schedulers complement runtime components. A key challenge in runtime system design is achieving portability across different operating systems, stemming from variations in system call interfaces, such as the distinct syscall numbering and semantics between Linux (using POSIX-compliant calls) and Windows (employing the Win32 API). These differences can lead to failures or errors when porting code, as direct syscall invocations may not translate seamlessly. To mitigate this, runtime systems employ abstraction layers, such as portable wrappers or virtual syscall tables, that map platform-specific calls to a unified interface, reducing maintenance overhead and enabling cross-OS deployment without extensive rewrites.
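This wrapping relationship is visible in Python, where os.pipe, os.write, and os.read are thin runtime veneers over the host's corresponding OS primitives, while higher layers (such as the io module) add buffering and error translation on top. A minimal sketch:

```python
import os

# os.pipe obtains a kernel object through a system call; os.write and
# os.read are thin runtime wrappers over the platform's I/O primitives.
read_fd, write_fd = os.pipe()
os.write(write_fd, b"hello")    # maps onto the platform's write call
os.close(write_fd)

data = os.read(read_fd, 1024)   # maps onto the platform's read call
os.close(read_fd)
print(data)                     # b'hello'
```

The same Python code runs unchanged on Linux, macOS, and Windows because the runtime maps it onto each platform's distinct system-call interface.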
In hybrid models, runtime systems can partially supplant OS functions by implementing mechanisms in user space, exemplified by user-space threading, where the runtime manages thread scheduling and context switching independently of the kernel. This approach offloads scheduling work from the OS, improving latency and throughput in high-concurrency scenarios, as the runtime can switch threads without invoking costly kernel traps. Such models integrate with the OS only for heavyweight operations like true parallelism across cores, balancing efficiency with the need for kernel-mediated resource access.
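The idea of user-space scheduling can be sketched in Python using generators as cooperative tasks: every "context switch" is an ordinary function-level transfer with no kernel trap involved. This is a toy round-robin scheduler for illustration, not a production design.

```python
from collections import deque

def task(name, steps, log):
    """A cooperative task: it yields to voluntarily give up the CPU."""
    for i in range(steps):
        log.append(f"{name}:{i}")
        yield                       # yield control back to the scheduler

def run(tasks):
    """Round-robin user-space scheduler over generator-based tasks."""
    log = []
    ready = deque(task(name, steps, log) for name, steps in tasks)
    while ready:
        current = ready.popleft()
        try:
            next(current)           # resume the task until its next yield
            ready.append(current)   # still runnable: requeue at the back
        except StopIteration:
            pass                    # task finished; drop it
    return log

print(run([("a", 2), ("b", 3)]))    # interleaved entirely in user space
```

Real user-space threading systems (green threads, goroutine-style schedulers) apply the same principle with genuine stacks and preemption points, falling back on the kernel only when true core-level parallelism is needed.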

Practical Examples

In Managed Languages

In managed languages, the Java Virtual Machine (JVM) serves as a cornerstone runtime system, executing platform-independent bytecode compiled from Java source code through interpretation or just-in-time (JIT) compilation. The JVM handles execution by loading class files into memory and executing bytecode instructions via an interpreter or compiled native code, ensuring portability across diverse hardware and operating systems. Class loading in the JVM involves a hierarchical system of class loaders, including the bootstrap loader for core classes and user-defined loaders for application-specific classes, which enforce isolation and security at runtime. Additionally, the JVM incorporates a security manager that enforces a sandboxed execution environment, restricting access to system resources like file I/O or network connections based on policy files, thereby mitigating risks from untrusted code. The .NET Common Language Runtime (CLR) provides a similar managed execution environment for languages like C# and Visual Basic .NET, processing intermediate language (IL) code generated by the compiler. The CLR supports IL execution through JIT compilation to native machine code, enabling efficient runtime performance while abstracting hardware differences. Assembly loading in the CLR occurs via the assembly loader, which resolves dependencies and loads managed modules into memory, supporting versioning and side-by-side execution of multiple assembly versions. App domains in the .NET Framework CLR offer logical isolation boundaries within a single process, facilitating security, reliability, and the ability to unload assemblies without terminating the application, which enhances modularity in enterprise scenarios. However, AppDomains are a legacy feature and were removed in .NET Core and later versions (unified .NET 5+); in modern .NET, isolation is typically achieved through separate processes, containers, or assembly-level boundaries.
Both the JVM and CLR share key similarities in managed runtime features, such as automatic garbage collection for memory management and bytecode/IL verification to ensure type safety and prevent invalid operations before execution. The JVM's implementation distinguishes itself with advanced optimization techniques, including tiered compilation that profiles hot code paths for aggressive inlining and escape analysis to eliminate unnecessary allocations. A comparative analysis confirms that these systems exhibit comparable overall performance, with differences primarily in optimization strategies rather than fundamental capabilities. These systems enable the "write once, run anywhere" paradigm by compiling to an intermediate form that the runtime interprets or compiles on target platforms, abstracting underlying differences in hardware and OS while providing services like garbage collection for developer productivity and portability.
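CPython makes the intermediate-form idea easy to observe: its dis module disassembles the bytecode its runtime actually executes, much as javap shows JVM bytecode or ildasm shows CLR IL. The exact opcode names vary across Python versions, so the comment below is only an example.

```python
import dis

def add(a, b):
    return a + b

# The runtime never executes the source text directly; it executes a
# compiled intermediate representation, listed here opcode by opcode.
ops = [ins.opname for ins in dis.get_instructions(add)]
print(ops)   # e.g. ['LOAD_FAST', 'LOAD_FAST', 'BINARY_OP', 'RETURN_VALUE']
```

The same source compiles to the same bytecode on every platform; only the runtime's interpretation or JIT-style translation of that bytecode is platform-specific, which is the essence of the intermediate-form portability model.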

In Low-Level Languages

In low-level languages such as C and C++, runtime systems are typically lightweight libraries that provide essential support for program execution without the automated features found in higher-level environments. These systems emphasize explicit resource management by the programmer, offering direct access to hardware and operating system services while minimizing overhead. The C runtime library, exemplified by the GNU C Library (glibc), includes core functions for dynamic memory allocation via malloc and free, which allow developers to request and release memory manually. Additionally, glibc handles program startup through initialization routines like those in crt0.o, which set up the execution environment before calling main, and shutdown via functions such as atexit for registering cleanup handlers. Signal handling is another key component, with functions like signal and sigaction enabling responses to asynchronous events such as interrupts or errors. For C++, the runtime extends these capabilities through libraries like libstdc++, which builds on the C runtime and adds support for language-specific features. Libstdc++ incorporates the low-level support library libsupc++, providing mechanisms for exception handling, run-time type information (RTTI), and terminate handlers, all while relying on underlying C functions for memory and process management. In performance-critical applications, developers may implement custom runtime systems to tailor these components, such as bespoke allocators or stack unwinding logic, often using POSIX-standard setjmp and longjmp for non-local control transfers that simulate basic exception propagation without full overhead. In embedded systems, runtime systems are further minimized to suit resource-constrained environments like microcontrollers. Newlib, a compact C library, serves as a prime example, offering implementations of standard functions including malloc/free and signal handling, but with configurable stubs for system calls to integrate with no-OS bare-metal setups or real-time operating systems (RTOS).
This approach allows direct hardware interaction while avoiding the bloat of full-featured libraries like glibc. The use of such explicit systems in low-level languages grants developers fine-grained control over resources, enabling optimizations for speed and memory footprint that are infeasible in managed environments. However, this control comes at the cost of increased error-proneness, as manual memory management heightens risks of leaks, buffer overflows, and dangling pointers without built-in safeguards. These trade-offs are particularly evident in systems programming, where runtime integration with the operating system, such as through syscalls for I/O, demands careful handling to maintain reliability.
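The explicit request/release discipline can even be driven from Python through ctypes, which calls straight into the C runtime's allocator. This sketch assumes a POSIX system, where ctypes.CDLL(None) exposes the C library already loaded into the process.

```python
import ctypes

# Bind the C runtime already loaded into this process (POSIX behavior).
libc = ctypes.CDLL(None)
libc.malloc.restype = ctypes.c_void_p     # pointers are 64-bit; declare it
libc.malloc.argtypes = [ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

buf = libc.malloc(64)       # explicit allocation from the C heap
assert buf is not None      # a NULL return would surface as None
ctypes.memset(buf, 0, 64)   # the program, not the runtime, manages contents
libc.free(buf)              # explicit release; omitting this leaks the block
print("allocated and freed 64 bytes manually")
```

Unlike the managed examples above, nothing here reclaims the block automatically: a forgotten free is a leak, and a use after free is undefined behavior, which is exactly the trade-off the section describes.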

Advanced Capabilities

Memory Management Techniques

Runtime systems employ various memory management techniques to allocate and deallocate memory efficiently during program execution, balancing performance, safety, and resource usage. One primary approach is garbage collection (GC), an automatic mechanism that identifies and reclaims memory occupied by unreachable objects, preventing memory leaks without explicit programmer intervention. Mark-and-sweep is a foundational tracing GC algorithm, first described in the context of early Lisp implementations, where a marking phase traverses from root references to identify live objects, followed by a sweeping phase to free unmarked memory. This method ensures completeness by reclaiming all garbage but introduces stop-the-world pauses during collection, which can range from milliseconds to seconds depending on heap size, potentially impacting latency-sensitive applications. Pros include simplified programming and leak prevention; cons encompass non-deterministic pause times and overhead from tracing the object graph. Generational garbage collection builds on tracing algorithms by dividing the heap into generations based on object age, exploiting the weak generational hypothesis that most objects die young. Seminal work introduced generation scavenging, using a copying collector for the young generation (the nursery) and mark-sweep for older ones, achieving significant throughput improvements, such as approximately three times faster than traditional methods in early implementations, by minimizing full collections. This reduces pause times for minor collections to under 1 ms in modern systems, though major collections can still cause longer interruptions; overall, it lowers overhead by promoting only long-lived objects while enhancing allocation speed through bump-pointer techniques. Advanced variants include concurrent and low-latency garbage collectors, such as ZGC (introduced in JDK 11, with generational support in JDK 21 as of 2023) and Shenandoah, which perform most work concurrently with application threads to minimize pauses.
These achieve sub-millisecond pause times (often under 1 ms even for large heaps up to terabytes) and high throughput, enabling scalable performance in server and cloud environments as of 2025. Reference counting is another automatic technique where each object maintains a count of incoming references, decrementing on release and deallocating when the count reaches zero. Originating in early graph structure management, it enables immediate reclamation without pauses, providing predictable latency and a lower average memory footprint due to on-demand freeing. However, it incurs runtime overhead from increment/decrement operations (typically 10-20% CPU in object-heavy workloads) and fails to collect cyclic references, necessitating hybrid approaches. In Python's runtime, reference counting serves as the primary mechanism, augmented by a cyclic garbage collector using a mark-sweep variant for container objects to handle loops. For manual allocation, runtime systems provide interfaces like malloc and realloc to request memory from the operating system or internal pools, allowing fine-grained control in performance-critical code. These functions manage fragmentation, where free memory becomes scattered into unusable small blocks, through strategies such as segregated free lists or buddy systems, which coalesce adjacent blocks to maintain larger contiguous regions and sustain allocation throughput above 90% efficiency in steady-state workloads. While offering minimal overhead and no pauses, manual management risks leaks or dangling pointers if mismanaged, with fragmentation potentially increasing effective memory usage by 20-50% over time without mitigation. Specialized techniques like region-based allocation address short-lived objects by grouping allocations into hierarchical regions, deallocating entire regions at once upon scope exit, which avoids per-object overhead and fragmentation in temporary data structures.
This approach, formalized in explicit region systems, excels by providing constant-time deallocation of entire regions (O(1) per region), reducing overhead for bursty allocations common in compilers or servers, though it requires careful region scoping to prevent leaks. In terms of metrics, modern GC techniques like generational and concurrent collectors achieve allocation throughputs of hundreds to thousands of MB/s (e.g., 2-3 GB/s in JVM benchmarks) while keeping pauses under 100 ms for 4 GB heaps, but increase memory footprint by 10-30% due to collector metadata and heap headroom; reference counting maintains constant overhead with 5-15% higher CPU usage; manual methods minimize footprint but demand developer effort to sustain low fragmentation and high throughput.
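Python's split between the two automatic mechanisms can be observed directly: reference counting frees acyclic objects immediately, while an explicit gc.collect() is needed to reclaim a reference cycle. The exact count of collected objects varies slightly by version, so the sketch only asserts a lower bound.

```python
import gc

class Node:
    def __init__(self):
        self.ref = None

gc.disable()   # isolate the demonstration from automatic collections
gc.collect()   # start from a clean slate

a, b = Node(), Node()
a.ref, b.ref = b, a        # reference cycle: counts can never reach zero
del a, b                   # names dropped, yet the cycle keeps both alive

collected = gc.collect()   # the cyclic collector finds the orphaned cycle
gc.enable()
print(collected)           # at least the two Nodes (plus their __dict__s)
```

This is the hybrid design the paragraph describes: cheap, immediate reclamation for the common acyclic case, with a tracing collector as a backstop for the cycles reference counting cannot see.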

Execution Optimization Methods

Runtime systems optimize execution by dynamically transforming and adapting code to the underlying hardware and workload characteristics, thereby improving speed and efficiency without requiring upfront static analysis. These methods leverage runtime information, such as execution frequencies and data patterns, to apply targeted transformations that pure interpreters or ahead-of-time compilers cannot achieve. Key techniques include just-in-time compilation, hybrid interpretation-compilation strategies, profiling-guided optimizations like inlining, and support for vectorization and parallelism. Just-in-time (JIT) compilation is a core optimization where the runtime system translates bytecode or intermediate representations into native machine code during program execution, enabling platform-specific optimizations and adaptation to runtime behaviors. This process typically involves an initial interpretation phase for rapid startup, followed by compilation of frequently executed ("hot") code paths into optimized native code, often with multiple tiers of increasing optimization levels to balance compilation overhead and performance gains. Adaptive JIT further refines this by recompiling methods based on accumulated runtime profiles, such as branch probabilities or object types, to apply aggressive optimizations like inlining or type speculation. For instance, in managed runtimes like the JVM, the HotSpot JIT uses tiered compilation to achieve near-native performance while minimizing initial latency. Hybrid approaches combining interpretation and compilation address trade-offs between startup time and peak performance, where pure interpretation offers fast initial execution but limited optimization, while full up-front compilation delays startup due to its costs. JIT hybrids mitigate this by interpreting cold code quickly and compiling only hot regions dynamically, resulting in startup times closer to interpreters (often under 100 ms for small applications) while approaching compiled-code speeds after warmup, with peak performance improvements of 2-10x over pure interpretation in benchmarks like SPECjvm.
This balance is particularly valuable in interactive applications, where early responsiveness is critical, and the runtime dynamically decides compilation thresholds based on execution counts to optimize overall throughput. Profiling and inlining are runtime-driven techniques where the system collects execution data, such as method invocation counts and loop frequencies, to identify and optimize hot paths. Profiling instruments code minimally to gather metrics like call graphs or edge profiles without significant overhead (typically <5% slowdown), enabling decisions on transformations like method inlining, which replaces function calls with the callee's body to eliminate call overhead and expose further optimizations. Loop unrolling, another profile-guided optimization, duplicates loop bodies to reduce iteration overhead and improve instruction-level parallelism, often yielding 20-50% speedups on hot loops in empirical studies. These optimizations are applied incrementally in JIT compilers, with inlining heuristics considering factors like method size limits to avoid code bloat, ensuring compilation remains efficient even on resource-constrained systems. Vectorization and parallelism optimizations in runtime systems exploit hardware features like SIMD instructions and multi-core processors to accelerate data-parallel computations. For vectorization, the JIT compiler analyzes loops and applies auto-vectorization to generate SIMD code, such as using SSE/AVX instructions to process multiple data elements in parallel, achieving speedups of 2-8x on vectorizable workloads like numerical computations. Parallelism support involves runtime scheduling of multi-threaded execution, including thread creation, synchronization via locks or barriers, and load balancing across cores, with JIT optimizations like escape analysis to reduce locking overhead. In multi-threaded JIT scenarios, compilation policies adjust thread counts for parallel compilation phases, improving throughput by up to 30% on multi-core systems while maintaining single-threaded compatibility.
These techniques are especially effective in data-intensive applications, where runtime adaptation to hardware vector widths enhances overall efficiency.
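The counter-driven tier-up at the heart of adaptive JIT can be mimicked in a few lines of Python: a per-function call counter promotes the function from a slow "interpreted" implementation to a fast "compiled" one once it becomes hot. The threshold value and function names are invented for the sketch; it illustrates the dispatch policy, not actual code generation.

```python
def make_adaptive(slow, fast, threshold=3):
    """Dispatch to `slow` until the call count reaches `threshold`,
    then permanently switch to `fast`, like a JIT tier-up."""
    state = {"calls": 0, "impl": slow}
    def call(*args):
        state["calls"] += 1
        if state["calls"] == threshold:
            state["impl"] = fast      # the hot path gets "compiled"
        return state["impl"](*args)
    return call

def square_interpreted(x):
    """Tier 0: deliberately slow path built from repeated additions."""
    total = 0
    for _ in range(x):
        total += x
    return total

square = make_adaptive(square_interpreted, lambda x: x * x)
print([square(4) for _ in range(5)])  # [16, 16, 16, 16, 16]
```

Both tiers must agree on results, just as a JIT-compiled method must be observationally equivalent to its interpreted form; real systems add deoptimization paths for when speculative assumptions behind the fast tier turn out to be wrong.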

Historical Evolution

Origins in Early Computing

The origins of runtime systems emerged in the immediate post-World War II era, as electronic computers transitioned from manual configuration to more automated program execution support. The ENIAC, completed in 1945, represented an early pinnacle of hardware computation but required physical reconfiguration via plugs and switches for each task, lacking dedicated software for program loading or memory management. By contrast, the IBM 701, introduced in 1952 as the company's first commercial scientific computer, incorporated rudimentary runtime mechanisms such as a punched-card loader that read the initial program word into memory via a dedicated "Load" button, enabling sequential execution without constant manual intervention. This loader provided basic runtime support by facilitating program initialization and memory setup on vacuum-tube based hardware. Complementing this, the first symbolic assembler for the machine was developed around 1953, translating mnemonic instructions into machine code to streamline programming and execution, thus forming an essential precursor to modern runtime translation layers. A pivotal advancement occurred with the advent of FORTRAN in the mid-1950s, marking the first explicit language-specific runtime system. Developed by John Backus and a team at IBM, the FORTRAN compiler for the IBM 704, released in 1957, generated efficient machine code but relied on an accompanying runtime library to manage operations beyond core arithmetic, including formatted input/output (I/O) via subroutines like READ and PRINT, and mathematical functions such as square roots. This library, implemented as relocatable subroutines linked at load time, abstracted hardware-specific details, allowing programmers to focus on algorithms while the runtime handled execution-time support for data transfer and computation extensions. The system's design emphasized speed and reliability, with the runtime ensuring compatibility across IBM's 700-series mainframes and influencing subsequent high-level language implementations.
Batch processing paradigms in 1950s mainframes further integrated runtime elements for resource management and job orchestration. On systems like the IBM 704 and 709, runtime support evolved to handle batched job streams, where multiple programs were queued on tape or cards, and the system automatically sequenced their execution to optimize CPU utilization and minimize idle time between setups. This approach included primitive schedulers that allocated memory, initiated loaders for each job, and managed shared peripherals like tape drives, effectively providing runtime oversight for non-interactive workloads in scientific and business applications. Such mechanisms reduced operator intervention and enabled efficient resource sharing among queued tasks, establishing foundational patterns for multiprogramming in early commercial computing environments. Key milestones in the 1960s built on these foundations with innovations in modular execution. The Multics operating system, initiated in 1965 as a collaboration between MIT's Project MAC, Bell Telephone Laboratories, and General Electric, introduced dynamic linking as a core feature on the GE-645 computer. Unlike static linking, which resolved addresses at link time, Multics' dynamic linking resolved references at execution time using a segment-based model, allowing procedures to be loaded on demand and shared across processes without recompilation. This capability, detailed in early system overviews, enhanced flexibility in multi-user environments and served as a direct precursor to contemporary dynamic loaders in operating systems.

Developments in Modern Languages

Lisp's runtime, from the language's inception in 1958, pioneered automatic memory management through garbage collection algorithms, such as mark-and-sweep, which became foundational for handling dynamic allocation without manual deallocation; these concepts profoundly shaped managed languages in the 1980s and 1990s. Smalltalk's runtime system, developed in the 1970s, introduced efficient dynamic dispatch via message passing, allowing method resolution at execution time and influencing flexible polymorphism in later designs. This era culminated in Java's release by Sun Microsystems in 1995, which introduced the Java Virtual Machine (JVM) as a secure, portable runtime supporting bytecode execution, automatic garbage collection, and platform independence. The 2000s saw further expansions with Microsoft's Common Language Runtime (CLR), released in 2002 as part of the .NET Framework, offering a managed execution environment with cross-language interoperability, type safety, and integrated garbage collection for enterprise applications. Google's V8 engine, launched in 2008, advanced JavaScript runtimes by using just-in-time compilation to native code for high performance; it powered Node.js from 2009 onward, incorporating an event-driven, non-blocking concurrency model to handle I/O-intensive workloads efficiently on a single thread. From the 2010s to the 2020s, WebAssembly established a new paradigm for runtime systems, with its initial release in 2017 and W3C standardization in 2019, providing a portable binary instruction format for safe, near-native execution in browsers and portable environments beyond the web. Rust's runtime, introduced with the language's 1.0 release in 2015, adopted a minimalistic approach without garbage collection, relying on compile-time ownership and borrowing rules to enforce memory safety and prevent data races while enabling zero-cost abstractions. In the early 2020s, runtime systems increasingly integrated with machine learning frameworks, such as adaptive compilation in ML runtimes like Apache TVM (enhanced through 2024 with ML-based autotuning for hardware optimization). Additionally, runtimes are evolving to better support serverless computing, automating scaling and cold-start mitigation in function-as-a-service models to reduce operational overhead in distributed environments.

References

  1. [1]
    Runtime Systems - Edge Computing Lab
    A runtime system is a framework that typically monitors and orchestrates execution. There are many different types of runtime systems.
  2. [2]
    What is runtime? | Definition from TechTarget
    Dec 2, 2021 · Runtime is a stage of the programming lifecycle. It is the time that a program is running alongside all the external instructions needed for proper execution.
  3. [3]
    Runtime Systems | CARV
    ### Summary of Runtime Systems from https://www.ics.forth.gr/carv/runtime-systems
  4. [4]
    A Survey: Runtime Software Systems for High Performance Computing
    Runtimes provide adaptive means to reduce the effects of starvation, latency, overhead, and contention. Many share common properties such as multi-tasking ...
  5. [5]
    Java And The Java Runtime Environment (JRE)
    The runtime system includes: The code necessary to run Java programs, dynamically link native methods, manage memory, and handle exceptions. An ...
  6. [6]
    [PDF] Cilk: An Efficient Multithreaded Runtime System
    Cilk (pronounced “silk”) is a C-based runtime system for multi- threaded parallel programming. In this paper, we document the effi-.
  7. [7]
    Runtime System - an overview | ScienceDirect Topics
    The runtime system can also implement runtime hardening by restricting capabilities at the runtime level. ... Programming Language Pragmatics , 2009 pp 761-816.
  8. [8]
  9. [9]
    Rethinking the Language Runtime System for the Cloud 3.0 Era
    Managed languages such as Java, Python or Scala are widely used in this setting. However, while these languages can increase productivity, they are often ...
  10. [10]
    Runtime vs. Compile Time | Baeldung on Computer Science
    Jul 31, 2021 · Runtime is the period of time when a program is running and generally occurs after compile time.Missing: support | Show results with:support
  11. [11]
  12. [12]
  13. [13]
    [PDF] Advanced Hard Real-Time Operating System, The Maruti Project.
    ... minimal runtime system for the execution of a Maruti application on the bare hardware. The stand-alone environment has the foDowing attributes: • The stand ...
  14. [14]
    Java Programming Environment and the Java Runtime Environment ...
    The JRE is the software environment in which programs compiled for a typical JVM implementation can run. The runtime system includes: Code necessary to run Java ...
  15. [15]
    Runtime Environment - an overview | ScienceDirect Topics
    A runtime environment is defined as the execution context in which applications operate, providing an isolated and protected context that allows code to run ...
  16. [16]
    [PDF] Language Run-time Systems: an Overview - DROPS
    This paper provides a high-level overview of language run-time systems with a focus on execution models, support for concurrency and parallelism, memory ...
  17. [17]
    Configure Sandbox Resource Limits - Oracle Help Center
    The 20.3 release of GraalVM introduced the Sandbox Resource Limits feature that allows for the limiting of resources used by guest applications.
  18. [18]
    Common Language Runtime (CLR) overview - .NET - Microsoft Learn
    .NET provides a run-time environment called the common language runtime that runs the code and provides services that make the development process easier.
  19. [19]
    .NET glossary - .NET | Microsoft Learn
    Sep 25, 2024 · In general, the execution environment for a managed program. The OS is part of the runtime environment but is not part of the .NET runtime.
  20. [20]
    [PDF] SPIN: Seamless Operating System Integration of Peer-to ... - USENIX
    Jul 14, 2017 · SPIN is a system that achieves these goals by integrat- ing P2P into the file I/O layer in the OS. The programmer uses standard pread and pwrite ...
  21. [21]
    Porting a spacecraft monitor and control system written in Ada
    The communication subsystem uses VMS system calls to perform I/Os and uses ... 1) or does it provide a means of interfacing Ada tasks to the runtime system.
  22. [22]
    [PDF] Exascale Operating Systems and Runtime Software Report
    Dec 28, 2012 · Instead, new research to improve extreme-scale OS/R components must be integrated into future software stacks for the most capable HPC systems.
  23. [23]
    Hardware abstraction layer (HAL) overview
    Oct 9, 2025 · A HAL allows hardware vendors to implement lower-level, device-specific features without affecting or modifying code in higher-level layers.
  24. [24]
    Automated and Portable Native Code Isolation - ACM Digital Library
    doing this may interfere with the runtime system of the Java programming ... Trapping system calls issued by native code is possible, but deciding ...
  25. [25]
    [PDF] Multiverse: Easy Conversion of Runtime Systems into OS Kernels ...
    Abstract—The hybrid runtime (HRT) model offers a path towards high performance and efficiency. By integrating the OS kernel, runtime, and application, ...
  26. [26]
    [PDF] First-Class User-Level Threads SOSP '91
    Many runtime environments implement lightweight processes (threads) in user space, but this approach usually results in second-class status for threads, making ...
  27. [27]
    The Red Hat newlib C Library - Sourceware
    This reference manual describes the functions provided by the Red Hat “newlib” version of the standard ANSI C library.
  28. [28]
    Safe Systems Programming in Rust - Communications of the ACM
    Apr 1, 2021 · Rust is the first industry-supported programming language to overcome the longstanding trade-off between the safety guarantees of higher-level ...
  29. [29]
    Safe to the Last Instruction - Communications of the ACM
    Dec 1, 2011 · To perform low-level tasks like memory management, a safe language usually relies on a runtime system written in an unsafe language (e.g., C), ...
  30. [30]
    4.3 Garbage Collection Performance
    The main performance problems with garbage collections are usually either that individual GCs take too long, or that too much time is spent in paused GCs.
  31. [31]
    Generation Scavenging: A non-disruptive high performance storage ...
    Generation Scavenging is a reclamation algorithm that has no noticeable pauses, eliminates page faults for transient objects, compacts objects without resorting ...
  32. [32]
    Garbage collector design - Python Developer's Guide
    Garbage collector design. This document is now part of the CPython Internals Docs.
  33. [33]
    [PDF] Memory Management with Explicit Regions - Stanford CS Theory
    In explicit region-based memory management, each allocation specifies a region, and memory is reclaimed by destroying the region, freeing all allocated storage.
  34. [34]
    Garbage Collection and Performance - .NET | Microsoft Learn
    Jul 12, 2022 · This article describes issues related to garbage collection and memory usage. It addresses issues that pertain to the managed heap and explains how to minimize ...
  35. [35]
    [PDF] A Brief History of Just-In-Time - Department of Computer Science
    We examine the motivation behind JIT compilation and constraints imposed on JIT compilation systems, and present a classification scheme for such systems.
  36. [36]
    [PDF] Exploring Single and Multi-Level JIT Compilation Policy for Modern ...
    The remainder of this paper explores and explains the impact of different JIT compilation strategies on modern and future architectures using the HotSpot server ...
  37. [37]
    [PDF] AOT vs. JIT: Impact of Profile Data on Code Quality
    In this section we describe the results of our experiments that investigate the characteristics of current profiling-based JIT optimization systems in VMs.
  38. [38]
    An Empirical Study of Method In-lining for a Java Just-in-Time ...
    This paper describes an empirical study of online-profile-directed method inlining for obtaining both performance benefits and compilation time reductions.
  39. [39]
    Continuous path and edge profiling - ResearchGate
    ... (JIT) compilers, which must collect path profiles on the fly at runtime. In this paper, we propose an efficient online path profiling technique, called ...
  40. [40]
    [PDF] Vector Parallelism in JavaScript: Language and compiler support for ...
    Time (JIT) compiler immediately produces SIMD instructions for those operations. The JIT compiler speculates that every high-level SIMD instruction ...
  41. [41]
    [PDF] Vectorization in PyPy's Tracing Just-In-Time Compiler
    May 25, 2016 · The empirical evaluation shows that the vectorizer can gain speedups close to the theoretical optimum of the SSE4 instruction set.
  42. [42]
    [PDF] JIT Compilation Policy on Single-Core and Multi-core Machines
    Consequently, research is needed to explore the best JIT compilation policy on multi-core machines with several concurrent compiler threads. In this paper, we ...
  43. [43]
    January 14: IBM's 701 Chief Architect Nathaniel Rochester Born
    The chief architect of IBM's first scientific computer, the 701, is born. Nathaniel Rochester also developed the prototype for the IBM 702.
  44. [44]
    [PDF] Nathaniel Rochester Papers - Library of Congress
    Rochester was the architect of IBM's first general purpose ... electronic computer, the type 701, and devised and wrote the first symbolic assembly program for ...
  45. [45]
    Fortran - IBM
    Fortran was born of necessity in the early 1950s, when computer programs were hand-coded. Programmers would laboriously write out rows of zeros and ones in ...
  46. [46]
    Time-sharing | IBM
    A technique called batch processing, in which previously collected jobs were processed in a single group, was developed to shorten downtime between the ...
  47. [47]
    History - Multics
    Jul 31, 2025 · Multics design was started in 1965 as a joint project by MIT's Project MAC, Bell Telephone Laboratories, and General Electric Company's Large ...
  48. [48]
    Introduction and Overview of the Multics System
    Multics (Multiplexed Information and Computing Service) is a comprehensive, general-purpose programming system which is being developed as a research project.
  49. [49]
    [PDF] Origins of Garbage Collection
    Evaluation: This paper originates the idea of reference counting garbage collection, which is still used today. Reference counting is particularly useful for ...
  50. [50]
    Revisiting Dynamic Dispatch for Modern Architectures
    Oct 19, 2023 · Since the 1980s, Deutsch-Schiffman dispatch has been the standard method dispatch mechanism for languages like Smalltalk, Ruby, and Python.
  51. [51]
    A Brief History of Java | OpenJDK Migration for Dummies
    After rebranding due to trademark issues and negotiations with the fledgling browser company Netscape, Java was officially made public on May 23, 1995, with the ...
  52. [52]
    Celebrating 10 years of V8 - V8 JavaScript engine
    Sep 11, 2018 · V8 went open-source the same day Chrome was launched: on September 2nd, 2008. The initial commit dates back to June 30th, 2008. Prior to that ...
  53. [53]
    About Node.js
    Node.js® is a free, open-source, cross-platform JavaScript runtime environment that lets developers create servers, web apps, command line tools and ...
  54. [54]
    WebAssembly Core Specification - W3C
    Dec 5, 2019 · WebAssembly is a safe, portable, low-level code format for efficient execution, enabling high performance applications on the Web.
  55. [55]
    Understanding Ownership - The Rust Programming Language
    It enables Rust to make memory safety guarantees without needing a garbage collector, so it's important to understand how ownership works. In this chapter ...
  56. [56]
    Rise of the Planet of Serverless Computing: A Systematic Review
    This article provides a comprehensive literature review to characterize the current research state of serverless computing.