
Just-in-time compilation

Just-in-time (JIT) compilation is a dynamic compilation technique in which a program's code, typically in bytecode or an intermediate representation, is translated into native machine code at runtime, immediately before execution, enabling adaptive optimization based on observed behavior. This method contrasts with ahead-of-time (AOT) compilation by deferring the translation process until the program is running, allowing the compiler to leverage runtime information such as execution frequency and hardware specifics for targeted improvements.

The origins of JIT compilation trace back to the 1960s, with early concepts appearing in John McCarthy's 1960 paper on LISP, which described fast runtime function compilation, and the University of Michigan Executive System in 1966, which employed runtime translation for efficiency. It evolved through implementations in dynamic languages like Smalltalk in 1984 by L. Peter Deutsch and Allan Schiffman, which used JIT to boost interpreted code performance, and later in the Self programming language during the 1990s, emphasizing adaptive optimizations. The technique gained widespread prominence with the rise of Java in the mid-1990s, and further advanced with Sun Microsystems' HotSpot JVM in 1999, which integrated JIT compilers to compile frequently executed "hot" code paths, marking a shift from pure interpretation to hybrid execution models.

Key advantages of JIT compilation include enhanced performance through runtime-specific optimizations, such as inlining hot methods and applying transformations tailored to actual usage patterns, which can outperform static compilation in dynamic environments. It also promotes portability by allowing bytecode to run on any platform with a compatible JIT compiler, as seen in Java's bytecode and .NET's Common Intermediate Language, where intermediate code is just-in-time translated to native instructions during execution. However, it introduces startup overhead from initial compilation and potential runtime pauses, though modern systems mitigate this via tiered strategies that start with interpretation and progress to optimized native code. JIT remains integral to virtual machines, scripting engines like V8 in Chrome, and managed runtimes, balancing flexibility with efficiency.

Fundamentals

Definition and Basic Principles

Just-in-time (JIT) compilation is a compilation technique that translates source code, intermediate representations, or bytecode into native machine code during the execution of a program, rather than prior to execution. This approach serves as a middle ground between pure interpretation, which offers rapid startup but slower execution, and ahead-of-time compilation, which provides high performance at the cost of longer initial build and load times. By compiling at runtime, JIT balances portability across platforms with the efficiency of platform-specific optimizations.

The basic principles of JIT compilation involve a multi-stage process to minimize overhead while maximizing performance. Initially, the program or its components are interpreted to enable quick startup and handle infrequently executed code paths efficiently. As execution proceeds, the system identifies "hot paths"—frequently executed sections of code—and triggers compilation of these hotspots into native machine code. The resulting compiled code is cached in memory for reuse, preventing redundant recompilations during subsequent invocations of the same paths. JIT systems often distinguish between baseline compilers, which produce quick but less optimized code for initial use, and optimizing compilers, which apply advanced transformations to hotter code for greater efficiency.

Key concepts in JIT compilation include hotspot detection and inline caching to adapt to runtime behaviors. Hotspots are detected through mechanisms such as invocation counters, which track execution frequency and trigger compilation once thresholds are met, or sampling techniques that periodically profile the running program's call stack to identify active code regions. Inline caching addresses dynamic language features like polymorphic types by storing type assumptions and dispatch results directly within the compiled code near call sites, enabling faster resolution of method lookups or property accesses without full runtime searches on each invocation.
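The counter-based trigger and code cache described above can be illustrated with a minimal Python sketch. The threshold value, the ToyRuntime class, and the compile_to_native stand-in are illustrative assumptions for this sketch, not the interface of any real virtual machine.

    # Sketch of threshold-triggered JIT tier-up (illustrative; not a real VM API).
    HOT_THRESHOLD = 1000  # invocations before a function is considered "hot"

    class ToyRuntime:
        def __init__(self):
            self.counters = {}    # per-function invocation counts
            self.code_cache = {}  # compiled-code cache, reused on later calls

        def compile_to_native(self, fn):
            # Stand-in for a real backend: returns the function unchanged,
            # standing in for generated native code.
            print(f"compiling hot function {fn.__name__}")
            return fn

        def call(self, fn, *args):
            if fn in self.code_cache:              # fast path: cached "native" code
                return self.code_cache[fn](*args)
            n = self.counters.get(fn, 0) + 1
            self.counters[fn] = n
            if n >= HOT_THRESHOLD:                 # hot: compile once, cache result
                self.code_cache[fn] = self.compile_to_native(fn)
            return fn(*args)                       # cold path: "interpret"

    rt = ToyRuntime()
    for i in range(2000):
        rt.call(abs, -i)   # crosses the threshold after 1000 calls, compiled once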

Comparison to Other Execution Models

Ahead-of-time (AOT) compilation translates source code or intermediate representations into native machine code prior to program execution, enabling immediate execution without runtime compilation overhead. This model, exemplified by native binaries in languages like C++, offers advantages in startup speed, as no initial interpretation or compilation is required, and lower memory usage during execution since no JIT compiler needs to be loaded. However, AOT requires platform-specific builds, leading to portability challenges across different architectures or operating systems, as the generated code is statically bound to the target environment.

In comparison, just-in-time (JIT) compilation performs translation at runtime, starting with interpretation of bytecode or intermediate code before compiling frequently executed portions to optimized machine code. While this introduces startup latency from the initial interpretation phase and compilation time, JIT achieves superior steady-state performance through runtime profiling and adaptive optimizations tailored to actual execution patterns, such as branch probabilities observed during program runs. JIT also enhances portability by allowing a single bytecode artifact to be dynamically compiled for diverse hardware and OS configurations at deployment time, avoiding the need for multiple pre-built binaries.

Pure interpretation executes code directly via an evaluator that translates instructions on-the-fly without generating machine code, prioritizing simplicity of implementation and maximum portability across platforms due to the absence of architecture-specific code generation. This approach incurs significant overhead, as each instruction requires repeated decoding and execution in a dispatch loop, resulting in performance that is typically orders of magnitude slower than compiled code for compute-intensive tasks. JIT addresses this limitation by initially interpreting code like a pure interpreter but progressively compiling hot regions to native machine code, balancing the simplicity of interpretation with the speed of compilation and yielding performance closer to AOT in long-running applications.

Within JIT systems, hybrid models vary in granularity, such as method-based and trace-based approaches, each trading off startup responsiveness against peak efficiency. Method-based JIT identifies and compiles entire functions or methods once they reach a hotness threshold, providing straightforward optimization scopes and quicker initial compilation for modular codebases, though it may miss inter-method optimizations. Trace-based JIT, conversely, records and compiles linear sequences of frequently executed instructions (traces) that span multiple methods, enabling more aggressive optimizations like inlining across method boundaries but requiring additional runtime machinery for trace detection and stitching, which can delay warmup. Qualitatively, method-based JIT exhibits a steeper initial performance ramp-up for short or method-centric workloads, while trace-based JIT demonstrates smoother scaling to high throughput in steady-state scenarios dominated by repetitive paths, such as loops or pipelines.
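A toy Python sketch of the trace-based idea follows: once a loop backedge becomes hot, the interpreter records the linear sequence of operations actually executed (spanning helper-function boundaries) and replays that recorded trace directly on later iterations. The threshold, helper names, and recording scheme are illustrative assumptions, far simpler than a real tracing JIT.

    # Toy trace-based JIT: record the ops a hot loop actually executes
    # (across helper-function boundaries), then replay them as one unit.
    HOT_BACKEDGE = 5

    def double(x): return x * 2     # helper "methods" the loop calls
    def inc(x):    return x + 1

    def run_loop(n):
        ops = [double, inc]         # the loop body, as a sequence of calls
        backedges, compiled = 0, None
        acc = 0
        for _ in range(n):
            if compiled is not None:
                acc = compiled(acc)          # steady state: run the trace
                continue
            for op in ops:                   # cold path: interpret, op by op
                acc = op(acc)
            backedges += 1
            if backedges == HOT_BACKEDGE:    # loop is hot: record one iteration
                recorded = tuple(ops)        # linear trace spans both helpers
                def compiled_trace(x, _trace=recorded):
                    for op in _trace:        # replay without re-profiling
                        x = op(x)
                    return x
                compiled = compiled_trace
        return acc

    print(run_loop(20))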

Historical Development

Origins in Early Systems

The conceptual roots of just-in-time (JIT) compilation trace back to the late 1950s and early 1960s, when researchers began exploring dynamic translation techniques to balance interpretative flexibility with execution efficiency in early computing environments. The earliest published description of a JIT-like mechanism appeared in John McCarthy's seminal 1960 paper on LISP, where he proposed compiling Lisp functions into machine code at runtime during evaluation, rather than relying solely on interpretation. This approach addressed the need for efficient computation of recursive symbolic expressions on limited hardware like the IBM 704, enabling the interpreter to generate native code for frequently used subexpressions on the fly. McCarthy's design laid a foundational idea for runtime code generation, influenced by the demands of dynamic typing in Lisp, which required flexible evaluation without prior static analysis. An early implementation of runtime translation appeared in the University of Michigan Executive System in 1966, which employed dynamic translation to improve efficiency in a time-sharing environment.

In the 1970s and 1980s, dynamic compilation concepts evolved further in object-oriented systems, particularly with the development of Smalltalk at Xerox PARC. Smalltalk-80, released in 1980, later gained a dynamic compiler implemented in 1984 by L. Peter Deutsch and Allan Schiffman. This compiler translated bytecode to native machine code lazily upon execution, supporting the language's exploratory and dynamic nature, including late binding and garbage collection. These innovations maintained responsiveness in resource-constrained systems.

Parallel developments in portable language implementations highlighted JIT precursors through interpretive optimizations. The UCSD p-System, released in 1978, employed a pseudo-code (p-code) interpreter for efficient execution on diverse microcomputers like the Apple II and IBM PC. The system included optional tools for translating p-code to native code at load time, which helped reduce interpretive overhead in a portable environment, though not strictly during execution. This design was motivated primarily by the challenge of portability across heterogeneous hardware.

The late 1980s saw more explicit JIT experimentation in prototype-based languages, exemplified by the Self programming language developed at Xerox PARC. In their 1987 OOPSLA paper, David Ungar and Randall B. Smith introduced Self as a dynamically typed, object-oriented language that relied on runtime compilation to achieve high performance through method specialization. Self's initial implementation included a bytecode interpreter augmented with a JIT compiler that generated optimized native code for hot methods during execution, addressing the interpretive slowdowns inherent in its exploratory, garbage-collected environment. This work built on Smalltalk's legacy but emphasized adaptive optimizations tailored to dynamic typing, marking a key step toward formal JIT systems.

Key Milestones and Adoption

The 1990s saw pivotal breakthroughs in JIT compilation, particularly with the integration of adaptive techniques into production virtual machines. Sun Microsystems announced the HotSpot Java Virtual Machine in April 1999, introducing runtime profiling and dynamic optimization that selectively compiled frequently executed code paths to native instructions, marking a shift from static to adaptive JIT strategies in enterprise Java environments. This innovation built on earlier JIT experiments in Java but gained widespread attention for its role in enhancing JVM performance.

In the 2000s, JIT compilation expanded into mainstream runtime environments and mobile platforms, driven by the need for efficient execution of dynamic languages. Microsoft introduced the initial JIT compiler in the .NET Common Language Runtime with the .NET Framework 1.0 release in February 2002, enabling just-in-time translation of intermediate language to machine code for improved application performance across Windows ecosystems. This was followed by the RyuJIT compiler in 2013 for 64-bit architectures, which became fully integrated by 2018, offering faster compilation and better code quality for .NET applications. Google's launch of the V8 JavaScript engine in September 2008 for the Chrome browser revolutionized web performance by employing a high-performance JIT that compiled JavaScript directly to native code without an intermediate interpretation step. In mobile computing, Android's Dalvik virtual machine, introduced in 2007, initially relied on interpretation but added JIT compilation in Android 2.2 (Froyo) in May 2010, with the transition to the Android Runtime (ART) in 2014 incorporating hybrid JIT and ahead-of-time compilation for better battery efficiency and speed.

The 2010s and 2020s brought further innovations, particularly in cross-platform and specialized domains, with JIT adapting to emerging paradigms like WebAssembly and machine learning. The Wasmtime runtime, initiated in 2019 under the Bytecode Alliance, emerged as a key JIT-based engine for WebAssembly, providing secure and efficient execution of wasm modules across diverse hardware. Oracle's GraalVM, released in version 1.0 in April 2018, introduced a polyglot JIT compiler capable of optimizing code from multiple languages, including Java, JavaScript, and Ruby, on a shared substrate, facilitating seamless interoperability in cloud-native applications. In the 2020s, Apple's JavaScriptCore engine in Safari continued advancing JIT capabilities, with optimizations like an enhanced baseline JIT and new bytecode formats introduced in recent updates to reduce memory usage and improve startup times for web applications. Similarly, the Cranelift code generation backend, integrated into Rust's ecosystem around 2020, provided a fast and secure JIT alternative for WebAssembly compilation in projects like Wasmtime, emphasizing verifiable code generation for sandboxed execution. For AI and machine learning, the Apache TVM framework, originating as a research project in 2017 and entering the Apache incubator in 2019, leveraged JIT to optimize tensor computations across heterogeneous hardware like GPUs and TPUs. A notable recent milestone is the inclusion of an experimental JIT compiler in CPython 3.13, released in October 2024, marking Python's adoption of JIT techniques to improve performance for this widely used dynamic language.

The widespread adoption of JIT compilation has been propelled by the explosive growth of web and mobile applications, where dynamic languages demand runtime flexibility without sacrificing speed. The proliferation of JavaScript in browsers and Java/.NET in enterprise software during the 2000s, combined with mobile OS dominance by Android and iOS in the 2010s, necessitated JIT to bridge interpretation overhead and native performance. More recently, the rise of WebAssembly and AI workloads has extended JIT's reach, enabling portable, high-performance code execution in edge environments and specialized accelerators, with frameworks like TVM addressing the need for optimized ML inference.

Design and Implementation

Core Mechanisms and Triggers

Just-in-time (JIT) compilation operates through a structured pipeline that converts high-level bytecode or intermediate code into native machine code during program execution. The process typically starts with parsing the input bytecode into a platform-independent intermediate representation (IR), which enables initial analysis and optimizations such as constant folding or dead code elimination. This IR is then transformed by a backend compiler into target-specific assembly code, which is assembled into executable machine code optimized for the underlying hardware architecture. In the HotSpot JVM, for instance, the C2 compiler uses a high-level IR known as the Ideal Graph to represent both data and control flow in static single assignment (SSA) form before generating architecture-specific code.

JIT systems distinguish between basic JIT compilation, which produces quick but less optimized code, and JIT with deoptimization, which allows for more aggressive optimizations under runtime assumptions that can later be invalidated. In the latter approach, compilation assumes stable program behavior, such as unchanging class hierarchies or type stability, generating code that may trap if assumptions fail; deoptimization then unwinds the optimized frames and restarts execution in a safer mode, such as the interpreter or a lower optimization tier. This mechanism, as implemented in systems like the JVM, enables speculative optimizations without permanent commitment to potentially suboptimal code paths.

Compilation is initiated by specific triggers designed to detect frequently executed "hot" code paths while minimizing overhead. Threshold-based triggers monitor invocation counts or loop iterations, compiling a method once it surpasses a predefined limit, such as 10,000 invocations in the HotSpot server VM's tiered compilation mode. Sampling-based triggers periodically sample running threads' program counters to identify hotspots without invasive counters, enqueueing promising methods for compilation. Event-driven triggers, such as method entry or exit events, increment counters in the interpreter to track execution frequency. These mechanisms ensure compilation focuses on performance-critical code, with tiered systems like HotSpot's progressing from quick C1 compilations (low threshold, around 200 invocations) to thorough C2 optimizations.

Generated machine code requires careful management to ensure safe and efficient execution. JIT compilers allocate dedicated executable pages, often in a segregated code cache, to store the compiled code separate from data memory, adhering to hardware protections like non-executable stacks to mitigate injection risks. Platform-specific generation involves architecture-tuned backends that emit instructions optimized for features like SIMD extensions. If deoptimization occurs due to invalidated assumptions—such as a newly loaded method override or an exceptional condition—the runtime transfers control from the compiled code to an interpreter or recompiles with updated profiles, preserving program correctness. In HotSpot, the code cache manages this by marking invalid code as non-entrant and freeing resources as needed.
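The guard-and-deoptimize cycle can be sketched in a few lines of Python. The Deopt exception, the code cache dictionary, and the specialized integer path are illustrative assumptions for this sketch; real VMs unwind stack frames rather than raise exceptions.

    # Sketch of speculation + deoptimization: compiled code carries a guard on
    # the assumption made at compile time; if the guard fails, the cached code
    # is marked non-entrant and execution falls back to the interpreter.

    class Deopt(Exception):
        pass

    def interpret_add(a, b):
        return a + b                      # generic, always-correct slow path

    def compile_int_add():
        def native_add(a, b):
            if type(a) is not int or type(b) is not int:
                raise Deopt()             # guard: speculated types invalidated
            return a + b                  # specialized integer fast path
        return native_add

    code_cache = {"add": compile_int_add()}   # "compiled" entry point

    def call_add(a, b):
        code = code_cache.get("add")
        if code is not None:
            try:
                return code(a, b)
            except Deopt:
                code_cache["add"] = None  # mark non-entrant; recompile later
        return interpret_add(a, b)        # deoptimized: back to the interpreter

    print(call_add(2, 3))        # fast path
    print(call_add("a", "b"))    # guard fails -> deopt -> interpreter
    print(call_add(2, 3))        # runs interpreted until recompilation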

Optimization Techniques

Just-in-time (JIT) compilers employ profiling-driven optimizations to enhance code quality by leveraging runtime data collected during program execution. These techniques involve lightweight sampling profilers that identify frequently executed code paths, or "hot spots," enabling targeted recompilation with specialized optimizations. For instance, type specialization refines generic code based on observed runtime types, replacing dynamic type checks with direct operations for common cases, as demonstrated in trace-based JIT systems for dynamic languages where runtime type profiles guide the generation of type-specific machine code. Branch prediction is improved through feedback loops that record execution histories, allowing the compiler to inline likely branches and reduce misprediction penalties in loops. Inlining decisions are informed by dynamic call graphs constructed from profile data, prioritizing method calls at hot sites to eliminate overhead and expose further optimizations, as in Java JIT frameworks where profile-directed inlining at higher optimization levels yields significant speedups.

Advanced optimization methods in JIT compilers focus on memory and control-flow efficiencies through speculative and analytical techniques. Escape analysis determines whether newly allocated objects remain local to a method or thread, enabling allocation elimination by promoting them to the stack or registers, thus reducing garbage collection pressure; this is often combined with partial evaluation in tracing JITs to remove unnecessary allocations and type checks along hot traces. Speculative optimizations assume runtime behaviors, such as monomorphic call sites where a virtual method is invoked on a single receiver type, replacing dynamic dispatch with direct calls guarded by runtime checks; if assumptions fail at polymorphic sites, deoptimization restores interpretive execution. These guards extend to broader speculations, like assuming constant values or loop invariants, with fallback mechanisms ensuring correctness.

Adaptive compilation strategies in JIT systems use tiered approaches to balance compilation overhead and performance. Baseline compilers produce quick, unoptimized code for rapid startup, while full optimizers apply aggressive transformations to hot code; tier-up mechanisms promote methods based on invocation counts or profile metrics, and tier-down deoptimizes invalidated code. Multi-level policies explore single- and multi-tier configurations to optimize for modern hardware, achieving better steady-state performance than single-tier systems by dynamically adjusting optimization intensity. Modern implementations, such as LuaJIT 2.1's trace-based compiler, extend speculation to "everything" along traces—including types, aliases, and allocations—with inline guards for efficient recovery, enabling near-native speeds for dynamic languages. In the 2020s, MLIR-based JIT frameworks facilitate modular optimizations by representing code at multiple abstraction levels, allowing dialect-specific passes for domain-targeted enhancements like tensor operations in machine learning runtimes.
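Profile-guided type specialization can be modeled compactly in Python: a call site records the argument types it observes during warmup, and if it stays monomorphic, installs a guarded integer-only fast path that tiers back down when the assumption breaks. The CallSite class, the 100-call warmup window, and the use of operator.add are illustrative assumptions.

    # Sketch of profile-guided type specialization with a guarded fast path.
    import operator

    class CallSite:
        def __init__(self, generic):
            self.generic = generic          # dynamic, always-correct version
            self.seen_types = set()         # runtime type profile
            self.specialized = None
            self.calls = 0

        def __call__(self, a, b):
            self.calls += 1
            if self.specialized is not None:
                return self.specialized(a, b)
            self.seen_types.update((type(a), type(b)))
            if self.calls == 100 and self.seen_types == {int}:
                # Monomorphic so far: specialize to ints, keep a cheap guard.
                def int_only(a, b):
                    if type(a) is int and type(b) is int:
                        return operator.add(a, b)   # direct integer operation
                    self.specialized = None          # polymorphic: tier down
                    return self.generic(a, b)
                self.specialized = int_only
            return self.generic(a, b)

    site = CallSite(lambda a, b: a + b)
    for i in range(200):
        site(i, i)       # warms up, specializes after 100 int-only calls
    site("x", "y")       # guard fails, call site reverts to the generic path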

Performance Analysis

Benefits and Speedups

Just-in-time (JIT) compilation delivers significant runtime efficiency gains over pure interpretation, especially in long-running applications where the upfront compilation costs are offset by repeated execution of optimized code. Early benchmarks using the SPECjvm98 suite showed speedups of 2.8x to 7.7x compared to interpreters across various workloads on multiprocessor systems. More recent evaluations in dynamic languages confirm this range; for example, PyPy's JIT compiler achieves a geometric mean speedup of 2.8x over the CPython interpreter on a diverse set of benchmarks, with greater benefits in compute-intensive tasks.

Following the warmup phase, during which frequently executed code paths are identified and compiled, JIT approaches or surpasses native execution speeds by leveraging runtime profiling for aggressive optimizations unavailable to static compilers. This adaptive nature proves particularly advantageous for dynamic languages, where type information and execution patterns emerge only at runtime; PyPy, for instance, outperforms the non-JIT CPython by exploiting just-in-time type specialization and optimizations tailored to observed behaviors. In contrast to ahead-of-time (AOT) compilation, JIT supports smaller deployment artifacts in dynamic environments by enabling distribution of compact, platform-independent bytecode rather than bulky native binaries optimized for specific architectures.

Real-world implementations highlight these gains in popular engines. Google's V8 JavaScript engine, through its TurboFan optimizing JIT, yields up to 4.35x faster performance on the JetStream benchmark suite relative to bytecode interpretation, driving efficient execution of web applications. In 2020s AI workloads, JIT techniques in TensorFlow's XLA compiler fuse operations and partition graphs for parallelism, reducing overhead and boosting throughput in machine learning inference and training on accelerators like GPUs.
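The warmup-then-speedup pattern is easy to observe with Numba, a third-party JIT for numeric Python (this sketch assumes the numba package is installed; absolute timings vary by machine).

    # Measuring JIT warmup vs. steady state with Numba.
    import time
    from numba import njit

    @njit
    def sum_squares(n):
        total = 0
        for i in range(n):
            total += i * i
        return total

    t0 = time.perf_counter()
    sum_squares(10_000_000)          # first call: compile + run (warmup)
    t1 = time.perf_counter()
    sum_squares(10_000_000)          # second call: cached native code only
    t2 = time.perf_counter()
    print(f"first call (incl. compilation): {t1 - t0:.3f}s")
    print(f"steady state:                   {t2 - t1:.3f}s")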

Limitations and Overhead

One significant limitation of just-in-time (JIT) compilation is the warmup phase, during which interpretation occurs before hot code paths are identified and compiled, leading to initial execution slowdowns. In the Java Virtual Machine (JVM), for instance, this warmup can account for up to 33% of execution time in data analytics workloads like Hadoop Distributed File System (HDFS) sequential reads of 1 GB files, with interpreter overheads reaching 15 ms per operation compared to 65 μs for compiled code. Such delays, often in the range of tens to hundreds of milliseconds during startup and early execution, arise because compilation is deferred until runtime profiling gathers sufficient data.

JIT systems also incur substantial memory overhead from maintaining multiple versions of compiled code, including unoptimized, optimized, and deoptimized variants for different execution profiles. This can result in code caches consuming hundreds of megabytes, significantly exceeding the footprint of a single ahead-of-time (AOT) binary, as the runtime must store both the source bytecode and the generated machine code alongside profiling metadata. In resource-constrained environments, this overhead exacerbates peak memory usage, potentially leading to increased paging or out-of-memory conditions not seen in AOT-compiled applications.

Several factors contribute to JIT's performance constraints. Compilation is CPU-intensive, often "stealing" cycles from the application itself, which can cause latency spikes and degrade quality of service, particularly in multi-threaded or containerized settings where resources are shared. Interactions with garbage collection (GC) further amplify overheads, as aggressive JIT production of native code increases memory pressure, triggering more frequent GC pauses that hinder overall throughput. JIT underperforms notably in short-lived programs or scenarios with cold code paths, where the compilation investment never amortizes; for example, in benchmarks lasting mere seconds, warmup can dominate execution time, making interpreted or AOT approaches preferable.

Compared to AOT compilation, JIT exhibits higher peak memory demands due to its dynamic nature, though it avoids upfront build costs. Code generation in JIT is inherently platform-dependent, tailoring optimizations to the specific CPU architecture and operating system at runtime, which introduces variability across environments like x86 versus ARM. Recent studies post-2020 have critiqued JIT's energy cost on mobile devices, highlighting how runtime compilation and associated memory traffic increase power draw in battery-constrained scenarios, often favoring AOT for sustained low-energy operation.
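The amortization argument behind the short-lived-program caveat reduces to simple arithmetic: compilation pays off only after enough calls recoup its one-time cost. The figures in this Python sketch are illustrative assumptions, not measurements.

    # Back-of-the-envelope break-even point for JIT compilation.
    compile_cost_ms     = 50.0     # one-time cost to JIT-compile a method
    interpreted_call_ms = 0.020    # cost per interpreted invocation
    compiled_call_ms    = 0.002    # cost per compiled invocation

    saving_per_call = interpreted_call_ms - compiled_call_ms
    break_even_calls = compile_cost_ms / saving_per_call
    print(f"break-even after ~{break_even_calls:,.0f} calls")
    # ~2,778 calls here: a script that invokes the method a few hundred times
    # never recovers the compile cost, while a long-running server recovers
    # it almost immediately.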

Security Considerations

Vulnerabilities in JIT Processes

Just-in-time (JIT) compilation introduces unique security vulnerabilities due to its dynamic code generation at runtime, which can be exploited to bypass protections like address space layout randomization (ASLR) and data execution prevention (DEP). One prominent technique is JIT spraying, where attackers force the JIT compiler to generate large amounts of predictable executable code containing shellcode fragments by repeatedly executing loops with attacker-controlled constants, enabling reliable control-flow hijacking in environments like JavaScript engines. This technique was first demonstrated against Flash Player in 2010 and later extended to ARM architectures, highlighting its portability across JIT-based virtual machines.

Speculative optimizations in JIT compilers, which assume stable runtime types based on profiling to generate efficient code, can lead to type confusion vulnerabilities if the assumptions fail, allowing attackers to treat objects as incompatible types and achieve arbitrary read/write primitives for remote code execution (RCE). For instance, flaws in type inference during optimization tiers have enabled exploits where corrupted type maps mislead the compiler into generating unsafe memory accesses. Return-oriented programming (ROP) exploits further leverage JIT-compiled regions by chaining short instruction sequences (gadgets) ending in returns within the dynamically generated code, evading code-signing and write-XOR-execute (W^X) policies that are more effective against static binaries.

Runtime risks in JIT processes include memory corruption through unsafe code generation, such as buffer overflows in the intermediate representation (IR) during compilation, which can produce malicious machine code that executes arbitrary instructions. Attackers may corrupt the IR by exploiting bugs in the compiler's parsing or optimization passes, leading to overflows that alter generated binaries. Additionally, side-channel leaks arise from profiling data used for optimization decisions; non-uniform input distributions can induce timing variations correlated with secrets, as the JIT's compilation choices (e.g., inlining or loop unrolling) create observable execution-time differences exploitable for information disclosure.

In the 2010s, Chrome's V8 engine faced multiple JIT-related exploits, including JIT spraying to place shellcode in executable memory and type confusion bugs in the optimizer leading to RCE, as seen in exploitation contests where attackers demonstrated sandbox escapes via these vectors. The dynamic nature of JIT enables attacks that manipulate code generation on-the-fly, unlike ahead-of-time (AOT) compilation where code is fixed pre-execution, reducing opportunities for runtime tampering but lacking JIT's adaptive optimizations. Recent 2020s developments in WebAssembly (Wasm) highlight ongoing JIT vulnerabilities, with sandbox escapes achieved through type confusion and memory corruption in Wasm's JIT backends, allowing attackers to break isolation boundaries in browsers despite Wasm's memory safety guarantees. For example, flaws in Wasm validation and compilation have enabled exploits that corrupt JIT-generated code to access host resources outside the sandbox.

Protective Measures and Best Practices

To mitigate security risks in just-in-time (JIT) compilation, sandboxing techniques enforce memory isolation by implementing W^X (write XOR execute) policies, which prevent memory pages from being simultaneously writable and executable, thereby blocking code-injection attacks during dynamic code generation. These policies are enforced at the operating system level or within virtual machines (VMs), where JIT code is allocated to read-only executable regions after compilation, as seen in browser engines like Firefox that enable W^X for all JIT code to reduce the attack surface. In capability-based systems within VMs, such as those used in JavaScript engines, access controls limit JIT operations to predefined capabilities, ensuring that compiled code cannot arbitrarily modify sensitive resources or escalate privileges.

Hardening techniques further strengthen JIT security by obfuscating exploitable patterns in generated code. Constant blinding, for instance, XORs immediate constants with runtime-generated masks during optimization passes, preventing attackers from predicting or crafting gadgets for JIT spraying attacks, as implemented in compilers like GraalVM's. Verified compilation paths, which formally prove the correctness of code generation using proof assistants applied to JIT backends, ensure that optimizations do not introduce unintended behaviors or vulnerabilities, as demonstrated in formally verified JIT implementations on x86 architectures. Additionally, address space layout randomization (ASLR) applied to code caches randomizes the placement of JIT-generated code, complicating return-oriented programming (ROP) exploits by making gadget addresses unpredictable across executions.

Best practices for JIT deployment include configuring tiered compilation with limits on optimization levels to balance performance and security, reducing the complexity of generated code that could expose optimization bugs exploitable in spraying attacks. Regular fuzzing of JIT compilers, using coverage-guided tools like FuzzJIT or Fuzzilli, systematically uncovers bugs in optimization pipelines by generating adversarial inputs that trigger edge cases in JavaScript engines, leading to the discovery of dozens of previously unknown vulnerabilities. Adopting standards like WebAssembly's minimal system interface, the WebAssembly System Interface (WASI), introduced in preview in 2019, minimizes the attack surface by restricting host interactions to a narrow, capability-mediated API that isolates JIT-compiled modules from the broader system.
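The idea behind constant blinding can be shown in a short Python sketch: rather than embedding an attacker-supplied constant verbatim in generated code (where its byte pattern could serve a spraying attack), the compiler embeds the XOR-masked value and restores it at runtime. The emit_blinded_constant helper is an illustrative stand-in, not GraalVM's actual interface.

    # Sketch of constant blinding: the attacker-chosen byte pattern never
    # appears verbatim in the "generated code"; only the blinded value does.
    import secrets

    def emit_blinded_constant(untrusted_constant: int):
        mask = secrets.randbits(32)               # fresh mask per compilation
        blinded = untrusted_constant ^ mask       # this is what gets embedded
        def load_constant():
            return blinded ^ mask                 # runtime XOR restores value
        return load_constant

    load = emit_blinded_constant(0x3C909090)      # attacker-chosen bytes
    assert load() == 0x3C909090                   # semantics preserved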

Applications

In Virtual Machines

Just-in-time (JIT) compilation plays a central role in virtual machines (VMs) such as the Java Virtual Machine (JVM) and the Common Language Runtime (CLR), where it translates platform-independent bytecode into optimized native code at runtime to balance startup latency with peak performance. In these environments, JIT integrates deeply with the VM's execution engine, leveraging verified bytecode and runtime profiling to enable aggressive optimizations while maintaining portability across hardware and operating systems.

In the JVM, the HotSpot implementation employs a bytecode interpreter for initial execution, which gathers profiling data on method invocation frequency and branch probabilities to trigger JIT compilation. This is complemented by two tiered JIT compilers: the client compiler (C1), which performs lightweight, fast compilations suitable for quick startup and short-running applications, and the server compiler (C2), which applies more comprehensive optimizations like inlining and escape analysis for long-running workloads. GraalVM, an advanced JVM variant, enhances these capabilities with partial escape analysis, an optimization that analyzes object lifetimes to eliminate unnecessary heap allocations by replacing them with stack-based scalars or registers when objects do not escape their compilation unit. This technique, rooted in control-flow-sensitive analysis, significantly reduces memory pressure in object-heavy applications.

The CLR, powering .NET applications, utilizes RyuJIT as its primary JIT compiler, which incorporates global value numbering to eliminate redundant computations by assigning value numbers to expressions across basic blocks and propagating them for reuse. RyuJIT's design emphasizes cross-platform code generation for x64, x86, and ARM64 architectures, ensuring consistent optimizations regardless of the host OS. In the cross-platform CoreCLR runtime, tiered compilation is enabled by default, starting with quick, minimally optimized "tier-0" compilations for low-latency startup and promoting frequently called methods to fully optimized "tier-1" code based on runtime profiles, which adapts to diverse workloads like server applications.

Virtual machines benefit from bytecode verification, performed prior to JIT compilation, which statically checks type safety, stack integrity, and control flow to prevent invalid operations, allowing the JIT to generate code under the assumption of a secure execution environment without additional runtime checks. This verification enables safer, more aggressive JIT optimizations in both the JVM and CLR by confirming that bytecode adheres to the VM's operational constraints. Additionally, JIT compilers synergize with garbage collection (GC) mechanisms; for instance, escape analysis in the JIT informs GC by identifying non-escaping objects for on-stack allocation, reducing heap traffic and enabling concurrent GC pauses with minimal interference from compiled code. In HotSpot, this integration allows the JIT to embed GC barriers and metadata in generated code, facilitating efficient root scanning during collection cycles.

As of 2025, ongoing developments like Project Valhalla introduce value types and primitive classes to the JVM, impacting JIT compilation by enabling specialized code paths that eliminate object header overhead and autoboxing, thus allowing compilers like C2 and Graal to perform finer-grained scalar replacements and inline primitives directly, though full integration remains experimental with incomplete optimization coverage for legacy codebases.
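The effect of escape analysis with scalar replacement can be shown by hand in Python: the temporary object in the first version never leaves its frame, so an optimizing JIT can conceptually rewrite it into the second, allocation-free version. The rewrite below is a manual illustration of what the compiler does internally, not compiler output.

    # Hand-applied illustration of escape analysis + scalar replacement.
    class Point:
        def __init__(self, x, y):
            self.x, self.y = x, y

    def before(a, b):
        p = Point(a, b)          # allocation: p never escapes this frame
        return p.x * p.x + p.y * p.y

    def after(a, b):             # scalar replacement: no object, no GC load
        px, py = a, b
        return px * px + py * py

    assert before(3, 4) == after(3, 4) == 25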

In Scripting and Web Technologies

Just-in-time (JIT) compilation plays a pivotal role in modern JavaScript engines, enabling high-performance execution of dynamic code in browsers. Google's V8 engine, used in Chrome and Node.js, employs a multi-tiered approach with Ignition as a bytecode interpreter that generates low-level bytecode for initial execution, followed by TurboFan as the optimizing compiler that transforms hot paths into efficient machine code using a sea-of-nodes intermediate representation for advanced optimizations like inlining and loop optimization. Similarly, Mozilla's SpiderMonkey engine, powering Firefox, utilizes a baseline compiler to quickly produce unoptimized machine code from bytecode for broad coverage, complemented by IonMonkey as the optimizing compiler that applies aggressive techniques such as type specialization and inlining on frequently executed functions. These mechanisms significantly enhance performance in web applications, particularly for operations involving DOM manipulation, where optimization reduces execution time in repetitive tasks like event handling and element traversal by compiling loops and conditionals to native speed.

Beyond JavaScript, JIT compilation extends to other scripting languages in web and server contexts. PyPy, an alternative Python implementation, incorporates a tracing JIT compiler that profiles execution traces and generates optimized machine code, achieving average speedups of about 3x over CPython for compute-intensive scripts commonly used in web backends or data processing. LuaJIT, a lightweight implementation of the Lua language, employs a tracing JIT compiler to dynamically compile Lua bytecode to machine code, making it ideal for scripting in embedded technologies like game engines or network applications where low overhead is critical, with performance often approaching native speeds for hot paths. In WebAssembly runtimes, V8 integrates a dedicated Wasm compilation pipeline with baseline and tiered optimizing JITs to execute Wasm modules efficiently alongside JavaScript, supporting near-native performance for compute-heavy tasks without full AOT compilation.

In web-specific scenarios, JIT compilation is essential for single-page applications (SPAs), where dynamic code loading and execution—such as in frameworks like React or Vue—benefit from runtime optimization of user interactions and state updates, minimizing latency in client-side rendering. Emerging edge computing platforms in the 2020s, such as Cloudflare Workers, leverage V8's JIT within isolated environments to compile and execute JavaScript at global edge locations, enabling low-latency serverless functions for web services with sub-millisecond cold starts after initial optimization.
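A shape-based inline cache of the kind JavaScript engines use for property access can be sketched in Python: the cache remembers the last-seen object layout (a stand-in for a hidden class) and its accessor, skipping the generic lookup while the layout stays stable. The InlineCache class and the frozenset-of-keys "shape" are illustrative assumptions, much simpler than a real engine's machinery.

    # Sketch of a shape-based (hidden-class) inline cache for property reads.
    class InlineCache:
        def __init__(self, prop):
            self.prop = prop
            self.cached_shape = None   # last-seen layout ("hidden class")
            self.cached_getter = None

        def get(self, obj):
            shape = frozenset(vars(obj))        # stand-in for a real shape id
            if shape == self.cached_shape:      # cache hit: fast path
                return self.cached_getter(obj)
            # Cache miss: generic lookup, then cache the result for this shape.
            value = getattr(obj, self.prop)
            self.cached_shape = shape
            self.cached_getter = lambda o, p=self.prop: o.__dict__[p]
            return value

    class Widget:
        def __init__(self, w):
            self.width = w

    cache = InlineCache("width")
    widgets = [Widget(i) for i in range(1000)]
    total = sum(cache.get(w) for w in widgets)  # monomorphic: mostly cache hits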

Emerging and Specialized Uses

In machine learning and AI frameworks, just-in-time (JIT) compilation has emerged as a key technique for optimizing tensor operations and adapting to dynamic model inputs, particularly since 2017. Apache TVM, an open-source deep learning compiler stack released in 2018, leverages JIT to generate optimized code for hardware backends like CPUs, GPUs, and accelerators, enabling end-to-end optimization of models from high-level specifications. Similarly, XLA (Accelerated Linear Algebra), integrated into TensorFlow, employs JIT compilation to analyze and fuse computation graphs at runtime, specializing code for specific hardware and reducing execution overhead for machine learning workloads. These approaches allow adaptive compilation based on runtime data, such as varying tensor shapes or execution profiles, which can yield substantial speedups; for instance, adaptive JIT strategies in systems like VELTAIR have demonstrated 45% to 71% performance gains in distributed ML environments by tuning schedules dynamically.

In embedded and mobile systems, JIT compilation addresses resource limitations through hybrid models that combine runtime adaptation with precompilation. The Android Runtime (ART), since Android 7.0 in 2016, implements a hybrid ahead-of-time (AOT) and JIT system in which initial interpretation and profiling inform JIT optimizations, progressively enhancing application performance on mobile devices without excessive memory use. For Internet of Things (IoT) applications on low-power devices, co-operative JIT designs offload compilation to host systems while executing optimized code on energy-constrained coprocessors, improving efficiency in managed languages and reducing power consumption in sensing tasks. Additionally, JIT-accelerated extended Berkeley Packet Filter (eBPF) virtual machines, such as those in the RIOT OS, enable secure, high-performance packet processing on resource-limited IoT hardware by compiling bytecode to native instructions at runtime.

Beyond these domains, JIT finds specialized applications in databases, game engines, and quantum computing. In database systems, JIT compilation optimizes query execution by generating machine code for complex expressions, particularly in CPU-intensive operations; PostgreSQL, for example, introduced JIT in version 11 to compile query plans dynamically, accelerating analytical workloads on large datasets. Game engines like Unity incorporate runtime JIT via the Mono scripting backend, which compiles C# code on demand on platforms permitting dynamic code generation, enabling flexible asset loading and scripting while balancing startup latency. In quantum simulation, JIT techniques enhance the speed of circuit emulation; Qiskit, IBM's quantum software development kit, integrates with JAX for JIT-compiled simulations of quantum dynamics, allowing just-in-time optimization of operators and solvers for noisy intermediate-scale quantum (NISQ) algorithms. As of 2025, compilation-based frameworks synthesize efficient simulators for multi-qubit systems.
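The runtime-specialization pattern these frameworks share is visible in a minimal jax.jit example (this assumes the jax package is installed): the first call traces the function and compiles it via XLA for the given input shapes, and later calls with the same shapes reuse the cached executable.

    # Minimal jax.jit example of runtime specialization for tensor code.
    import jax
    import jax.numpy as jnp

    @jax.jit
    def scaled_dot(x, y):
        return jnp.dot(x, y) * 2.0   # fused into a single XLA computation

    x = jnp.ones((512, 512))
    y = jnp.ones((512, 512))
    scaled_dot(x, y).block_until_ready()   # warmup: trace + XLA compile
    scaled_dot(x, y).block_until_ready()   # steady state: cached executable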
