
Peephole optimization

Peephole optimization is a technique used in compiler design to improve the efficiency of generated code by examining small, contiguous sequences of instructions, often referred to as "peepholes," and replacing them with equivalent but shorter or faster alternatives that preserve the program's semantics. This optimization operates primarily in the backend of a compiler, after instruction selection, where it scans the target assembly code through a sliding window of a few instructions—typically 1 to 10—and applies predefined pattern-matching rules to identify redundancies, such as unnecessary moves or arithmetic operations that can be simplified. For instance, a sequence like move $a $b; move $b $a can be reduced to a single move $a $b to eliminate redundancy, or addiu $a $a i; addiu $a $a j can become addiu $a $a i+j through constant folding. These transformations focus on local optimizations within basic blocks, making them simple to implement and machine-specific, which allows tailoring to particular architectures for better performance.

Historically, peephole optimization dates back to early compiler development, with foundational work in the 1960s and 1970s emphasizing its role in the final compilation stages to discard redundant instructions without requiring global analysis. Traditionally, rules are hand-crafted by experts, but modern approaches automate their generation using superoptimization techniques, where short instruction sequences are exhaustively enumerated and tested offline to produce thousands of optimized patterns, far surpassing manual efforts. This automation, as seen in the system described in the 2006 ASPLOS paper on peephole superoptimizers, enables scalable application during compilation via efficient lookup tables.

The benefits of peephole optimization include reductions in code size of 1-6% and in execution time of 1-5% on benchmarks like SPEC, with even greater speedups (up to 10x) reported in some cases, while maintaining correctness through equivalence-preserving rewrites. It complements broader, global optimizations and is particularly valuable in resource-constrained environments, though it can introduce bugs if rules are not verified, prompting formally verified approaches to peephole rewriting in verified compilers like CompCert. Despite its local scope limiting global insights, repeated passes ensure cumulative improvements, making it a staple in production compilers for generating high-quality machine code.

Introduction

Definition and Purpose

Peephole optimization is a compiler technique that involves examining a small, contiguous "window" or "peephole" of instructions—typically ranging from 1 to 10—in the generated code to identify inefficient patterns and replace them with more efficient equivalents, all while preserving the program's semantics. This local inspection targets object code or intermediate assembly representations, focusing on short sequences rather than the entire program. The primary purpose of peephole optimization is to improve code quality by enhancing execution speed, reducing overall code size, or both, through the elimination of redundancies and machine-specific inefficiencies that may evade higher-level analyses. By addressing low-level details such as unnecessary loads, stores, or algebraic simplifications, it boosts performance without requiring complex global program understanding, making it a simple yet effective method suited to the final stages of compilation. This emphasis on locality and ease of implementation allows compilers to yield measurable efficiency gains on resource-constrained or performance-critical systems.

In scope, peephole optimization applies mainly to intermediate-code or code-generation phases, where it operates without data-flow or control-flow analysis, distinguishing it from global optimizations that consider broader program context. Its limited window size ensures simplicity but restricts it to intra-basic-block improvements, often integrated via pattern matching to scan and transform code sequences iteratively. This makes it particularly valuable for retargetable compilers, where machine-dependent tweaks can be applied post-code generation.

Historical Development

Peephole optimization was first proposed by William M. McKeeman in 1965, in his seminal paper "Peephole Optimization," published in Communications of the ACM. McKeeman introduced the concept as a method to examine small sequences of machine instructions—termed a "peephole"—and replace them with more efficient equivalents, emphasizing its simplicity and effectiveness for improving code generated by compilers. This work formalized peephole optimization as a targeted technique for addressing inefficiencies in generated code without requiring global analysis. The technique emerged during the era of early compilers for mainframe systems like the IBM System/360, released in 1964, where computational resources were limited and code efficiency was paramount. McKeeman's approach was adopted as a practical, low-cost alternative to more complex global optimizations, allowing compiler developers to perform local improvements with minimal overhead. Early implementations appeared in optimizing compilers in the late 1960s and 1970s, where such methods helped eliminate redundant instructions and simplify arithmetic operations in generated code.

During the 1970s and 1980s, peephole optimization gained prominence in assemblers and code generators, evolving from manual rule-based systems to more systematic frameworks. Key advancements included the development of retargetable peephole optimizers by J. W. Davidson and C. W. Fraser in 1980, which automated the generation of optimization rules from machine descriptions, making the technique portable across architectures. Their work, detailed in "The Design and Application of a Retargetable Peephole Optimizer," demonstrated significant code size reductions—up to 20% in some cases—while maintaining compilation speed, influencing subsequent compiler designs. By the 1990s, peephole optimization had become a standard component in production compilers, integrated into tools like the GNU Compiler Collection (GCC) for machine-specific enhancements such as strength reduction in integer operations. Contributions like those from Torbjörn Granlund in GCC's early versions further refined its application, underscoring its enduring role as a foundational optimization in modern compiler pipelines.

Core Techniques

Instruction Replacement

Instruction replacement in peephole optimization involves scanning sequences of instructions within a small window, typically a few consecutive operations, to identify patterns that can be substituted with equivalent but more efficient alternatives tailored to the target machine's instruction set. This technique, introduced by McKeeman in 1965, focuses on replacing suboptimal code generated earlier in the compilation process with faster or more compact instructions that exploit hardware capabilities, such as specialized operations or addressing modes. For instance, a common pattern replaces separate load instructions for operands followed by an arithmetic operation with a single instruction that loads and computes in one step, if the architecture supports complex addressing. The effectiveness of instruction replacement is highly machine-dependent, as it relies on the specific features of the target architecture, including the availability of fused or compound instructions that combine multiple operations. On RISC processors like ARM64, peephole optimizers often replace sequences of multiply followed by add instructions with a single fused multiply-add (FMA) operation, which performs the computation with a single rounding step for improved precision and reduced latency compared to separate instructions. In contrast, CISC architectures such as x86 may already incorporate more complex instructions natively, but peephole techniques can still optimize by selecting variants that minimize execution cycles or better utilize pipelines. This dependency ensures that replacements are architecture-specific, often requiring separate rule sets for different processors. By reducing the number of instructions executed, instruction replacement lowers overall code size and execution latency, contributing to performance gains in critical code paths. A representative example is optimizing the sequence for Z := Z + Y, which might initially generate:
LDA Y     ; Load Y into accumulator
STA TMP   ; Store to temporary
LDA Z     ; Load Z into accumulator
ADD TMP   ; Add temporary to accumulator
STA Z     ; Store back to Z
If the temporary store is unnecessary and the add instruction can reference Y directly in memory, this can be replaced with:
LDA Z     ; Load Z
ADD Y     ; Add Y directly
STA Z     ; Store back
This eliminates redundant loads and stores, cutting the instruction count from five to three and reducing potential memory access delays. Such optimizations can tie into broader redundancy elimination by avoiding duplicate loads in adjacent patterns, but they primarily target direct instruction swaps. Quantitative impacts include up to 14% reductions in code size and 1.5× speedups in translated binaries on RISC targets.
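To make the pattern-matching flavor of such rules concrete, the following sketch fuses a multiply followed by a dependent add into a single fused multiply-add, in the spirit of the ARM64 example above; the tuple-based instruction format and the fuse_multiply_add helper are illustrative simplifications rather than any real compiler's representation:
# Hypothetical peephole rule: fuse "mul t, a, b" followed by "add d, t, c"
# into a single ARM64-style "madd d, a, b, c".
# Instructions are modeled as (opcode, dest, src1, src2) tuples.
def fuse_multiply_add(insns):
    """Scan adjacent instruction pairs and fuse mul+add when the product is not needed elsewhere."""
    out = []
    i = 0
    while i < len(insns):
        cur = insns[i]
        nxt = insns[i + 1] if i + 1 < len(insns) else None
        if (nxt is not None
                and cur[0] == "mul" and nxt[0] == "add"
                and nxt[2] == cur[1]                                        # the add consumes the product
                and not any(cur[1] in rest[2:] for rest in insns[i + 2:])):  # product not read later
            # madd d, a, b, c computes d = a * b + c in one instruction.
            out.append(("madd", nxt[1], cur[2], cur[3], nxt[3]))
            i += 2
        else:
            out.append(cur)
            i += 1
    return out

code = [("mul", "t0", "x1", "x2"), ("add", "x0", "t0", "x3")]
print(fuse_multiply_add(code))  # [('madd', 'x0', 'x1', 'x2', 'x3')]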

Redundancy Elimination

Redundancy elimination in peephole optimization involves scanning a small window of consecutive instructions to identify and remove duplicate computations or superfluous operations that do not affect the program's semantics. This technique targets inefficiencies arising from naive code generation, such as repeated loads or moves, by replacing or deleting them while preserving the original behavior. Introduced as a core aspect of local code improvement, it operates on machine-specific or intermediate code to reduce instruction count and execution time without requiring global analysis.

A primary form of redundancy elimination focuses on detecting repeated computations, particularly consecutive loads or moves whose effect is already established. For instance, in assembly code, a sequence like MOV AX, BX followed by MOV BX, AX can have the second move eliminated if no intervening instructions modify AX or BX, as BX already holds the value and the copy back is unnecessary. This identification relies on simple pattern matching within the peephole window, typically 2-5 instructions, to flag such duplicates and streamline the code stream. Such optimizations are particularly effective in naively generated code, where compiler-generated temporaries often lead to avoidable repetitions.

Dead code removal, a key type of redundancy elimination, targets null or canceling instruction sequences with no net effect, such as an increment immediately followed by a decrement of the same variable. An example is INC R1 paired with DEC R1, which can be deleted entirely since the operations cancel out without side effects. This process ensures that only instructions with no impact on program state or observable behavior are excised, often during a forward pass through the code to maintain data dependencies. Dead code removal in peephole contexts is limited to local, unreachable, or useless operations within the window, distinguishing it from broader global dead code elimination.

Common subexpression elimination within peephole optimization addresses redundant calculations of the same value, such as duplicate address computations for memory accesses. For example, two identical load instructions like LDA [R1 + 4] followed later in the window by another LDA [R1 + 4] can be optimized by reusing the first result, avoiding recomputation if the address base is unmodified. This is achieved by tracking value equivalences locally, often using temporary registers to propagate the shared subexpression forward. The method is constrained to the peephole's scope to avoid complex inter-block analysis, making it efficient for intermediate code stages.

The overall process employs straightforward forward scans or rule-based pattern matching to flag redundant instructions for deletion, iterating over the code in a single pass while checking for side effects like memory accesses or branches. By limiting transformations to the small window, the optimizer preserves program semantics, as modifications only affect local sequences without altering external dependencies. This approach, applied post-register allocation, enhances performance through fewer memory operations.
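A minimal sketch of two such local rules (dropping a redundant copy-back move and cancelling an adjacent increment/decrement pair) is shown below; the (opcode, operands...) tuples and register names are hypothetical and chosen only for illustration:
# Sketch of two local redundancy rules over adjacent instructions,
# assuming a toy (opcode, dst, src) representation; illustrative only.
def eliminate_local_redundancy(insns):
    out = []
    for ins in insns:
        prev = out[-1] if out else None
        # Rule 1: MOV a, b ; MOV b, a  ->  drop the second move
        # (b already holds the value and nothing changed in between).
        if (prev and prev[0] == ins[0] == "mov"
                and prev[1] == ins[2] and prev[2] == ins[1]):
            continue
        # Rule 2: INC r ; DEC r (or DEC ; INC) on the same register cancel out.
        if (prev and {prev[0], ins[0]} == {"inc", "dec"}
                and prev[1:] == ins[1:]):
            out.pop()
            continue
        out.append(ins)
    return out

code = [("mov", "ax", "bx"), ("mov", "bx", "ax"), ("inc", "r1"), ("dec", "r1")]
print(eliminate_local_redundancy(code))  # [('mov', 'ax', 'bx')]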

Algebraic Simplification

Algebraic simplification in peephole optimization involves applying mathematical and logical identities to rewrite short sequences of instructions within a local window, thereby improving code quality without altering program semantics. This technique leverages properties such as commutativity, associativity, and distributivity to reorder or consolidate operations, often enabling better instruction selection or addressing modes. For instance, the commutativity of addition allows reordering operands in expressions like Z := Z + X derived from X := Y; Z := Z + X, potentially eliminating redundant loads and stores if X serves only as a temporary location.

In arithmetic contexts, associativity can be exploited to regroup operations for optimization. Consider a sequence representing a + b + c generated as (a + b) + c; if the target architecture favors right-associative forms for certain addressing modes, peephole optimization may regroup it as a + (b + c) to reduce instruction count or register pressure. Distributivity similarly enables rewriting, such as transforming x * (a + b) into x * a + x * b if the expanded form maps to cheaper instructions in the local context, though such rewrites are constrained to the visible window to avoid side effects. These transformations are particularly effective in final code-generation stages, where machine-specific details become apparent.

Logical optimizations extend algebraic simplification to control flow, simplifying branches and comparisons within the peephole. A common pattern replaces a conditional branch followed by an increment—such as if x > 0 then x = x + 1—with a conditional move or add instruction (e.g., cmovgt x, temp where temp holds x+1), eliminating the branch and reducing execution overhead on architectures supporting predicated execution. This approach avoids pipeline stalls from branches while preserving semantics, as verified in modern compiler frameworks.

The scope of algebraic simplification remains inherently local, limited to the fixed-size peephole window, which precludes inter-block analysis or optimizations spanning labels and jumps. For example, replacing multiplication by 2 with a left shift (x * 2 to x << 1) is only applied if the shift instruction is available and no intervening control flow disrupts the pattern. Such constraints ensure termination but may miss global opportunities, and algebraic simplifications can overlap with redundancy elimination by removing null operations like additions of zero.
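The following sketch applies two of these identities (strength-reducing a multiplication by a power of two into a shift, and deleting an addition of zero) on a toy three-address form; the opcode names and operand layout are assumptions made for the example:
# Sketch of two algebraic peephole rules on a toy three-address form
# (opcode, dst, src, constant); the instruction names are illustrative.
def simplify_algebraic(insns):
    out = []
    for op, dst, src, k in insns:
        # x * 2^n  ->  x << n, when a shift instruction is available.
        if op == "mul" and k > 0 and (k & (k - 1)) == 0:
            out.append(("shl", dst, src, k.bit_length() - 1))
        # x + 0 and x - 0 are no-ops when dst and src are the same register.
        elif op in ("add", "sub") and k == 0 and dst == src:
            continue
        else:
            out.append((op, dst, src, k))
    return out

code = [("mul", "r1", "r1", 8), ("add", "r2", "r2", 0)]
print(simplify_algebraic(code))  # [('shl', 'r1', 'r1', 3)]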

Implementation Methods

Pattern Matching Approaches

Peephole optimization relies on pattern matching to identify suboptimal sequences within a small, local window of code. The basic algorithm employs a sliding window that scans the intermediate or target code linearly from start to end, examining fixed-size segments typically consisting of 1 to 5 instructions. For each window position, the sequence is compared against a predefined table of "bad" patterns—such as redundant loads or unnecessary jumps—and replaced with equivalent "good" patterns that are more efficient, using table-driven matching where patterns are represented as strings, trees, or simple structures for quick lookup. This approach, introduced in the seminal work on peephole optimization, ensures that optimizations are applied locally without requiring global analysis, making it straightforward to implement in compilers.

Advanced variants extend this basic mechanism to handle more complex or variable-length patterns. Finite state automata can model the matching process as a state machine that transitions through code sequences, efficiently recognizing patterns that span irregular instruction lengths or involve conditional branches. Similarly, recursive descent parsers, often based on extended Backus-Naur Form (EBNF) grammars, parse the instruction stream to match hierarchical or context-sensitive patterns, such as those involving operand types or register usage, allowing greater flexibility in optimization rules. These methods support variable-length windows, adapting the scan to patterns that may extend beyond a fixed size, typically up to 5 instructions, while maintaining the locality of the search. String pattern matching techniques, as in declarative approaches, further enhance this by treating instruction sequences as text and using efficient algorithms like those for regular expressions, though tuned to avoid excessive matching overhead.

The efficiency of these approaches stems from their linear traversal of the instruction stream. The time complexity is O(n), where n is the number of instructions, because each instruction is examined a number of times proportional to the maximum window size, with matching operations performed in constant time via table lookups or automaton transitions. A pseudocode outline of the basic sliding window illustrates this simplicity:
for i from 0 to length(code) - window_size:
    current_window = code[i : i + window_size]
    for each rule in pattern_table:
        if match(rule.bad_pattern, current_window):
            replace(code[i : i + window_size], rule.good_pattern)
            break  # Apply first matching rule and advance window
This structure ensures progressive scanning without revisiting distant instructions, though advanced parsers may introduce minor overhead for complex patterns. Overall, these strategies balance expressiveness and performance, enabling effective local optimizations in production compilers.
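A runnable elaboration of the outline above, assuming instructions are plain strings and that tokens beginning with $ act as wildcards bound to operands, might look as follows; the two rules in the table are illustrative rather than drawn from any real rule set:
# Illustrative table-driven matcher: each rule pairs a "bad" pattern with a
# "good" replacement; $-tokens are wildcards bound to whole operands.
RULES = [
    (["mov $a, $b", "mov $b, $a"], ["mov $a, $b"]),   # drop a redundant copy-back
    (["add $a, $a, 0"], []),                          # adding zero is a no-op
]

def tokens(line):
    return line.replace(",", " ").split()

def match(pattern, window, bindings):
    # Bind each $-wildcard consistently across the whole window.
    for pat_line, ins_line in zip(pattern, window):
        pat_toks, ins_toks = tokens(pat_line), tokens(ins_line)
        if len(pat_toks) != len(ins_toks):
            return False
        for p, t in zip(pat_toks, ins_toks):
            if p.startswith("$"):
                if bindings.setdefault(p, t) != t:
                    return False        # same wildcard bound to two different operands
            elif p != t:
                return False
    return True

def instantiate(template, bindings):
    result = []
    for line in template:
        for name, value in bindings.items():
            line = line.replace(name, value)
        result.append(line)
    return result

def peephole(code, rules=RULES):
    out, i = [], 0
    while i < len(code):
        for bad, good in rules:
            window, bindings = code[i:i + len(bad)], {}
            if len(window) == len(bad) and match(bad, window, bindings):
                out.extend(instantiate(good, bindings))
                i += len(bad)
                break
        else:
            out.append(code[i])
            i += 1
    return out

print(peephole(["mov ax, bx", "mov bx, ax", "add cx, cx, 0"]))  # ['mov ax, bx']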

Integration in Compilers

Peephole optimization is typically integrated into the backend of compiler pipelines, occurring after instruction selection and often running on generated code or low-level intermediate representations such as register transfer language (RTL). This placement allows it to refine machine-specific instruction sequences produced by earlier phases, such as initial code generation, without disrupting higher-level analyses. Multiple passes may be employed, with subsequent iterations applied after other backend transformations to capture newly emergent optimization opportunities.

In modern compilers, peephole optimization is implemented through architecture-specific mechanisms. The GNU Compiler Collection (GCC) uses peephole definitions in machine description (.md) files, which specify patterns for replacing instruction sequences during the RTL passes. Similarly, the LLVM compiler infrastructure employs the PeepholeOptimizer pass on MachineInstr representations in its code generation stage, enabling targeted rewrites of low-level instructions. In just-in-time (JIT) compilers such as the Java HotSpot virtual machine, peephole optimization operates on the low-level intermediate representation (LIR) after register allocation, producing more efficient machine code. These features are often configurable via optimization flags; for instance, GCC enables advanced peephole optimizations at the -O2 level and higher through the -fpeephole2 option.

Integrating peephole optimization presents challenges related to handling diverse target architectures, as patterns must be tailored to specific instruction sets and variants to avoid invalid transformations. Additionally, it interacts closely with other backend optimizations, such as instruction scheduling, requiring careful pass ordering to ensure that peephole rewrites do not conflict with or undermine scheduling decisions.
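The placement and repetition described above can be summarized with a schematic driver; the pass functions named here are placeholders standing in for a backend's real phases, not any particular compiler's API:
# Hypothetical backend pipeline sketch showing where a peephole pass sits and
# how it can be re-run until no rule fires, so one rewrite can expose another.
def run_backend(ir, select, schedule, allocate_registers, peephole):
    code = select(ir)                 # instruction selection
    code = schedule(code)             # instruction scheduling
    code = allocate_registers(code)   # register allocation
    # Iterate the peephole pass to a fixed point: e.g., deleting one
    # instruction may make an adjacent copy redundant on the next pass.
    while True:
        new_code = peephole(code)
        if new_code == code:
            return new_code
        code = new_code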

Examples

Optimizing Arithmetic Operations

Peephole optimization applied to arithmetic operations often targets redundant computations or inefficient instruction sequences within a small window of code, replacing them with more efficient equivalents that preserve semantics while minimizing execution cycles. A representative example occurs in stack-based virtual machines like the Java Virtual Machine (JVM), where arithmetic instructions operate on the operand stack. Consider the bytecode sequence iload_1; iload_1; imul, which loads the integer value from local variable 1 onto the stack twice and then multiplies the two copies. This can be optimized to iload_1; dup; imul, where dup duplicates the top stack item, eliminating the redundant load instruction and reducing stack manipulation overhead. This transformation is particularly beneficial in just-in-time (JIT) compilers, as it shortens the instruction stream without altering program behavior, assuming the loaded value is not modified between the two original loads.

In low-level code for architectures like ARM, optimizations can leverage hardware-specific efficiencies in arithmetic units. For instance, the sequence add r1, r1, r1, which doubles the value in r1 by adding it to itself, can be replaced with lsl r1, r1, #1 (logical shift left by 1), exploiting the fact that shifting is typically faster and consumes fewer cycles than addition on shift-optimized processors. This replacement relies on the algebraic equivalence between doubling and a left shift by one and is enabled by basic simplification rules, such as recognizing that multiplication by 2 equates to a left shift by 1. Such optimizations are common in retargetable frameworks, where hardware characteristics guide rule selection.

Both examples illustrate how peephole techniques reduce the number of instructions executed, thereby lowering instruction fetch and decode overhead in pipelined processors, provided no dependencies exist outside the optimization window. In practice, these changes yield measurable gains; for instance, studies have shown reductions in code size of up to 14% in dynamic binary translation systems. The assumptions hold within the local scope, ensuring safety without global analysis.
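Both rewrites can be expressed as simple adjacency checks over the instruction stream, as in the following sketch, where the textual bytecode and assembly spellings are simplified for illustration:
# Sketch of the two arithmetic rewrites described above, applied to lists of
# textual instructions; illustrative only, not a real JIT or assembler pass.
def optimize_arithmetic(code):
    out = []
    for ins in code:
        prev = out[-1] if out else None
        # JVM-style: a second identical integer load can become a cheaper dup.
        if prev == ins and ins.startswith("iload"):
            out.append("dup")
            continue
        # ARM-style: add r, r, r (doubling) becomes a logical shift left by 1.
        parts = ins.replace(",", " ").split()
        if parts[0] == "add" and len(parts) == 4 and parts[1] == parts[2] == parts[3]:
            out.append(f"lsl {parts[1]}, {parts[1]}, #1")
            continue
        out.append(ins)
    return out

print(optimize_arithmetic(["iload_1", "iload_1", "imul"]))   # ['iload_1', 'dup', 'imul']
print(optimize_arithmetic(["add r1, r1, r1"]))               # ['lsl r1, r1, #1']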

Stack and Register Management

Peephole optimizations for stack and register management target redundant operations that arise from conservative code generation, such as unnecessary saving and restoring of registers or stack manipulations around control transfers. These techniques are especially valuable in stack-based machines, where stack depth adjustments can incur significant overhead, or in resource-constrained architectures like the Z80, where minimizing memory accesses improves both code density and performance. By scanning small windows of assembly code, the optimizer identifies patterns where operations can be collapsed or eliminated while maintaining program correctness and call/return semantics. Seminal work on retargetable peephole optimizers highlights how such local transformations can reduce instruction counts by 10-20% in naive code generators.

A representative example occurs in Z80 code generated by compilers for procedure calls. Compilers often insert sequences like PUSH AF; PUSH BC; PUSH DE; PUSH HL; CALL _ADDR; POP HL; POP DE; POP BC; POP AF to preserve register state across the call, assuming the subroutine may modify those registers. If analysis within the peephole window confirms that the subroutine does not alter the saved registers, this can be simplified to a direct CALL _ADDR, eliminating the eight push/pop instructions and saving code space and dozens of clock cycles per invocation. This optimization is applied in Z80-targeted compilers, where peephole rules combine or remove call-related stack operations to achieve code size reductions of up to 12.5%.

Register redundancy provides another key opportunity for elimination, particularly in sequences involving chained moves. Consider the pattern MOV AX, BX; MOV CX, AX, which copies the value from BX to CX via the intermediate AX. If no instructions in the peephole window modify BX and AX is not needed afterwards, this can be replaced by the single MOV CX, BX, bypassing the redundant intermediate assignment and freeing AX for other uses. Such transformations are standard in peephole passes for x86-like architectures, as demonstrated in compiler courses where they eliminate self-moves (e.g., MOV R, R to nothing) and chained copies to streamline register usage. This not only reduces instruction count but also mitigates register pressure in limited-register ISAs.

These stack and register optimizations often intersect with broader redundancy elimination, such as removing null sequences like paired push/pop of the same register, but they specifically focus on preserving data flow around control points like calls. In practice, implementing these requires symbolic machine descriptions to match patterns portably across ISAs.
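A sketch of two such cleanups (collapsing a copy chain when the intermediate register is dead within the window, and deleting an adjacent push/pop of the same register) is shown below, using an illustrative tuple representation rather than real Z80 or x86 encodings:
# Sketch of two register/stack cleanups on (opcode, operands...) tuples.
def cleanup_moves_and_stack(insns):
    out = []
    i = 0
    while i < len(insns):
        cur = insns[i]
        nxt = insns[i + 1] if i + 1 < len(insns) else None
        # MOV a, b ; MOV c, a  ->  MOV c, b, provided a is not read later.
        if (nxt and cur[0] == nxt[0] == "mov" and nxt[2] == cur[1]
                and not any(cur[1] in rest[2:] for rest in insns[i + 2:])):
            out.append(("mov", nxt[1], cur[2]))
            i += 2
            continue
        # PUSH r ; POP r with nothing in between is a null sequence.
        if nxt and cur[0] == "push" and nxt[0] == "pop" and cur[1] == nxt[1]:
            i += 2
            continue
        out.append(cur)
        i += 1
    return out

code = [("mov", "ax", "bx"), ("mov", "cx", "ax"), ("push", "hl"), ("pop", "hl")]
print(cleanup_moves_and_stack(code))  # [('mov', 'cx', 'bx')]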

Advantages and Limitations

Performance Benefits

Peephole optimization delivers measurable improvements in program execution speed and code size, making it valuable for performance-critical applications. By replacing inefficient local instruction sequences with more efficient equivalents, it reduces the overall instruction count, leading to faster runtimes. For instance, verified peephole optimizations in the CompCert compiler achieved speedups of up to 4.0% on the SipHash24 benchmark and 0.7% on SHA-3, with an average 3.9% improvement on a verified SHA-256 implementation. In dynamic binary translation environments, these optimizations yielded a maximum speedup of 1.52× (52% faster execution) across the SPEC CINT2006 benchmark suite, particularly benefiting code with frequent redundant operations. Such gains are especially pronounced in tight loops, where instruction reductions translate to noticeable performance boosts without altering program semantics.

A key benefit is the reduction in generated code size, which counters bloat from initial code-generation phases and is crucial for resource-constrained embedded systems. Retargetable peephole optimizers can shrink object code by 15-40% even after applying global optimizations, as demonstrated on benchmarks like tree printers and matrix multipliers across architectures such as the PDP-11 and Cyber 175. This compaction lowers memory usage and improves efficiency, further enhancing performance in memory-bound scenarios. The technique incurs negligible compilation overhead due to its localized, pattern-matching nature, allowing efficient integration without significantly prolonging build times.

Beyond isolated gains, peephole optimization complements global techniques by targeting residual local redundancies, contributing to overall compiler effectiveness. Its straightforward implementation has established it as a staple in production compilers, including GCC and LLVM, where it routinely enhances output quality across diverse workloads.

Constraints and Challenges

Peephole optimization operates within a limited local scope, typically examining short sequences of instructions confined to a single basic block, which prevents it from addressing optimizations that span multiple blocks or involve inter-block data dependencies. For instance, it cannot perform loop-invariant code motion, where computations independent of loop iterations are hoisted outside the loop, as this requires global analysis to identify such invariants across block boundaries. Similarly, peephole techniques overlook dependencies like reaching definitions or live-variable information that extend beyond the immediate window, limiting their ability to eliminate redundancies in programs with complex control-flow graphs. This locality constraint reduces effectiveness when applied to code already processed by higher-level optimizations, such as constant propagation or dead code removal, which preemptively simplify structures and leave fewer local patterns to exploit.

Maintaining peephole optimization frameworks introduces significant overhead, particularly as pattern tables expand to accommodate new instruction set architectures (ISAs) or evolving compiler backends. Developers must manually craft and verify numerous transformation rules, which can number in the dozens in production compilers (e.g., 68 patterns in one production compiler). Moreover, incomplete pattern coverage risks introducing bugs, especially in edge cases such as differences between signed and unsigned arithmetic operations; for example, a long-standing bug in one production compiler's peephole rules for addition nodes incorrectly handled operand access and persisted undetected for 13 years until automated testing revealed it. Such errors can propagate miscompilations or other incorrect transformations, underscoring the fragility of hand-written rules without exhaustive verification tools like Alive.

In modern superscalar processors, where out-of-order execution and dynamic scheduling dominate performance improvements by exploiting instruction-level parallelism, peephole optimizations contribute less relative impact compared to global techniques like trace scheduling or software pipelining. These CPUs dynamically reorder instructions at runtime, diminishing the benefits of static local rewrites that assume fixed execution orders. Additionally, peephole approaches remain incomplete without integration into broader contexts, such as just-in-time (JIT) compilation in virtual machines, where dynamic code generation amplifies the need for global analysis to handle runtime variations. Recent approaches, such as LLM-assisted detection of missed peephole optimizations (e.g., Lampo, as of August 2025), help identify such gaps and reduce manual effort.

References

  1. "Automatic Generation of Peephole Superoptimizers" (ASPLOS, 2006).
  2. "Code Optimization" (lecture notes).
  3. "Intermediate Code & Local Optimizations", CS143 Lecture 14.
  4. W. M. McKeeman, "Peephole Optimization", Communications of the ACM (1965).
  5. "Peephole Optimization Technique for analysis and review of …"
  6. "Lecture 38, Peephole Optimization", Compiler Construction.
  7. "Compiler-Based Code-Improvement Techniques".
  8. J. W. Davidson and C. W. Fraser, "The Design and Application of a Retargetable Peephole Optimizer" (1980).
  9. "Performance Improvements via Peephole Optimization in Dynamic …" (2024).
  10. "Quick Compilers Using Peephole Optimization".
  11. "Using Peephole Optimization on Intermediate Code".
  12. "Peephole Optimization - an overview", ScienceDirect Topics.
  13. "Practical Verification of Peephole Optimizations with Alive".
  14. "Intel® Architecture Optimization".
  15. "Pattern Matching Strategies for Peephole Optimisation".
  16. "Declarative Peephole Optimization Using String Pattern Matching".
  17. "Optimize Options", Using the GNU Compiler Collection (GCC).
  18. "The LLVM Target-Independent Code Generator".
  19. "Peephole Definitions", GNU Compiler Collection (GCC) Internals.
  20. lib/CodeGen/PeepholeOptimizer.cpp, LLVM source.
  21. "The Java HotSpot Performance Engine Architecture", Oracle.
  22. "16 Machine Descriptions", GCC, the GNU Compiler Collection.
  23. "Verified Peephole Optimizations for CompCert".
  24. "Revisiting Optimization-Resilience Claims in Binary Diffing Tools".
  25. "Loop Optimizations", Purdue Engineering.
  26. "Global Instruction Scheduling for SuperScalar Machines".
  27. "JOG: Java JIT Peephole Optimizations and Tests from Patterns".
  28. "CS 6120: Provably Correct Peephole Optimizations with Alive".
  29. "Performance Analysis and Tuning on Modern CPUs".