Inline expansion
Inline expansion, also known as function inlining, is a compiler optimization technique in which a function call is replaced by the body of the called function directly at the call site, thereby eliminating the runtime overhead of the call and return mechanism.[1] This substitution allows the compiler to apply further optimizations across the integrated code, such as enhanced register allocation, constant propagation, and instruction scheduling, which can improve overall program performance.[2] Commonly used in languages like C and C++, inline expansion is particularly beneficial for small, frequently called functions where the call overhead is significant relative to the function's execution time.[3]
Compilers decide whether to perform inline expansion based on heuristics evaluating factors like function size, call frequency, and potential benefits versus costs, often prioritizing small functions to minimize code bloat.[3] Programmers can suggest inlining using the inline keyword in C++, which serves as a hint but does not guarantee expansion, as the compiler retains final control to avoid excessive code size increases or other drawbacks.[1] Extensions such as Microsoft's __forceinline or GCC's __attribute__((always_inline)) provide stronger directives to encourage or enforce inlining, though even these may be overridden in cases such as recursive functions or when the function's address is taken.[1][3]
While inline expansion reduces execution time by streamlining control flow and enabling cross-function optimizations, it can increase the program's binary size due to code duplication, potentially leading to higher instruction cache misses in larger applications.[2] Advanced compilers, such as those in the Intel oneAPI DPC++/C++ suite or Microsoft Visual C++, integrate it with interprocedural optimization (IPO) phases, sometimes guided by profile data to target high-impact call sites.[3] Limitations include challenges with external or library functions whose source code is unavailable, and the risk of unbounded code expansion when inlining recursive calls, prompting compilers to impose depth limits, such as the default of 16 levels in MSVC.[1][2]
Fundamentals
Definition and Purpose
Inline expansion, also known as function inlining, is a compiler optimization technique in which a call site—a location in the source code or intermediate representation where a function is invoked—is replaced by the body of the called function, with appropriate substitutions for parameters and return values.[4] This transformation eliminates the need for the actual function call mechanism during execution.[5] Inline expansion typically occurs as part of an optimization pass, a dedicated phase in the compilation process where the compiler analyzes and modifies the code to improve efficiency without altering its observable behavior.[4]
The primary purpose of inline expansion is to reduce execution time by avoiding the overhead associated with function calls, such as stack frame allocation, parameter passing, and control transfer.[5] By integrating the function body directly into the caller, the compiler can also expose more opportunities for subsequent optimizations, including constant propagation—where constant values are substituted throughout the code—and dead code elimination, which removes unnecessary computations.[4] This approach is particularly beneficial for small, frequently called functions, as it trades potential increases in code size for overall performance gains.
To illustrate, consider a simple pseudocode example involving an adder function called within a loop:
Before inlining:
int add(int a, int b) {
    return a + b;
}

int sum = 0;
for (int i = 0; i < 10; i++) {
    sum += add(i, 1); // Call site
}
After inlining:
int sum = 0;
for (int i = 0; i < 10; i++) {
    sum += i + 1; // Function body substituted
}
In this transformation, the compiler replaces the call to add with its body, adjusting parameters a and b to i and 1, respectively, thereby removing the call overhead and allowing potential loop-specific optimizations.[5]
Historical Context
Inline expansion has roots in the 1950s with early compilers, such as Grace Hopper's A-2 system, which collected and inlined subroutines to optimize code. It gained prominence in the 1970s alongside the development of early optimizing compilers, particularly for languages like Fortran, where subroutine substitution helped reduce call overhead in performance-critical applications. Optimizing compilers during this period, building on foundational work in program analysis by researchers such as Frances Allen at IBM, began incorporating techniques to inline simple subroutines automatically, marking a shift from manual assembly-level optimizations to compiler-driven decisions.[6]
By the 1980s, inlining gained traction in C compilers, where minimizing function call costs became an increasingly important optimization goal. Widespread adoption followed with the GNU Compiler Collection (GCC), first released in 1987, which included initial optimization passes capable of inline expansion, though explicit support for the inline keyword as an extension was refined in subsequent versions during the early 1990s.[7][8]
The 1990s saw inlining's importance amplified by the rise of Reduced Instruction Set Computing (RISC) architectures, which emphasized simple instructions but incurred higher penalties for branches and calls, making automated inlining essential for exposing optimization opportunities like instruction scheduling. Pioneering work by David Patterson and John Hennessy in RISC design, as detailed in their influential textbook, highlighted how compilers could leverage inlining to mitigate these overheads in architectures like SPARC and MIPS.
In the 2000s, just-in-time (JIT) compilers further elevated inlining's role, with Sun Microsystems' HotSpot JVM—introduced in 1999 and widely used by the mid-2000s—employing profile-guided inlining to dynamically optimize frequently called methods during runtime, significantly boosting Java application performance. Post-2010, the LLVM compiler infrastructure has driven ongoing advancements, including heuristic improvements and the integration of machine learning-based inliners to better predict profitable inline decisions across diverse workloads.[9][10]
Implementation
Core Mechanism
Inline expansion, also known as function inlining, involves the compiler replacing a function call with the body of the called function to eliminate the overhead of the call and enable further optimizations.[2] The process begins with identifying a suitable call site within the caller's code, where the function invocation occurs.[11]
The transformation proceeds in several key steps. First, the compiler copies the body of the callee function and inserts it directly at the call site in the caller. Second, it substitutes the actual arguments from the call site for the formal parameters in the copied body, ensuring that variables are renamed if necessary to avoid name conflicts with the caller's scope. Third, the control flow is adjusted, such as removing the original call instruction and any return statements in the inlined body, replacing them with jumps or direct continuations to the caller's subsequent code. Finally, cleanup occurs, which may include removing the original function definition if it is no longer referenced elsewhere after all inlining decisions are applied.[2][12][11]
Compilers must handle several complexities during this process. For static variables declared within the function, the compiler must preserve their semantics by keeping a single shared instance that every inlined copy references, rather than duplicating the variable at each call site. Recursion is typically prevented from being fully inlined, as this could lead to unbounded expansion; compilers detect cycles in the call graph and either leave such calls intact or limit the inlining depth. When the same function is called from multiple sites, the body is duplicated at each location, resulting in code replication that expands the overall program size.[2][12]
Consider a simple pseudocode example to illustrate the transformation:
Before inlining:
function addOne(x) {
    return x + 1;
}

function main(y) {
    z = addOne(y);
    print(z);
}
After inlining the call to addOne:
function addOne(x) { // May be removed if unused
    return x + 1;
}

function main(y) {
    z = y + 1; // Inlined body with parameter substitution
    print(z);
}
This replacement eliminates the function call and return, directly integrating the computation into the caller and altering the control flow to a linear sequence.[2][11]
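The renaming step described above can be illustrated with a small C++ sketch in which the caller and callee both use a local named tmp; all names here are illustrative rather than produced by any particular compiler.

Before inlining:

int scale(int x) {
    int tmp = x * 2;
    return tmp + 1;
}

int caller(int value) {
    int tmp = 100;                 // the caller already uses the name "tmp"
    int result = scale(value);     // call site to be inlined
    return tmp + result;
}

After inlining, the callee's local is renamed to avoid a clash:

int caller(int value) {
    int tmp = 100;                 // caller's variable is untouched
    int tmp_1 = value * 2;         // callee's "tmp", renamed by the compiler
    int result = tmp_1 + 1;
    return tmp + result;
}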
Two primary variants of inline expansion exist: full inlining, where the entire function body is copied and substituted at the call site, and partial inlining, where only selected portions of the body—such as a fast-path branch or initial computations—are inlined, leaving the remainder as a call to a helper function. Partial inlining is rarer but has emerged in advanced compilers to balance code expansion with optimization opportunities.[2][13]
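As a rough C++ sketch of the shape of partial inlining, written out in source form for clarity (function names such as lookup_cold are hypothetical; real compilers perform this split on their intermediate representation rather than on source code):

// Original function: a cheap guard in front of an expensive fallback.
extern int cache[1024];
int slow_lookup(int key);            // defined elsewhere

int lookup(int key) {
    if (key >= 0 && key < 1024)      // fast path: common case
        return cache[key];
    return slow_lookup(key);         // slow path: rare case
}

// After partial inlining at a call site, only the fast path is expanded and
// the remainder is split into an outlined helper that is still called:
int lookup_cold(int key) {           // outlined cold portion of lookup()
    return slow_lookup(key);
}

int get(int key) {
    if (key >= 0 && key < 1024)
        return cache[key];           // inlined fast path
    return lookup_cold(key);         // residual call taken only on the cold path
}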
Decision Heuristics
Compilers employ decision heuristics to evaluate whether inlining a function will yield net benefits in performance or code efficiency, primarily by assessing factors such as function size, call frequency, and potential code growth. Basic criteria often revolve around the function's size, typically measured in instructions or intermediate representation units, with thresholds commonly set between 10 and 100 instructions for consideration; for instance, GCC's max-inline-insns-single parameter defaults to around 40-75 pseudo-instructions depending on the version, allowing inlining only for functions below this limit to avoid excessive code bloat.[14] Call frequency is another key factor, where functions called multiple times—especially within loops—are prioritized, as the savings from eliminating repeated call overhead outweigh the one-time insertion cost; static call graph analysis further aids this by identifying call sites without dynamic dispatch.[15]
Advanced heuristics incorporate runtime and interprocedural data to refine decisions. Profile-guided optimization (PGO) uses execution profiles to weigh call-site hotness, favoring inlining of frequently executed paths while deferring cold paths to preserve code size; for example, Intel compilers with PGO and interprocedural optimization (IPO) aggressively inline small functions at hot sites based on dynamic counts.[3] Hot/cold path analysis segments code into likely and unlikely execution branches, applying stricter size thresholds to cold paths to minimize bloat. Interprocedural optimization (IPO) extends this across compilation units, enabling cross-module inlining decisions via whole-program analysis, though it increases compile time.[15]
Threshold models formalize these decisions through cost-benefit comparisons that weigh estimated performance improvements from eliminating call overhead against increases in code size due to duplication. In practice, compilers like GCC implement variants with parameters such as inline-min-speedup (default 14% performance gain threshold) to quantify this, balancing estimated execution time reductions against growth limits like large-function-growth (default 100, allowing up to 2x size increase).[14]
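The following sketch shows, in simplified C++ form, the kind of threshold-based cost-benefit test such heuristics apply; the structure, field names, and constants are hypothetical and do not correspond to any particular compiler's implementation.

#include <cstdint>

// Illustrative model of an inlining decision. Real compilers derive these
// quantities from their intermediate representation and profile data.
struct CallSiteInfo {
    uint32_t callee_size;        // estimated callee size in IR instructions
    uint64_t call_frequency;     // estimated or profiled executions of this site
    bool     is_recursive;       // callee (transitively) calls itself
    bool     address_taken_only; // callee is reachable only through a pointer
};

bool should_inline(const CallSiteInfo& cs) {
    const uint32_t size_threshold = 40;   // tiny callees are always candidates
    const uint32_t hot_threshold  = 300;  // hot sites tolerate larger callees
    const uint64_t hot_frequency  = 1000; // executions considered "hot"

    if (cs.is_recursive || cs.address_taken_only)
        return false;                         // legality/practicality checks first
    if (cs.callee_size <= size_threshold)
        return true;                          // call overhead dominates the cost
    if (cs.call_frequency >= hot_frequency)
        return cs.callee_size <= hot_threshold; // hot site: allow more growth
    return false;                             // cold, large callee: keep the call
}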
Recent advances as of 2025 incorporate machine learning to enhance inlining heuristics. Techniques like those in Google's Iterative BC-Max use ML to optimize inlining decisions, reducing binary size and improving performance by learning from benchmark data. Similarly, .NET 10's JIT employs improved devirtualization-aware inlining, and research tools like ACPO apply AI for phase-ordering including inlining. These methods outperform traditional heuristics in complex scenarios by predicting net benefits more accurately.[16][17][18]
These heuristics involve inherent trade-offs, particularly in balancing compile-time static analysis—which relies on conservative estimates from the call graph and may miss runtime behaviors—against runtime feedback from PGO, which provides accurate frequencies but requires multiple compilation passes. In object-oriented languages, handling devirtualization adds complexity, as heuristics must predict whether virtual calls can be resolved to direct calls post-inlining, often using class hierarchy analysis to avoid unnecessary expansions that could introduce indirect branches.[3] Overall, the goal is to maximize performance gains while constraining code size increases to under 10-30% in typical scenarios.[15]
Advantages
Inline expansion provides significant runtime speedups by eliminating the overhead associated with function calls and returns, which typically involve several CPU cycles for tasks such as register saves, restores, parameter passing, and stack management.[8] This overhead can range from a few to tens of cycles per call on modern processors, depending on the architecture and optimization level.[19] By replacing the call site with the function body, inlining avoids these costs entirely, particularly benefiting hot code paths with frequent small function invocations.[20] Furthermore, inlining exposes the inlined code to the caller's context, enabling subsequent compiler optimizations such as loop unrolling and dead code elimination that would otherwise be limited by function boundaries.[21]
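A minimal C++ sketch of how inlining exposes such follow-on optimizations: once the body is substituted with a constant argument, the compiler can fold the condition and delete the untaken branch. The function names are illustrative.

int clamp_positive(int x, bool enabled) {
    if (!enabled)          // with a constant argument this test becomes decidable
        return x;
    return x < 0 ? 0 : x;
}

int caller(int v) {
    // Call with a compile-time constant flag.
    return clamp_positive(v, true);
}

// Conceptually, after inlining and constant propagation the compiler sees:
//
//   int caller(int v) {
//       if (!true) return v;      // condition folds to false
//       return v < 0 ? 0 : v;     // the dead branch above is eliminated
//   }
//
// leaving only the clamp, with no call, flag test, or parameter passing.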
In terms of code quality improvements, inline expansion enlarges the visible scope for key optimizations, leading to more efficient register allocation across what were previously separate functions.[21] This allows the compiler to better utilize available registers, reducing spills to memory and improving overall execution efficiency. Similarly, it facilitates superior instruction scheduling by providing a broader view of dependencies and opportunities for reordering, which can minimize pipeline stalls.[21] In tight loops or performance-critical sections, inlining also reduces the number of branch instructions associated with calls, thereby lowering the likelihood of branch mispredictions and associated penalties.
Empirical studies demonstrate tangible performance gains from inline expansion, particularly for small functions in compute-intensive workloads. For instance, aggressive inlining yielded up to 32% speedup (1.32×) on SPECint95 benchmarks and 24% (1.24×) on SPECint92, with individual programs seeing factors as high as 2.02×.[22] In the SPEC2000 integer suite, adaptive inlining heuristics produced an average 5.28% speedup across 11 benchmarks, with notable improvements in programs like bzip2 and twolf.[23] These benefits are especially pronounced in embedded systems with limited resources, where inlining minimizes call stack setup and parameter passing overheads, optimizing both execution speed and energy consumption.[24]
Inline expansion can also improve instruction cache locality when the expanded code remains compact, since the callee's instructions are placed contiguously with the caller's rather than being reached through jumps to a separate code region. In debugging and analysis contexts, some tools present expanded views of inlined code to help developers place breakpoints and trace execution within optimized binaries.
Drawbacks and Limitations
One primary drawback of inline expansion is the increase in code size resulting from duplicating the inlined function body at each call site, which can lead to larger binaries and exacerbate instruction cache misses, particularly in performance-critical applications. In extreme cases, this code bloat can cause binary sizes to grow significantly, as observed in empirical studies of compiler optimizations on benchmark suites.[24][25] This expansion is especially problematic in resource-constrained environments, where it directly impacts memory utilization and may violate platform-specific limits, such as code section sizes capped at 64 KB in certain embedded targets like legacy microcontroller architectures.[24][25]
Inline expansion also imposes compile-time overhead, as the compiler must process the larger intermediate representation (IR) produced by duplicating code, leading to longer build times that scale with function complexity and call frequency. This overhead is particularly pronounced for recursive functions or those with large bodies, where excessive inlining can inflate the IR and slow subsequent optimization passes. For instance, in one sample Microsoft Visual Studio C++ project, removing a forced-inline directive on a complex operation reportedly reduced optimized build times from roughly 25 seconds to about 13 seconds, illustrating how aggressive inlining can substantially lengthen compilation.[26]
Specific limitations further constrain inline expansion's applicability. It cannot typically occur across dynamic dispatch mechanisms, such as virtual function calls in object-oriented languages, because the compiler lacks sufficient type information at compile time to resolve the exact callee, preventing direct substitution of the function body. Additionally, legal restrictions arise with non-pure functions exhibiting observable side effects, as inlining may alter program semantics if not handled carefully across translation units, violating language standards that require consistent behavior. Platform-specific constraints in embedded systems amplify these issues, where aggressive inlining risks exceeding memory budgets under strict code size limits, necessitating selective application to avoid overflows.[27][28]
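The dynamic-dispatch limitation can be seen in a short C++ sketch (type and function names are illustrative): a call through a base-class reference cannot, in general, be expanded because its target is unknown until runtime, while a call on a known final type can be resolved statically and inlined.

struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

struct Square final : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

double area_via_base(const Shape& s) {
    // Dynamic dispatch: the exact target is unknown at compile time, so the
    // call generally cannot be expanded unless devirtualization proves the type.
    return s.area();
}

double area_of_square(const Square& s) {
    // Square is final, so the compiler can resolve the call statically and
    // treat it as an ordinary candidate for inline expansion.
    return s.area();
}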
Inline expansion should be avoided in cases involving infrequently called large functions, where the overhead of code duplication outweighs any potential call elimination benefits, often resulting in net performance degradation. Benchmarks on standard suites like SPEC demonstrate slowdowns in such scenarios due to increased cache pressure and bloat, underscoring the need for heuristics to detect and mitigate over-inlining.[20]
Comparisons
Versus Traditional Function Calls
Traditional function calls introduce overhead through prologue and epilogue code, which typically involves saving and restoring registers, pushing and popping stack frames, passing parameters, and executing jump instructions to transfer control. This process, absent in inline expansion, can add several instructions per invocation, depending on the calling convention and the function's complexity; on x86 architectures, for example, even a trivial call incurs this bookkeeping, and the cost accumulates in performance-critical code paths.[1][3]
Inline expansion enables advanced optimizations that are infeasible with opaque function calls, as the compiler gains visibility into the function body at the call site. This allows for whole-program analysis techniques, such as constant propagation and folding across what were previously function boundaries, potentially simplifying expressions and eliminating redundant computations. In contrast, traditional calls treat functions as black boxes, limiting interprocedural optimizations to summary-based approximations.
From a behavioral perspective, traditional function calls maintain modularity by encapsulating implementation details, preserving abstraction and facilitating code reuse without exposing internal logic. Inline expansion, however, integrates the function body directly, which can break this abstraction but unlocks aggressive local optimizations like dead code elimination tailored to the caller's context. While calls support polymorphism and dynamic dispatch more seamlessly, inlining demands static resolution and may increase code size, trading modularity for potential efficiency gains.[3]
Consider a loop iterating 1000 times and invoking a small function that performs a simple arithmetic operation, such as adding two constants. With traditional calls, each iteration incurs the full prologue/epilogue overhead, including branch instructions that may cause pipeline stalls or mispredictions. Inlining replaces these with the function's body, eliminating the overhead entirely and allowing the compiler to unroll the loop or fold constants into a shorter instruction sequence; in benchmark programs, inlining has been observed to eliminate up to 59% of dynamic function calls and to yield measurable speedups in execution time.[2]
Versus Macros
Inline expansion and macro expansion both aim to eliminate function call overhead by substituting code at the call site, but they differ fundamentally in their mechanisms and implications. Macros operate through textual substitution performed by the preprocessor, which replaces macro invocations with their definitions before compilation begins, without any semantic analysis or type checking. This can introduce errors, such as unintended expansions or violations of scoping rules, because the preprocessor treats the code as plain text. In contrast, inline expansion is a compiler-level optimization that occurs after parsing and type checking, treating the inline function as a semantic entity that can be integrated with subsequent optimization passes, such as constant propagation or dead code elimination.[29][1]
Inline expansion offers several advantages over macros, particularly in terms of safety and reliability. Because inlining happens post-type-checking, it enforces type safety, catching mismatches or invalid operations that macros might overlook due to their blind substitution. For instance, macros can lead to multiple evaluations of arguments with side effects, altering program behavior unexpectedly; a classic example is the macro #define MAX(a, b) ((a) > (b) ? (a) : (b)), where passing x++ as a increments x twice, potentially yielding incorrect results. Inline functions evaluate arguments exactly once, mirroring the semantics of a regular function call while avoiding such pitfalls. Additionally, inline code benefits from the compiler's optimizer, enabling transformations like instruction scheduling that macros, being pre-optimized text, cannot leverage as effectively.[29][1]
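A short C/C++ example of the pitfall described above; the macro expands its arguments textually, so a side-effecting argument may be evaluated twice, while the inline function evaluates it exactly once. The names are illustrative.

#include <stdio.h>

#define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b))

static inline int max_inline(int a, int b) {
    return a > b ? a : b;
}

int main(void) {
    int x = 5, y = 3;
    int m1 = MAX_MACRO(x++, y);   // expands to ((x++) > (y) ? (x++) : (y)):
                                  // x is incremented twice here because x > y
    x = 5;
    int m2 = max_inline(x++, y);  // argument evaluated once; x incremented once
    printf("%d %d\n", m1, m2);    // prints "6 5": the macro returned the
                                  // already-incremented x, the function did not
    return 0;
}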
Despite these benefits, inline expansion has drawbacks relative to macros in certain scenarios. Macros are simpler to implement and always result in substitution without relying on compiler heuristics, ensuring consistent inlining regardless of function complexity or optimization flags. Inline functions, however, are merely a suggestion to the compiler, which may reject inlining for large functions to avoid code bloat or excessive compilation time, reverting to traditional calls with associated overhead. This heuristic-based decision can lead to unpredictable performance, whereas macros guarantee expansion but at the cost of potential debugging challenges and lack of type safety.[1]
Historically, inline functions emerged as a more sophisticated alternative to macros, initially mimicking their substitution behavior in early C++ compilers to address performance needs in object-oriented code. Standardized in C++98, the inline keyword allowed function definitions in headers without multiple-definition errors, evolving from macro-like textual replacement into a type-aware mechanism that integrates with modern optimizers. This shift reduced reliance on error-prone macros for performance-critical code, promoting safer practices while retaining the core goal of overhead elimination.[1]
Language and Compiler Support
C and C++
In C, the inline keyword was introduced in the C99 standard as a function specifier suggesting that the compiler substitute the function body at call sites for potential performance gains, though the compiler is not obligated to perform the substitution; for inline functions with external linkage, C99 additionally requires that an external definition be available in some translation unit in case the compiler emits an ordinary call.[30] The keyword can appear multiple times in declarations with consistent behavior, and it is particularly useful for small functions to reduce call overhead without altering linkage rules for non-static functions.[31] In GCC, the __attribute__((always_inline)) extension forces inlining even without optimization flags, overriding default heuristics by treating the function as if optimization were enabled solely for inlining purposes.[7]
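A brief sketch of these hints with illustrative function names; the code is valid C99, and the attribute form also applies in C++ with GCC and Clang. Whether the plain inline function is actually expanded remains up to the compiler.

// Suggestion only: the compiler may or may not expand calls to this function.
static inline int square(int x) {
    return x * x;
}

// GCC/Clang extension: inlining is attempted even without optimization flags,
// and GCC diagnoses an error if it cannot honor the request.
static inline int cube(int x) __attribute__((always_inline));
static inline int cube(int x) {
    return x * x * x;
}

int area_and_volume(int n) {
    return square(n) + cube(n);   // both calls are candidates for substitution
}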
C++ extends the inline keyword's semantics to handle templates and linkage more flexibly, allowing function definitions in headers without violating the One Definition Rule (ODR) by permitting multiple identical definitions across translation units, which the linker resolves to a single instance.[32] For templates, inline facilitates definition in header files to enable instantiation at call sites, avoiding separate compilation units while maintaining external linkage for non-static functions; this is essential for template-heavy codebases to ensure consistent behavior.[33] Compilers like GCC and Clang apply heuristics based on function size, call frequency, and optimization level (-O2 or higher) to decide inlining, with options like GCC's -finline-functions (enabled by default at higher optimization levels) encouraging aggressive substitution during optimized builds.[14] Microsoft's Visual C++ (MSVC) uses /Ob flags to control inlining: /Ob0 disables it, /Ob1 enables only explicitly inline functions, and /Ob2 (the default under /O1 and /O2) allows automatic inlining based on heuristics such as function complexity and hotness.[34]
Best practices in C and C++ recommend applying inline to small, frequently called ("hot") functions, such as accessors or simple computations, to minimize overhead while avoiding large functions that could bloat code size; for instance, in C++, inlining template methods in class definitions within headers ensures efficient instantiation without ODR issues.[1] A key pitfall in C++ is mishandling non-inline definitions in headers, which can lead to ODR violations if multiple translation units define the same entity differently, resulting in undefined behavior at link time—mitigated by consistently using inline for header-defined functions.[32] Clang supports similar attributes like [[clang::always_inline]] for forced inlining, aligning with GCC extensions for portability in mixed-toolchain environments.[35]
Link-time optimization (LTO) in GCC, introduced in version 4.5 in 2010, enables cross-file inlining by analyzing the entire program during linking via the -flto flag, allowing optimization of functions defined in separate compilation units that standard per-file inlining cannot reach.[36] This extension complements inline hints by applying whole-program heuristics, such as call graph analysis, to inline across boundaries while preserving ODR compliance in C++.[14]
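A minimal C++ sketch of the cross-file case LTO addresses (file and function names are illustrative): compiled per-file, the definition of scale() is invisible while compiling main.cpp, so the call cannot be expanded; compiled and linked with -flto, the link-time optimizer sees both bodies and may inline it.

// util.cpp
int scale(int x) {         // defined in its own translation unit, no inline hint
    return x * 3;
}

// main.cpp
int scale(int x);          // only a declaration is visible in this file

int main() {
    // With ordinary per-file compilation (e.g., g++ -O2 -c) this remains a
    // call; with -flto on both compile and link steps, the optimizer can
    // substitute the body across the translation-unit boundary.
    return scale(14);
}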
Java and JVM Languages
In the Java Virtual Machine (JVM), inline expansion is performed dynamically by just-in-time (JIT) compilers rather than at compile time, allowing runtime profiling to guide optimization decisions. The HotSpot JVM, introduced as the default in JDK 1.3 in 2000, employs two primary JIT compilers: the client compiler (C1) for quick startup with basic optimizations and the server compiler (C2) for aggressive optimizations in long-running applications.[37] Inlining decisions rely on invocation counters and type profiles; for instance, monomorphic call sites—where only one receiver type is observed—are prioritized for inlining to eliminate virtual dispatch overhead.[37]
Java lacks an explicit inline keyword, leaving inlining entirely to the JIT compiler, which automatically targets small methods to minimize code size explosion. The default threshold for inlining non-frequent methods is under 35 bytes of bytecode (-XX:MaxInlineSize=35), while frequently executed (hot) methods can be inlined if under 325 bytes (-XX:FreqInlineSize=325), based on invocation rates and profiling.[37][38] Virtual methods are handled through devirtualization, where the JIT replaces dynamic dispatch with direct calls if profiling confirms a single implementation, enabling inlining across class hierarchies.[38]
In JVM-based languages like Scala and Kotlin, developers can influence inlining for performance-critical code, particularly in functional paradigms. Scala's @inline annotation serves as a hint to the optimizer, encouraging the compiler to substitute the method body at call sites, which is useful for small utilities but requires enabling the optimizer flag.[39] Kotlin provides the inline modifier for functions, especially those accepting lambdas, which inlines both the function and lambda bodies to avoid object allocation and virtual calls, yielding benefits in higher-order functions common to functional constructs.[40]
Advanced JVM optimizations, such as escape analysis, often follow inlining to further enhance efficiency by enabling stack allocation or scalar replacement for non-escaping objects. After inlining exposes object lifetimes, the JIT can eliminate heap allocations for local objects, reducing garbage collection pressure.[41] In GraalVM, an alternative JIT compiler, aggressive inlining extends these benefits through partial escape analysis, particularly for abstracted code like streams and lambdas.[42]
Benchmarks demonstrate substantial performance gains from inlining in server applications; for example, feedback-directed object inlining in HotSpot yielded average peak improvements of 9% on SPECjvm98, with maximum speedups reaching 51% in compute-intensive workloads.[43] These optimizations are crucial for scaling JVM applications, though they are bounded by code cache limits to prevent excessive growth.[37]
Rust and Other Systems Languages
In Rust, inline expansion is facilitated through the #[inline] attribute, which serves as a hint to the compiler to consider replacing a function call with the function's body, though the final decision rests with the LLVM backend's heuristics based on factors like function size, call frequency, and optimization level.[44][45] These heuristics aim to balance runtime performance gains against increased code size and compile time. Since the release of Rust 1.0 in 2015, inline expansion has been integral to the language's philosophy of zero-cost abstractions, enabling high-level constructs like generics and traits to compile to efficient machine code without runtime overhead.[46]
Inline expansion in Rust integrates seamlessly with the language's memory safety guarantees, as the borrow checker verifies ownership and borrowing invariants on the mid-level intermediate representation (MIR) before optimizations like inlining occur during code generation. For generic functions and traits, monomorphization, Rust's process of generating type-specific copies of generic code, effectively inlines the implementations at each use site, allowing the borrow checker to enforce safety on concrete types while preserving the invariants established in the source code. This ensures that abstractions remain safe and performant; for example, a generic function such as fn process<T: AsRef<str>>(item: T) can be monomorphized for specific types such as String or &str, with the borrow checker confirming no aliasing violations in the expanded form.[47][48]
In contrast to Rust's attribute-based hints, other systems languages like Go employ automatic inline expansion in their gc compiler, where functions are inlined if their intermediate representation size does not exceed a budget of approximately 80 nodes, prioritizing small, frequently called routines to minimize call overhead without explicit programmer intervention.[49] The D programming language offers more direct control via the pragma(inline, true) directive, which instructs the compiler to attempt inlining a function or block, or pragma(inline, false) to discourage it, differing from Rust's non-guaranteed hints by providing a stronger but still heuristic-driven mechanism.[50] These approaches highlight Rust's emphasis on explicitness to aid LLVM's decisions while maintaining compatibility with zero-cost principles.
Modern Rust extensions, such as link-time optimization (LTO) enabled via Cargo's profile settings (e.g., lto = true for fat LTO), facilitate cross-crate inline expansion by allowing whole-program analysis across dependencies, which can inline non-generic functions without attributes and enhance optimizations like dead code elimination.[51] Benchmarks from the rustc performance suite demonstrate that such inlining contributes to measurable runtime improvements, with studies showing Rust code achieving near-C-level speeds in microbenchmarks through these optimizations, underscoring inlining's role in upholding compile-time safety without compromising efficiency.[45][52]