Lambda lifting
Lambda lifting is a program transformation technique in functional programming and compiler design that restructures programs by converting nested, locally defined functions (lambda abstractions) into top-level, globally defined functions. The free variables of each lifted function are passed explicitly as additional formal parameters, eliminating closures and dependencies on enclosing scopes.[1]
Developed in the mid-1980s by researchers including Lennart Augustsson, John Hughes, Thomas Johnsson, and Simon Peyton Jones, lambda lifting originated as a method to facilitate the compilation of lazy functional languages by flattening block-structured code into a set of recursive equations suitable for graph reduction and supercombinator-based execution.[2] The technique was first detailed in Johnsson's 1985 paper, where it was applied to the Lazy ML compiler to transform programs with local definitions into global rewrite rules, enabling more efficient evaluation strategies.[1]
The primary goal of lambda lifting is to simplify program structure for optimization, separate compilation, and backend code generation by removing the need for runtime closure allocation in many cases, thereby trading heap usage for stack-based parameter passing and reducing overhead in functional language implementations.[2] It achieves this through a multi-step process: first, analyzing the program's call graph to identify free variables and mutual dependencies; second, lifting each local function to the global scope as a recursive equation; and third, updating all call sites to include the newly required parameters, often propagating free variables across mutually recursive components.[1] Early algorithms, such as Johnsson's, operated in cubic time due to exhaustive dependency analysis, but subsequent refinements, including partitioning into strongly connected components, have reduced complexity to quadratic time, which is provably optimal with a lower bound of Ω(n²) for general cases.[2]
Lambda lifting is closely related to but distinct from closure conversion, another technique for handling free variables; while closure conversion packages free variables into explicit data structures (closures) for heap allocation, lambda lifting passes them as values to callee functions, avoiding closure creation when possible and supporting scope-insensitive transformations.[2] It is widely applied in compilers for functional languages, including GHC for Haskell—where it optimizes local function handling by trading heap allocations for static parameters—and in implementations of Scheme, Lisp dialects, and lazy evaluators to enable fully lazy code generation and supercombinator reduction.[3][4] Beyond compilation, the transformation appears in partial evaluators and program analyzers to support modular code analysis and defunctionalization.[2]
Fundamentals
Definition and Motivation
In lambda calculus, the theoretical foundation for functional programming, a variable in a lambda expression is either bound—formally introduced by a lambda abstraction—or free, meaning it is not bound within the scope of that expression and refers to a variable from an enclosing context.[5] Lambda lifting is a program transformation that restructures nested lambda expressions, which are anonymous functions potentially containing free variables, into top-level functions by explicitly adding those free variables as additional parameters.[6] This process eliminates local function definitions, converting them into global ones that can serve as rewrite rules in a compiler.[6]
The primary motivation for lambda lifting arises in the compilation of functional languages, where local functions with free variables complicate code generation and execution. By lifting these functions to the top level and parameterizing free variables, the transformation avoids the need to create closures—data structures that bundle functions with their environments—thus preventing runtime overhead from heap allocations and environment captures.[6] This is particularly valuable for lazy evaluation strategies, as in early languages like Lazy ML, where it facilitates efficient graph reduction on specialized machines like the G-machine by treating functions as global constants.[6]
Key benefits include enhanced code modularity through global function definitions, which support separate compilation and simplify static analysis in compilers.[7] It also aids tail-call optimization by enabling recursive functions to be recognized and optimized as loops without closure interference.[2] In languages like Scheme, lambda lifting makes higher-order functions practical for optimization by eliminating non-local variables that would otherwise require heap allocation.[7] Similarly, in Haskell's GHC compiler, it reduces excess closure allocations during the STG (Spineless Tagless G-machine) phase, improving performance in scenarios with frequent small inner functions, such as short list traversals, though selective application balances against potential stack growth from extra parameters.[3]
Historical Development
Lambda lifting originated in the early 1980s as a compilation technique for functional programming languages, drawing from foundational efforts to implement lambda calculus efficiently. Peter Landin's 1964 SECD machine provided an early abstract model for evaluating lambda expressions through stack-based reduction, influencing later optimizations for handling local functions and free variables in functional code.[8]
The transformation was developed independently by John Hughes in 1983; the term "lambda lifting" itself was coined by Thomas Johnsson in his 1985 paper, written during his work on compiler optimizations at Chalmers University of Technology.[4][6]
Johnsson formalized the technique in 1985, presenting lambda lifting as a program transformation that converts block-structured functional programs with local definitions into sets of recursive equations, enabling graph reduction on stock hardware without relying on specialized machinery. This approach explicitly passes free variables as parameters to lifted functions, avoiding the overhead of implicit closures and facilitating supercombinator-based execution. Johnsson's work built on prior graph reduction methods, such as David Turner's 1979 compilation of SASL to S and K combinators, and was particularly applied in early Scheme and Lisp-inspired compilers.[6]
During the late 1980s and 1990s, lambda lifting evolved with refinements to better handle recursive functions and mutual recursion, incorporating contributions from key figures including Lennart Augustsson, John Hughes, Thomas Johnsson, and Simon Peyton Jones, who adapted it for lazy evaluation systems.[9] It was adopted in compilers for Standard ML, where implementations like Edinburgh ML used it to extract closure bodies and reduce environment passing costs, and in Scheme partial evaluators for improved code generation.[10][5]
A major milestone came with its integration into the Glasgow Haskell Compiler (GHC) in the early 1990s, where Peyton Jones incorporated lambda lifting into the Spineless Tagless G-machine for optimizing higher-order functions in Haskell.[11] Post-2010, the technique has seen renewed application in compiling functional languages to WebAssembly, aiding optimization of anonymous functions and closures in resource-constrained environments like web browsers.[12]
Implementation in Programming Languages
Core Algorithm
The core algorithm for lambda lifting transforms nested lambda expressions in functional programs into top-level function definitions by explicitly passing captured free variables as parameters, enabling efficient compilation to first-order code. This procedure is typically implemented as a pass over the abstract syntax tree (AST) in a compiler or interpreter for languages like Scheme or Haskell. The algorithm ensures semantic preservation while facilitating optimizations such as closure elimination.
The process begins with identifying nested lambda expressions that capture free variables—those referenced in the lambda body but bound outside it—via a depth-first traversal of the AST. Next, variables are renamed uniquely across the program to prevent capture or shadowing conflicts during transformation, often using a systematic fresh-name generation scheme. Each identified lambda is then lifted to the top-level scope: a new function definition is created with the original parameters augmented by the free variables as additional formal parameters, and the lambda body is adjusted accordingly. Finally, the original nested lambda site is replaced by a function application passing the free variable values as arguments to the lifted function.
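The free-variable identification step described above can be sketched as a short recursive analysis. The following Python sketch uses a tuple-encoded lambda-calculus AST — `('var', x)`, `('lam', x, body)`, `('app', f, a)` — which is an illustrative assumption, not a fixed representation:

```python
def free_vars(expr, bound=frozenset()):
    """Return the set of variables free in expr, given the bound set."""
    tag = expr[0]
    if tag == 'var':
        # A variable is free unless some enclosing binder introduced it.
        return set() if expr[1] in bound else {expr[1]}
    if tag == 'lam':
        # The abstraction's parameter is bound within its body.
        return free_vars(expr[2], bound | {expr[1]})
    if tag == 'app':
        return free_vars(expr[1], bound) | free_vars(expr[2], bound)
    raise ValueError(f'unknown node: {tag}')

# \y -> x y  has exactly one free variable, x:
assert free_vars(('lam', 'y', ('app', ('var', 'x'), ('var', 'y')))) == {'x'}
```

A lifting pass runs this analysis at each nested lambda to decide which variables must become extra parameters.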
Handling recursion requires special care to preserve mutual dependencies. For mutually recursive functions, the algorithm groups them into strongly connected components (SCCs) based on the call graph and lifts them collectively, adding shared free variables as parameters to all functions in the group and establishing recursive links through the top-level definitions; alternatively, fixed-point combinators like the Y combinator can be introduced for non-mutual cases to encode recursion without altering the call structure.
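The SCC grouping step can be sketched with a standard strongly-connected-components algorithm over the call graph. This Python sketch uses recursive Tarjan and assumes the call graph is given as a dict from caller name to the set of callees:

```python
def sccs(graph):
    """Return the strongly connected components of a call graph."""
    index, low, on_stack, stack, out = {}, {}, set(), [], []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            out.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return out

# f and g call each other, so they must be lifted together; h is alone:
calls = {'f': {'g'}, 'g': {'f'}, 'h': {'f'}}
assert {frozenset(c) for c in sccs(calls)} == {frozenset({'f', 'g'}), frozenset({'h'})}
```

Each resulting component is then lifted as a unit, with the union of the group's free variables added to every member's parameter list.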
The following high-level pseudocode outlines the transformation in a recursive descent style, assuming an AST representation where expressions include lambdas with bound variables and bodies:
function lambdaLift(expr):
    if expr is Lambda(boundVars, body) and freeVars(body) is non-empty:
        // Step 1 & 2: identify free variables and rename to avoid capture
        renamedBody = renameVars(body, generateFreshNames())
        allParams = boundVars + freeVars(renamedBody)
        // Step 3: lift to the top level
        liftedName = generateGlobalName()
        topLevelDefs[liftedName] = Lambda(allParams, renamedBody)
        // Step 4: replace with a call (handling recursion via SCCs if mutual)
        if isRecursiveContext(expr):
            handleMutualRecursion(liftedName, currentSCC())
        freeArgs = [lookupVar(v) for v in freeVars(renamedBody)]
        return Apply(Var(liftedName), freeArgs)
    else:
        // Recurse on subexpressions
        for each child in childrenOf(expr):
            replace child with lambdaLift(child)
        return expr

// Post-process: for recursion, coalesce SCCs and add mutual links
function handleMutualRecursion(name, scc):
    sharedFreeVars = unionFreeVars(scc.functions)   // free variables of the whole group
    for func in scc.functions:
        func.params += sharedFreeVars
    // Establish recursive equations or a fixed point if needed
Refined implementations of this algorithm operate in quadratic time O(n²), where n is the program size, due to call graph analysis and SCC computation, which is provably optimal.[2]
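The pseudocode above can be made concrete as a runnable miniature for a tiny lambda calculus with numbers and addition. The node encodings, the `$g` naming scheme, and the small evaluator (included only to check that lifting preserves meaning) are all illustrative assumptions:

```python
top_level = {}                      # liftedName -> closed lambda term
fresh = iter(range(1, 10**6))       # supply of fresh global names

def free_vars(e, bound=frozenset()):
    tag = e[0]
    if tag in ('num', 'glob'):
        return set()
    if tag == 'var':
        return set() if e[1] in bound else {e[1]}
    if tag == 'lam':
        return free_vars(e[2], bound | {e[1]})
    return free_vars(e[1], bound) | free_vars(e[2], bound)  # 'app', 'add'

def lift(e):
    tag = e[0]
    if tag in ('num', 'var', 'glob'):
        return e
    if tag in ('app', 'add'):
        return (tag, lift(e[1]), lift(e[2]))
    # tag == 'lam': lift the body bottom-up, then lift this abstraction
    body = lift(e[2])
    fvs = sorted(free_vars(('lam', e[1], body)))
    name = f'$g{next(fresh)}'
    lam = ('lam', e[1], body)
    for v in reversed(fvs):         # free variables become outer binders
        lam = ('lam', v, lam)
    top_level[name] = lam
    call = ('glob', name)
    for v in fvs:                   # call site passes them back in
        call = ('app', call, ('var', v))
    return call

def evaluate(e, env):
    tag = e[0]
    if tag == 'num':  return e[1]
    if tag == 'var':  return env[e[1]]
    if tag == 'glob': return evaluate(top_level[e[1]], {})
    if tag == 'add':  return evaluate(e[1], env) + evaluate(e[2], env)
    if tag == 'lam':  return lambda v: evaluate(e[2], {**env, e[1]: v})
    return evaluate(e[1], env)(evaluate(e[2], env))          # 'app'

# (\x -> (\y -> x + y) 3) 10  evaluates to 13; the inner lambda has x free.
term = ('app',
        ('lam', 'x', ('app', ('lam', 'y', ('add', ('var', 'x'), ('var', 'y'))),
                      ('num', 3))),
        ('num', 10))
lifted = lift(term)
assert evaluate(term, {}) == evaluate(lifted, {}) == 13
assert all(not free_vars(d) for d in top_level.values())  # all defs are closed
```

The final assertion checks the defining property of the transformation: every lifted definition is a closed term, so no closure environment is ever needed.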
Practical Example
To illustrate lambda lifting, consider a simple functional program in Haskell-like syntax where a nested lambda captures a free variable. The original code defines a function f that takes x and uses an inner function g to add its argument y to x * x.[4]
let f = \x -> let g = \y -> x * x + y in (g 3 + g 4) in f 6
Here, the inner lambda \y -> x * x + y has x as a free variable, which is captured from the enclosing scope. In lazy languages, lambda lifting is often combined with let-floating to share computations of free variable expressions like x * x, avoiding redundant evaluations.[4]
The transformation proceeds step by step. First, identify all free variables in the inner lambda: x is free with respect to g. To preserve laziness, lift the common subexpression x * x into a shared binding v. Next, lift g to a top-level supercombinator by adding the free variables as explicit parameters, yielding $g v y = v + y. Then, replace the original g definition with an application of the lifted function, sharing v: g = $g v where v = x * x. Finally, introduce a main entry point if needed. The result is a program without free variables, where all dependencies are passed explicitly.[4]
The lifted version appears as follows:
$g v y = v + y
f x = let v = x * x in let g = $g v in (g 3 + g 4)
$main = f 6
In the before-and-after comparison, the original relies on implicit closure capture for x, potentially requiring runtime environment allocation, while the lifted form passes x explicitly (via shared v) to $g, enabling direct function calls without closures and improving efficiency in compilation to machine code.[4]
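The before-and-after behavior can be checked by transcribing both forms to Python (a sketch; `g_lifted` plays the role of $g, and the function names are illustrative):

```python
def g_lifted(v, y):
    """Lifted supercombinator: the shared value v is an explicit parameter."""
    return v + y

def f_original(x):
    def g(y):
        return x * x + y          # closure: x captured implicitly
    return g(3) + g(4)

def f_lifted(x):
    v = x * x                     # shared binding, computed once
    return g_lifted(v, 3) + g_lifted(v, 4)

assert f_original(6) == f_lifted(6) == 79
```

Both versions agree, but the lifted one involves only plain calls to a top-level function, with no environment capture.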
This technique adapts naturally to other languages. In Scheme, lambda lifting often involves moving locally defined procedures to the top level or enclosing lambda, adding free variables as arguments; for instance, a recursive loop helper in a reverse function is lifted by defining it outside the inner scope and passing captured variables explicitly.[13] In modern JavaScript with arrow functions, a similar transformation converts a nested arrow like const g = y => x * x + y; into a top-level const $g = (x, y) => x * x + y; with explicit x passing, though JS implementations may still use closures for convenience unless optimized.[4]
Comparison to Closures
Closures are runtime data structures that pair a function with a heap-allocated environment capturing its free variables, allowing the function to access non-local variables from its defining scope even after the enclosing context has ended.[14] In contrast, lambda lifting performs a compile-time transformation that eliminates the need for such closures by converting free variables into explicit static parameters passed at call sites, resulting in top-level functions without free variables.[15] This static approach binds variables at compile time rather than dynamically at runtime, as closures do through environment lookup.[14]
The primary differences lie in their handling of free variables: lambda lifting resolves them through additional function arguments, enabling optimizations like inlining and register allocation, while closures maintain dynamic binding via heap environments, supporting higher flexibility for first-class functions that can be returned or passed around.[16] Lambda lifting thus prioritizes performance gains in speed and space by avoiding runtime overhead, such as heap allocation and garbage collection for environments, whereas closures offer greater expressiveness at the cost of increased memory pressure.[15]
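The contrast can be made concrete in Python, where nested functions naturally form closures (the names `make_adder` and `adder` are illustrative):

```python
def make_adder(n):
    """Closure version: the returned function's environment captures n."""
    return lambda x: x + n

def adder(n, x):
    """Lifted version: n is an explicit parameter, so no environment is
    captured, at the cost of passing n at every call site."""
    return x + n

add5 = make_adder(5)
assert add5(2) == adder(5, 2) == 7
```

The closure remains a first-class value that can be stored and passed around with `n` baked in, whereas the lifted function requires every caller to supply `n` itself.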
Trade-offs include added complexity from extra parameters in lifted code, which can increase stack usage and complicate control flow, but this is offset by enabling aggressive optimizations that closures hinder due to their opaque environments.[3] Closures, conversely, introduce garbage collection overhead from frequent allocations, particularly in loops or recursive calls, potentially leading to performance degradation in allocation-heavy scenarios.[15]
Lambda lifting is commonly applied in functional language compilers for optimization, such as GHC's STG pipeline, where selective lifting reduces allocations by up to 18% in benchmarks like the N-queens solver.[15] Closures, however, are favored in dynamic languages like Python, where they enable flexible nested functions that capture enclosing scope variables without compile-time parameterization.[17]
Lambda Lift Operation
In pure lambda calculus, the lambda lift operation is a term rewriting rule that transforms a nested lambda abstraction by explicitly binding its free variables as additional formal parameters, thereby promoting it toward a top-level, closed form.[18] Formally, for a lambda term \lambda x . M where x is the bound variable and the set of free variables in M (excluding x) is F = \{y_1, \dots, y_n\}, the lift operation rewrites it as \lambda y_1 \dots y_n x . M.[19] This addition of parameters for F ensures that the resulting abstraction has no free variables outside the new binders, preserving the semantics of the original term up to \beta-equivalence when the free variables are appropriately applied.[18]
The operation applies specifically to anonymous lambda abstractions, which lack explicit names or let-bindings, by directly introducing the free variables as outermost binders without requiring fresh variable names beyond those already present in F.[19] In this context, free variables refer to those not bound by any enclosing abstraction in the term.[20] For named abstractions—those arising in extensions of lambda calculus with named functions or recursive bindings—the lift propagates existing names upward by incorporating them into the parameter list while adjusting inner scopes to avoid capture.[18] This propagation maintains the binding structure, ensuring that the lifted term remains equivalent under substitution.
To illustrate the notation, consider the term \lambda y . ((\lambda x . x y) z), where the body M = ((\lambda x . x y) z) has free variable z (with y bound by the outer abstraction). The lift operation yields \lambda z y . ((\lambda x . x y) z).[19] Here, z is added as the outermost parameter, closing the term while preserving the application structure. This rewriting rule forms the atomic step in broader lambda-lifting transformations, emphasizing the shift from implicit free-variable capture to explicit parameterization.[18]
The lambda-lift transformation is a recursive process applied to lambda terms that systematically eliminates free variables from inner abstractions by converting them into additional bound parameters, thereby restructuring nested functions into a form suitable for global definition. This transformation operates bottom-up on the term structure, first recursively lifting all subterms (inner lambdas and their applications) before processing the enclosing abstraction or application. For an application term of the form (\lambda x . M) N, the process lifts M and N independently to obtain M' and N', then forms the lifted application M' N'; subsequently, if the outer abstraction requires lifting due to free variables in the lifted body, additional parameters are introduced.[21]
The core recursive rule for abstractions formalizes this as follows: for a term \lambda x . M, first compute the lifted body M' = \lambda-lift(M); then identify the set F = \{f_1, \dots, f_k\} of variables that remain free in \lambda x . M', choose fresh parameter names y_1, \dots, y_k for them, and form \lambda-lift(\lambda x . M) = \lambda y_1 \dots y_k x . M'[f_i := y_i], so that, as in the single lift step, the former free variables become the outermost parameters. The recursion ensures inner terms are processed first; for non-abstraction terms such as variables, or applications whose subterms require no lift, the transformation is the identity or simply preserves structure.[22]
The transformation preserves the semantics of the original term, producing a result that is \beta-equivalent, meaning it evaluates to the same normal form under \beta-reduction. Additionally, it is confluent, ensuring that the order of lifting subterms does not affect the final transformed term, which supports its use as a deterministic rewriting step in formal systems. These properties hold due to the careful handling of variable renaming and substitution, maintaining equivalence throughout the recursive traversal.[22][21]
Selection Criteria for Lifting
In lambda lifting, the primary criterion for selecting expressions to lift is the presence of free variables that are referenced across multiple scopes, as this necessitates explicit parameter passing to eliminate dependencies and enable global function definitions. Expressions with free variables shared in nested contexts are prioritized for lifting to reduce environmental dependencies. For instance, if a subterm's free variables are captured frequently in enclosing lambdas, lifting it can consolidate parameter propagation.[23]
In the context of lambda calculus, selection focuses on subterms exhibiting maximal free variable sets, as lifting these minimizes the overall parameter passing required across the term structure. This approach optimizes by targeting subterms where free variables dominate, reducing the need for auxiliary parameters in transformed equations. Static analysis techniques, such as constructing call graphs and dominator trees, aid in identifying optimal candidates by determining the minimal set of extraneous parameters for each function.[23]
The goals of these criteria are to minimize the number of parameters while preserving the program's semantic equivalence, as validated in transformations from local to recursive equations.[1]
Applications and Examples
Execution Model
After lambda lifting, functional programs are transformed into sets of global supercombinators or recursive equations, where evaluation proceeds through explicit application of these top-level functions to all required arguments, thereby replacing implicit environment captures with direct parameter passing. This eliminates the need for dynamic environment lookups during function invocation, reducing indirection and enabling more straightforward reduction strategies in the interpreter or virtual machine. In particular, the lifted terms support normal-order evaluation, where arguments are evaluated lazily as needed, often leveraging fixed-point combinators like the Y combinator to handle recursive definitions without altering their semantics.[24]
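The fixed-point approach mentioned above can be demonstrated directly in Python using the Z combinator, the strict (call-by-value) variant of the Y combinator; the names `Z` and `fact` are illustrative:

```python
# Z combinator: a strict fixed-point combinator. The eta-expansion
# (lambda v: x(x)(v)) delays evaluation so the definition terminates
# under call-by-value.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Recursion encoded without any self-reference in the function body:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

assert fact(5) == 120
```

This shows how a lifted, non-recursive equation can still express recursion purely through application, with no environment or named self-reference required.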
In interpreters designed for functional languages, such as the G-machine, lambda-lifted code is executed using stack-based mechanisms for parameter passing, where arguments are pushed onto an evaluation stack and unwound during function application. This approach facilitates efficient handling of tail recursion, as the interpreter can optimize tail calls into jumps without the overhead of constructing closures or managing additional stack frames for free variables. The absence of closure creation further supports seamless integration with graph reduction techniques, allowing shared subexpressions to be updated in place via pointer tagging and root updating.[24]
Performance benefits arise primarily from decreased runtime overhead, as lambda lifting avoids heap allocations for closures and minimizes garbage collection pressure in functional codebases. The technique contributes to performance improvements in implementations like Lazy ML by reducing runtime overhead through fewer intermediate structures and optimized combinator applications, such as those using S and K combinators for lazy argument propagation. These gains are particularly evident in recursive and higher-order functions, where explicit passing reduces the number of reduction steps compared to unlifted forms.[1][24]
Illustrative Cases
Lambda lifting finds particular utility in handling recursive functions, especially those involving mutual recursion, where local definitions depend on shared outer-scope variables. Consider a program with two mutually recursive functions, one referencing an outer variable a and the other referencing b:
let a = ... and b = ... in
letrec f = λx. ... a ... g ...
and g = λy. ... b ... f ...
in ... f ... g ...
After lambda lifting, the functions are promoted to global definitions with the free variables added as explicit parameters, preserving mutual recursion:
f b a x = ... a ... g a b ...
g a b y = ... b ... f b a ...
... let a = ... and b = ... in ... f b a ... g a b ...
This transformation ensures equivalence by substituting the captured free variables with the new parameters at call sites, maintaining the original semantics while eliminating closures; a proof sketch involves showing that the beta-reduction of the lifted applications matches the original nested reductions step-by-step.[1]
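A concrete, hypothetical instance of the schematic f/g pair can be transcribed to Python; the bodies (counting down and returning a or b) are invented for illustration, and what matters is how a and b are threaded through every mutually recursive call:

```python
def make_pair(a, b):
    """Original form: f and g are closures over the outer bindings a and b."""
    def f(x):
        return a if x <= 0 else g(x - 1)
    def g(y):
        return b if y <= 0 else f(y - 1)
    return f

# Lifted form: a and b become explicit parameters, threaded through the
# recursion exactly as in the lifted equations above.
def f_lifted(b, a, x):
    return a if x <= 0 else g_lifted(a, b, x - 1)

def g_lifted(a, b, y):
    return b if y <= 0 else f_lifted(b, a, y - 1)

a, b = 10, 20
orig_f = make_pair(a, b)
assert orig_f(4) == f_lifted(b, a, 4) == 10   # even depth bottoms out in f
assert orig_f(3) == f_lifted(b, a, 3) == 20   # odd depth bottoms out in g
```

Both versions agree on every input, illustrating the step-by-step correspondence of reductions claimed above.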
In handling let-expressions, lambda lifting converts local bindings into global function applications, effectively inlining free variables as arguments. For instance, the expression let x = 5 in let f y = x + y in f 3 transforms to a global function f x y = x + y applied as f 5 3. The lifted form computes the same result, as the parameter substitution directly replaces the free occurrence of x, with equivalence following from the alpha-equivalence of the bound variable and the original let-binding semantics. This approach extends to recursive letrec by lifting inner functions iteratively until no free variables remain.[1]
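The let-expression example translates directly to Python (a sketch; the function names are illustrative):

```python
def original():
    """let x = 5 in let f y = x + y in f 3 — x is free in the inner f."""
    x = 5
    def f(y):
        return x + y
    return f(3)

def f_lifted(x, y):
    """Lifted global function: f x y = x + y."""
    return x + y

# The lifted call site supplies x explicitly, as in f 5 3:
assert original() == f_lifted(5, 3) == 8
```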
For higher-order functions like fold, lambda lifting propagates outer parameters through recursive calls, enabling efficient compilation without closures. Take the right-fold function:
fun foldr (f, b, xs) = let fun walk nil = b | walk (x :: xs) = f (x, walk xs) in walk xs end
The lifted version introduces auxiliary global functions:
fun foldr (f, b, xs) = foldr_walk (f, b) xs
and foldr_walk (f, b) nil = b
| foldr_walk (f, b) (x :: xs) = f (x, foldr_walk (f, b) xs)
Here, f and b become explicit parameters to walk, ensuring all calls pass them, which preserves the fold's associative accumulation; equivalence is demonstrated by induction on the list length, where base and inductive cases mirror the original recursion. This pattern scales to more complex higher-order uses, such as polynomial evaluation via foldr, where nested locals are similarly lifted.
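The lifted SML foldr can be transcribed to Python as a sketch, with list slicing standing in for pattern matching on nil and x :: xs:

```python
def foldr_walk(f, b, xs):
    """Lifted helper: f and b are explicit parameters on every call."""
    if not xs:                      # nil case
        return b
    return f(xs[0], foldr_walk(f, b, xs[1:]))   # x :: xs case

def foldr(f, b, xs):
    return foldr_walk(f, b, xs)

assert foldr(lambda x, acc: x + acc, 0, [1, 2, 3]) == 6
assert foldr(lambda x, acc: [x] + acc, [], [1, 2, 3]) == [1, 2, 3]
```

The second assertion checks that the right-fold rebuilds the list in order, confirming that the explicit parameter threading preserves the fold's structure.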
Lambda Drop Operation
The lambda drop operation, commonly referred to as lambda-dropping, serves as the inverse of lambda lifting within the framework of lambda calculus. It transforms a set of top-level recursive equations—typically produced by lambda lifting—back into a block-structured program featuring nested lambdas and lexical scoping, thereby reintroducing local bindings and reducing explicit parameter passing for variables that are lexically visible. This process exploits the program's scope information to restore nesting without altering the overall semantics.[25]
The basic rule of lambda dropping involves identifying and eliminating redundant parameters in top-level functions that correspond to variables defined in an enclosing scope. Specifically, for a top-level abstraction \lambda F \, x . M, where F represents global variables passed as explicit parameters (lifted free variables), the operation inlines the function body as a nested lambda \lambda x . M, rendering F as free variables bound externally in the surrounding context. This rule applies iteratively, using def/use paths to determine which parameters can be dropped, ensuring that invocations no longer need to pass values already available lexically.[25]
Lambda dropping preserves the denotational semantics of the original equations while potentially introducing closures to capture free variables, which can affect memory allocation in implementations. It is particularly valuable in compiler intermediate representations for deoptimization or code generation, as it minimizes parameter arity—often reducing dozens of arguments to a handful—thereby enhancing both compile-time analysis and runtime performance in residual programs. Unlike lifting, which globalizes functions to eliminate free variables, dropping restores locality to facilitate optimizations like inlining or scope-based alias analysis.[25]
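A hypothetical before/after sketch in Python illustrates the effect: a lifted helper carries an extra parameter n on every recursive call, while the dropped form re-nests the helper so n is lexically visible and the parameter disappears (names are invented for illustration):

```python
def count_up_lifted(n, i):
    """Lifted form: n is an explicit parameter, repassed on every call."""
    return [] if i > n else [i] + count_up_lifted(n, i + 1)

def count_up_dropped(n):
    """Dropped form: the helper is re-nested, so n is free in walk and
    bound by the enclosing scope; the parameter has been dropped."""
    def walk(i):
        return [] if i > n else [i] + walk(i + 1)
    return walk(1)

assert count_up_lifted(3, 1) == count_up_dropped(3) == [1, 2, 3]
```

The dropped version has lower arity at every call site, at the cost of `walk` becoming a closure over `n`, which mirrors the memory-allocation trade-off described above.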
Abstraction Sinking
Abstraction sinking is a program transformation technique that relocates lambda abstractions from top-level or outer scopes inward into their specific application contexts, thereby localizing functions and reducing overall parameter lists in functional programs. This process complements lambda-dropping by restoring block-structured scoping after higher-order functions have been introduced, such as through lambda-lifting.[26]
The core process of abstraction sinking, often termed block sinking in the literature, identifies lambda expressions or mutually recursive function groups that are invoked exclusively within a single enclosing function. It then moves these abstractions into local blocks using static analysis tools like call graphs to detect usage patterns and dominator trees to pinpoint the innermost dominating scope.[26] Once relocated, the sunk abstractions can be inlined locally where unused parameters—those bound to visible variables—are identified and eliminated, streamlining the term without altering semantics.
Abstraction sinking is frequently applied as part of lambda-dropping, immediately after initial re-nesting but before full parameter elimination, to optimize deeply nested structures by minimizing global dependencies and improving lexical scoping.[26] For example, the term (\lambda x. f\, x)\, g can be sunk to f\, g by pushing the abstraction inward and inlining it at the application site, assuming x carries no free occurrences beyond this context. This not only shortens parameter chains but also enhances compile-time efficiency through reduced arity and localized computation.[26]
Parameter Dropping Techniques
Parameter dropping techniques form a critical optimization phase in lambda dropping, aimed at eliminating redundant formal parameters from functions derived from recursive equations to restore efficient block-structured programs. These methods identify parameters that are consistently bound to the same lexically visible variable across all invocations, allowing their removal without altering program semantics. By reversing the parameter-lifting step of lambda lifting, parameter dropping minimizes unnecessary argument passing, which can otherwise increase compilation and runtime overhead in functional programs.[26]
The process begins with a build phase, where parameter lists are constructed by tracking variable usage through dependency analysis. This involves creating call graphs and dependence graphs to map free variables and their paths in the program, identifying which parameters are redundant based on consistent bindings. Flow graphs and dominator trees further annotate variables with their defining functions, enabling the detection of parameters that do not vary across calls. For instance, if a parameter is always supplied with a value from the enclosing scope, it qualifies for dropping, as determined by def/use chains that trace variable origins. This phase ensures that only essential dependencies are preserved, often using scope graphs transformed into directed acyclic graphs (DAGs) to reveal hierarchical structures.[26][27]
In the drop phase, unused or redundant parameters are removed, with corresponding adjustments to function calls and definitions. Techniques such as eta-reduction simplify expressions by eliminating abstractions for parameters bound to fixed values, effectively uncurrying parameterless functions in higher-order contexts. Dead-code elimination complements this by pruning parameters with no references in the function body, further de-thunking optimized terms. Recursion is handled by localizing mutually recursive functions through block sinking, which collapses strongly connected components while preserving fixed points to maintain semantic equivalence. Calls are then updated to omit dropped arguments, ensuring the program regains lexical scoping without extraneous parameters.[26][27]
These techniques yield significant benefits, including reduced function arity that simplifies terms and lowers the cost of parameter passing in both compilation and execution. By minimizing unnecessary argument passing, parameter dropping enhances overall program efficiency, particularly in scenarios involving nested functions or higher-order combinators. This optimization is especially impactful when integrated with related transformations like abstraction sinking, which relocates blocks to support parameter reductions.[26]