
# Evaluation strategy

In a programming language, an evaluation strategy refers to the set of rules that govern the order and timing of evaluating expressions during execution. These strategies dictate when operands or arguments are computed, particularly in function applications, and play a crucial role in determining computational efficiency, memory usage, and whether a program terminates. The concept originated in the 1930s with Alonzo Church's lambda calculus, where reduction strategies such as normal order (outermost-leftmost) and applicative order (innermost) were formalized. These ideas influenced early programming languages: ALGOL 60 introduced call-by-name (a form of non-strict evaluation), while Lisp adopted eager evaluation. Later developments, including lazy evaluation in languages like SASL (1970s) and Haskell (1990), expanded the field. Evaluation strategies are especially prominent in functional languages, where expressions are treated as first-class entities and side effects are minimized or absent.

The two primary categories are eager evaluation (also known as strict evaluation) and lazy evaluation (non-strict evaluation). In eager evaluation, arguments to a function are fully computed before the function body is executed, following an applicative order from left to right; this approach is common in languages like C and Java, promoting predictable performance but potentially wasting resources on unused computations. Conversely, lazy evaluation delays the computation of an expression until its value is actually required, enabling features like infinite data structures and conditional execution without evaluating irrelevant branches; it is the default in languages such as Haskell and can be simulated in others via mechanisms like thunks. Parameter-passing mechanisms often align with these strategies, including call-by-value (eager evaluation of arguments before passing), call-by-reference (passing memory addresses for modification), call-by-name (reevaluating arguments on each use), and call-by-need (an optimized lazy variant that caches results after first evaluation).
The choice of strategy influences program behavior in lambda calculus and its extensions, where confluent reduction ensures that results are independent of order in pure settings, though efficiency varies. In functional logic programming, hybrid strategies combine lazy functional evaluation with non-deterministic search to balance completeness and performance.

## Introduction

### Definition and overview

In a programming language, an evaluation strategy specifies the rules governing the order and timing of expression evaluation, as well as the binding of arguments to formal parameters during computation. This encompasses decisions on whether and when subexpressions are reduced to values before or during function application. Evaluation strategies consist of two primary components: the evaluation order, which dictates when arguments to a function are computed (e.g., before or after entering the function body), and the binding mechanism, which determines how argument values or expressions are associated with and stored for parameters (e.g., by copying values or sharing references). Key terminology includes eager evaluation (also called strict evaluation), where arguments are fully evaluated prior to function application, aligning with applicative order; and lazy evaluation (non-strict evaluation), where evaluation is postponed until the argument's value is actually required, corresponding to normal order. Strict evaluation is the default in most imperative languages like C and Java, while non-strict evaluation appears in functional languages such as Haskell.

These strategies profoundly influence program behavior and performance. Eager evaluation can improve performance for computations where all arguments are needed, but it may lead to non-termination if an argument diverges (e.g., an infinite loop) even if unused, and it executes side effects immediately, which is critical in imperative languages for predictable sequencing. Lazy evaluation enhances efficiency by skipping unneeded computations, which is useful in declarative languages for infinite data structures or conditional expressions, and can ensure termination in cases where eager evaluation fails; however, it complicates side-effect management and debugging due to deferred execution. Overall, the choice balances resource use, correctness amid side effects, and expressiveness between imperative and declarative paradigms. To illustrate evaluation points, consider this pseudocode for a function call:
```
function add(x, y)
    return x + y

result = add(computeExp1(), computeExp2())
```

In eager (applicative-order) evaluation, `computeExp1()` and `computeExp2()` are evaluated to values before the `add` body executes. In lazy (normal-order) evaluation, these expressions are bound unevaluated and computed only when referenced in the addition.
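A minimal Python sketch of the two regimes, using zero-argument lambdas as thunks to stand in for unevaluated arguments (the names `compute_exp1`, `add_eager`, and `add_lazy` are illustrative, not from any particular language):

```python
eval_log = []

def compute_exp1():
    eval_log.append("exp1")
    return 2

def compute_exp2():
    eval_log.append("exp2")
    return 3

# Eager: argument expressions are reduced to values before the call.
def add_eager(x, y):
    return x + y

# Lazy (simulated): arguments arrive as thunks and are forced
# only when the body references them.
def add_lazy(x, y):
    return x() + y()

add_eager(compute_exp1(), compute_exp2())   # both evaluated before the body runs
assert eval_log == ["exp1", "exp2"]

eval_log.clear()
result = add_lazy(lambda: compute_exp1(), lambda: compute_exp2())
assert result == 5
assert eval_log == ["exp1", "exp2"]         # forced inside the body, on use
```

Here both regimes evaluate both arguments because the addition uses both; the distinction is *when* they are forced, which the log order only hints at. The conditional examples below show cases where laziness skips work entirely.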

### Historical development

The foundations of evaluation strategies in programming languages trace back to Alonzo Church's development of the lambda calculus in the 1930s, where normal-order evaluation (reducing the leftmost outermost redex first) emerged as a core reduction strategy for computing functions on symbolic expressions. This approach influenced subsequent language designs by prioritizing substitution without premature argument evaluation, laying the groundwork for non-strict paradigms.

In the mid-20th century, early imperative languages adopted strict evaluation models to align with hardware constraints and predictable execution. Fortran, released in 1957, implemented strict eager evaluation through its call-by-reference mechanism, evaluating arguments fully before passing addresses, which facilitated efficient numerical computations but limited expressiveness for unevaluated forms. Similarly, ALGOL 60 (1960) introduced applicative-order evaluation (call-by-value) for functions, where arguments are reduced to values prior to application, alongside call-by-name as a non-strict alternative; its 1958 predecessor had used a substitution rule akin to call-by-name. John McCarthy's seminal 1960 paper on Lisp established applicative order as the default evaluation strategy, diverging from the lambda calculus's normal order by evaluating function arguments before application to enable practical implementation on early computers, though it acknowledged the theoretical appeal of normal order for avoiding redundant computations. This choice influenced many subsequent languages, but terminology began to shift: McCarthy's precise distinction between applicative (eager) and normal (lazy-like) orders evolved in later designs, where "lazy evaluation" came to denote delayed computation with memoization, distinct from pure substitution.

Key advancements in non-strict strategies appeared in the 1970s, with lazy evaluation proposed for Lisp dialects through techniques like delayed evaluation and indefinite data structures, as explored in works by Friedman and Wise and by Henderson and Morris, enabling infinite lists without immediate computation.
Icon, developed in 1977, introduced call-by-need elements via its goal-directed evaluation and generators, which compute values on demand during expression searches, bridging imperative and functional styles. The rise of functional programming paradigms in the 1980s amplified non-strict strategies for modularity and expressiveness. David Turner's Miranda (1985) adopted lazy evaluation as standard, treating functions as first-class citizens and delaying reductions until needed, which inspired broader adoption. This culminated in the 1990s with Haskell's release (version 1.0 in 1990), which formalized non-strict semantics, implemented via call-by-need, to support pure functional programming, equational reasoning, and efficient handling of infinite data structures.

## Illustrative Examples

### Basic evaluation example

To illustrate the impact of evaluation strategies on execution, consider a simple function that increments its argument by one, defined in abstract syntax as:

```
inc(x) = x + 1
```

This is applied to a conditional expression as the argument:

```
inc(if cond then expensive() else 0)
```
Here, `cond` evaluates to false, `expensive()` represents a computationally intensive operation (such as a recursive procedure that could loop indefinitely if invoked), and the `if` construct selects between the branches based on `cond`. Under strict evaluation (also known as applicative order), the argument to `inc` is fully evaluated before the function body is entered. The evaluation proceeds as follows:
  1. Evaluate the operator inc, yielding the increment function.
  2. Evaluate the argument `if cond then expensive() else 0`: first evaluate `cond` (false). In languages that special-case conditionals, only the else branch (0) is then computed, so `expensive()` is avoided; but if `if` were treated as an ordinary function in pure applicative order, both branches would be evaluated before selection, forcing the unnecessary `then` branch.
  3. To highlight the difference starkly, adapt this to a case where the conditional sits inside the function, as in the classic example: define `test(x, y) = if (= x 0) then 0 else y` and call `test(0, expensive())`. In applicative order, `x` is evaluated to 0 and `y` to the result of `expensive()` before `test` is applied, triggering the full computation of `expensive()` (potentially an infinite loop) even though the result would be 0.
The outcome is that strict evaluation performs extraneous work, consuming resources on expensive() despite the conditional rendering it unnecessary. In contrast, under non-strict evaluation (normal order), arguments are not evaluated until their values are required during function application (detailed further in the Evaluation Orders section). The trace is:
  1. Substitute the unevaluated argument into inc: inc(if cond then expensive() else 0).
  2. Evaluate the body of inc: Compute (if cond then expensive() else 0) + 1.
  3. Evaluate the conditional: cond is false, so select the else branch (0) without ever invoking expensive().
  4. Compute 0 + 1, yielding 1.
Thus, non-strict evaluation avoids the unnecessary computation, producing the correct result 1 efficiently. This example demonstrates how evaluation strategy influences both performance and termination behavior in programs with conditional logic.

### Comparison table

| Strategy | Evaluation timing | Memory usage | Side-effect handling | Termination guarantees | Example languages |
|---|---|---|---|---|---|
| Call by value | Eager evaluation of arguments before the function call. | Copies the value, requiring additional space for the local copy. | No side effects on actual parameters; modifications affect only the local copy. | Strict: may fail to terminate if an argument expression diverges, even if the result would terminate without it. | C, Java (primitives) |
| Call by reference | Arguments evaluated before the call; passes a location. | Shares the memory location; no extra copy. | Allows modification of actual parameters through aliases. | Strict: similar to call by value; evaluates arguments eagerly. | Fortran, C++ (references), Ada (`in out`) |
| Call by copy-restore | Copy in before the call; copy out on return. | Temporary local copy during execution. | Updates actuals on return; potential issues with aliases. | Strict: evaluates arguments eagerly. | Ada (for some modes) |
| Call by sharing | Passes a reference to the object at call time. | Shares the reference; no deep copy for structures. | Modifications affect the shared object. | Strict: evaluates arguments before passing the reference. | Java (objects), Lisp |
| Call by name | Lazy: reevaluates the argument expression on each use. | Thunk storage for delayed evaluation; potential multiple computations. | Side effects occur on each reevaluation; can affect actuals. | Non-strict: terminates whenever a terminating reduction strategy does; avoids evaluating unused arguments. | ALGOL 60 |
| Call by need | Lazy: evaluates the argument once on first use and memoizes the result. | Stores the thunk and its result after first evaluation; avoids recomputation. | Typically read-only after evaluation; no repeated side effects. | Non-strict: like call by name, but more efficient; terminates whenever possible. | Haskell |
To illustrate differences, consider the following sketches of a function f(x) under each strategy.

Call by value (C-like):

```c
int f(int x) { return x * 2; }
int main() { int a = 3; f(a); }  // a remains 3
```

Call by name (Algol-like):

```
procedure f(expr x); ... end;
f(a + b);  // x reevaluated as a + b each time it is used
```

Call by need (Haskell-like):

```haskell
f x = x * 2
let y = undefined in f y  -- errors only if y is used; memoized after first use
```

[](https://www.cs.fsu.edu/~lacher/courses/COP4020/fall10/lectures/Subroutines.pdf)[](https://www.nyu.edu/classes/jcf/CSCI-GA.2110-001/slides/session4/Subprograms.pdf)
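The by-name/by-need distinction in the table can be sketched in Python by simulating both with thunks; the `memoize` helper is illustrative, not a standard API:

```python
evals = []

def make_arg():
    evals.append("eval")      # record each (re)computation of the argument
    return 21

# Call by name: the argument expression is re-evaluated on every use.
def double_by_name(x):
    return x() + x()

# Call by need: the thunk is forced once and the result is cached.
def memoize(thunk):
    cache = []
    def forced():
        if not cache:
            cache.append(thunk())
        return cache[0]
    return forced

def double_by_need(x):
    x = memoize(x)
    return x() + x()

assert double_by_name(lambda: make_arg()) == 42
assert evals == ["eval", "eval"]    # two uses, two evaluations

evals.clear()
assert double_by_need(lambda: make_arg()) == 42
assert evals == ["eval"]            # evaluated once, then memoized
```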

## Evaluation Orders

### Strict evaluation

Strict evaluation, also known as applicative-order evaluation, is a [strategy](/page/Strategy) in which all arguments to a [function](/page/Function) are fully evaluated to their normal form before the [function](/page/Function) body is applied. This approach ensures that subexpressions are reduced to values prior to any combination or substitution in [lambda calculus](/page/Lambda_calculus) terms.[](https://www.macs.hw.ac.uk/~greg/books/gjm.lambook88.pdf) In programming languages, it aligns with eager evaluation, where computations occur immediately upon encountering an expression.

Mechanically, strict evaluation proceeds by selecting the leftmost innermost redex for [reduction](/page/Reduction), evaluating [arguments](/page/Argument) from the inside out before applying the outer [function](/page/Function). This inside-out order contrasts with strategies that delay [argument](/page/Argument) computation, prioritizing complete resolution of inputs to avoid partial applications. In [lambda calculus](/page/Lambda_calculus), for an application $ M \, N $, $ N $ is reduced to a value before substituting into $ M $. A classic illustration is the identity [function](/page/Function) applied to a diverging [term](/page/Term):

$$
(\lambda x. x) \, \Omega
$$

where $ \Omega = (\lambda x. x \, x) \, (\lambda x. x \, x) $. Under strict evaluation, $ \Omega $ is fully evaluated first, resulting in non-termination as it enters an infinite self-application loop.[](https://www.cs.cornell.edu/courses/cs6110/2016sp/lectures/lecture04.pdf)

One key advantage of strict evaluation is its predictability, particularly for side effects, as all computations occur in a defined sequence, facilitating debugging and reasoning about program behavior. It is the default in most imperative languages, such as C and Java, where immediate evaluation supports efficient resource management for terminating expressions.[](https://www.macs.hw.ac.uk/~greg/books/gjm.lambook88.pdf) However, a significant disadvantage arises with non-terminating arguments: even if the function does not require the argument's value, strict evaluation will attempt to compute it, potentially leading to infinite loops or errors in cases like conditionals with diverging false branches. This can cause programs that would terminate under delayed evaluation to diverge entirely. Strict evaluation is often realized through binding strategies like call by value, which enforce this eager semantics.
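A hedged Python sketch of this hazard: since a real Ω would loop forever, an exception stands in for divergence, and the illustrative `const_42` plays the role of a function that ignores its argument:

```python
def omega():
    # Stand-in for the diverging term Omega: raising instead of looping forever
    raise RuntimeError("divergence")

def const_42(x):
    # Ignores its argument entirely
    return 42

# Strict semantics: the argument is evaluated before the body runs,
# so the "divergence" surfaces even though x is never used.
try:
    const_42(omega())
    outcome = "returned"
except RuntimeError:
    outcome = "diverged"

assert outcome == "diverged"
```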

### Non-strict evaluation

Non-strict evaluation, also referred to as normal-order evaluation, is a reduction strategy in [lambda calculus](/page/Lambda_calculus) and [functional programming](/page/Functional_programming) where the leftmost outermost beta-redex (reducible expression) is reduced first, delaying the evaluation of arguments until they are actually needed.[](https://www.cs.cornell.edu/courses/cs6110/2018sp/lectures/lec04.pdf) This approach contrasts with strict strategies by prioritizing the overall term structure over immediate argument computation, ensuring that if a normal form exists, the reduction sequence will reach it due to the standardization theorem.[](https://www.cs.yale.edu/flint/trifonov/cs430/notes/17.pdf) In practice, arguments are represented as thunks—suspended computations that encapsulate unevaluated expressions—and are only forced when their values are demanded during the execution of the function body.[](https://www.brics.dk/RS/97/7/BRICS-RS-97-7.pdf)

The mechanics of non-strict evaluation focus on achieving head-normal form, where the term is reduced until the outermost [lambda](/page/Lambda) abstraction is exposed, with inner expressions evaluated [on demand](/page/On-demand) as they contribute to the result. This [on-demand](/page/On-demand) evaluation occurs lazily, meaning subexpressions not required for the final output are never computed, which is particularly useful in functional languages for constructing and manipulating infinite data structures like streams or lists without immediate termination.[](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/history.pdf) For instance, in [Haskell](/page/Haskell), this strategy enables concise definitions of algorithms that process only the necessary portions of potentially unbounded data.[](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/history.pdf)

One key advantage of non-strict [evaluation](/page/Evaluation) is its ability to avoid superfluous computations, thereby improving [efficiency](/page/Efficiency) in scenarios where arguments may not influence the outcome, such as conditional expressions or higher-order functions. It is foundational in purely functional languages, facilitating modular and composable code without the need for explicit control over evaluation timing.[](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/history.pdf) However, it introduces overhead from thunk creation and management, which can increase memory usage through accumulated suspensions and complicate performance prediction.[](https://www.brics.dk/RS/97/7/BRICS-RS-97-7.pdf) Additionally, if the language permits side effects, delayed evaluation can lead to non-intuitive ordering, making [debugging](/page/Debugging) more challenging.[](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/history.pdf)

An illustrative example highlights the terminating behavior of non-strict evaluation: consider the lambda term $(\lambda x.\ 42)\ \Omega$, where $\Omega = (\lambda x.\ x\ x)(\lambda x.\ x\ x)$ is a diverging expression that loops indefinitely under reduction. In non-strict evaluation, the outer redex reduces first to 42, bypassing the evaluation of $\Omega$ entirely, whereas a strict strategy would diverge by attempting to evaluate the argument beforehand.[](https://www.cs.tufts.edu/comp/150AVS/03-lambda.pdf) This demonstrates how non-strict evaluation can yield results in cases where strict evaluation fails. Non-strict evaluation is often implemented via binding strategies like call-by-name, which provide the necessary delay mechanism.[](https://www.brics.dk/RS/97/7/BRICS-RS-97-7.pdf)
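Python's generators give a practical taste of this on-demand behavior; the following sketch builds an "infinite" stream and forces only a prefix, leaving the rest unevaluated:

```python
import itertools

# An "infinite list" of naturals; elements exist only when demanded.
naturals = itertools.count(0)

# Map over it lazily; nothing has been computed yet.
squares = (n * n for n in naturals)

# Force only the first five elements; the tail is never evaluated.
first_five = list(itertools.islice(squares, 5))
assert first_five == [0, 1, 4, 9, 16]
```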

### Comparison of applicative and normal order

Applicative-order evaluation, also known as strict evaluation, evaluates the arguments to a function before applying the function itself, focusing on the innermost redex (reducible expression) first.[](https://web.mit.edu/6.001/6.037/sicp.pdf) In contrast, [normal-order evaluation](/page/Normal_order), a non-strict approach, applies the function first and evaluates arguments only when they are actually needed, prioritizing the outermost (leftmost) redex.[](https://web.mit.edu/6.001/6.037/sicp.pdf) This difference in timing—pre-application for applicative order versus post-application for normal order—fundamentally affects how expressions are reduced in [lambda calculus](/page/Lambda_calculus) and [functional programming](/page/Functional_programming) languages.[](https://www.cs.cornell.edu/courses/cs6110/2016sp/lectures/lecture04.pdf)

Regarding termination, applicative order can fail to terminate if any argument diverges, as it evaluates all arguments regardless of whether they are used; for instance, applying a function to a non-terminating expression like `(define (p) (p))` will loop infinitely before the function body executes.[](https://web.mit.edu/6.001/6.037/sicp.pdf) [Normal order](/page/Normal_order), however, can achieve termination in such cases by deferring evaluation of unused arguments, ensuring convergence to a normal form if one exists.[](https://www.cs.cornell.edu/courses/cs6110/2016sp/lectures/lecture04.pdf) This property makes [normal order](/page/Normal_order) particularly advantageous for expressions involving conditionals where branches may not be taken.[](https://files.core.ac.uk/download/pdf/354675334.pdf)

Efficiency trade-offs arise from these strategies: applicative order avoids the overhead of creating thunks (delayed computations) and prevents redundant evaluations of the same argument, but it wastes effort on unused computations.[](https://people.csail.mit.edu/jastr/6001/fall07/r26.pdf) [Normal order](/page/Normal_order) saves work by skipping unnecessary evaluations but may recompute arguments multiple times without memoization, leading to higher computational cost in some scenarios; for example, computing `(gcd 206 40)` requires 18 remainder operations under [normal order](/page/Normal_order) but only 4 under applicative order.[](https://web.mit.edu/6.001/6.037/sicp.pdf)
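The gcd operation counts can be reproduced in Python by instrumenting a `rem` helper and simulating normal order with call-by-name thunks (an illustrative sketch of the two reduction disciplines, not how either is actually implemented):

```python
rem_count = 0

def rem(a, b):
    global rem_count
    rem_count += 1
    return a % b

# Applicative order: arguments are values; each remainder is computed once.
def gcd_app(a, b):
    return a if b == 0 else gcd_app(b, rem(a, b))

# Normal order (call by name): arguments are thunks, re-evaluated on each use.
def gcd_norm(a, b):
    if b() == 0:
        return a()
    return gcd_norm(b, lambda: rem(a(), b()))

assert gcd_app(206, 40) == 2
app_ops = rem_count          # 4 remainder operations

rem_count = 0
assert gcd_norm(lambda: 206, lambda: 40) == 2
norm_ops = rem_count         # 18 remainder operations, as in SICP exercise 1.20

assert (app_ops, norm_ops) == (4, 18)
```

The blow-up comes from each unmemoized thunk re-deriving its whole chain of remainders; call by need would collapse the count back toward the applicative figure.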

Side effects further highlight the contrasts, as applicative order executes all argument side effects upfront, potentially performing more operations than needed, while [normal order](/page/Normal_order) defers them, which can alter the order or multiplicity of execution unpredictably.[](https://web.mit.edu/6.001/6.037/sicp.pdf) In languages with mutable state, this deferral in [normal order](/page/Normal_order) complicates reasoning about program behavior.[](https://files.core.ac.uk/download/pdf/354675334.pdf)

To illustrate, consider a conditional [function](/page/Function) `test` defined as `(define (test x y) (if (= x 0) 0 y))` applied to `(test 0 (display "side effect"))`, where the second argument produces a [side effect](/page/Side_effect) (printing). Under applicative order, the [side effect](/page/Side_effect) executes immediately before the conditional, printing regardless of the branch. Under [normal order](/page/Normal_order), the [side effect](/page/Side_effect) is deferred and skipped since the true branch is taken, avoiding the print.

**Applicative-order trace:**
1. Evaluate arguments: `x` to 0, `y` to `(display "side effect")` — prints "side effect".
2. Apply `if (= 0 0) 0 y` → returns 0.

**Normal-order trace:**
1. Substitute into body: `if (= 0 0) 0 (display "side effect")`.
2. Evaluate condition: true.
3. Return 0 — no evaluation of `y`, so no print.

This example demonstrates how [normal order](/page/Normal_order) avoids unnecessary side effects, while applicative order incurs them.[](https://web.mit.edu/6.001/6.037/sicp.pdf)

## Strict Binding Strategies

### Call by value

In call by value, each argument expression is fully evaluated to produce a value, which is then copied and bound to the corresponding formal [parameter](/page/Parameter) as a [local variable](/page/Local_variable) before execution of the [procedure](/page/Procedure) or [function](/page/Function) body begins. This mechanism treats the parameter as an independent entity initialized with the argument's value, ensuring that the binding is strict and occurs prior to any side effects within the called [procedure](/page/Procedure).[](https://www.softwarepreservation.org/projects/ALGOL/report/Algol60_report_CACM_1960_June.pdf)[](https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf)

A key property of call by value is the absence of [aliasing](/page/Aliasing) for mutable state, as modifications to the [parameter](/page/Parameter) do not propagate back to the original argument or its associated data. Any side effects arising from the [evaluation](/page/Evaluation) of the argument expression, such as [function](/page/Function) calls or increments, are realized before the [procedure](/page/Procedure) body executes, promoting predictable behavior in strict [evaluation](/page/Evaluation) contexts.[](https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf)[](https://www.cs.tufts.edu/~nr/cs257/archive/john-mccarthy/recursive.pdf)

This strategy is prevalent in imperative languages like C and Java. In C, arguments are passed by value, with the order of evaluation unspecified but a sequence point ensuring completion before the function body starts; Java similarly passes copies of primitive values or object references by value, where reassigning the parameter does not alter the caller's reference.[](https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf)[](https://docs.oracle.com/javase/tutorial/java/javaOO/arguments.html)

The term "call by value" originated in the [ALGOL 60](/page/ALGOL_60) report, where it specified explicit value assignment to formal parameters declared with the `value` keyword. In contrast, John McCarthy's 1960 Lisp design used applicative-order evaluation—evaluating arguments to normal form before [function application](/page/Function_application), effectively passing evaluated expressions or closures in a functional sense—rather than the imperative notion of value copying that became standard in later languages like ALGOL 60 derivatives. This evolution shifted the semantics from reduction strategies in expression-based systems to parameter binding via duplication in block-structured procedural languages.[](https://www.pls-lab.org/Call-by-value)[](https://www.cs.tufts.edu/~nr/cs257/archive/john-mccarthy/recursive.pdf)[](https://www.softwarepreservation.org/projects/ALGOL/report/Algol60_report_CACM_1960_June.pdf)

The following C example illustrates that mutation of the parameter does not affect the original argument:

```c
#include <stdio.h>

void modify(int param) {
    param = 10;  // Modifies local copy only
}

int main() {
    int arg = 5;
    modify(arg);
    printf("arg = %d\n", arg);  // Outputs: arg = 5
    return 0;
}
```

This behavior aligns with call-by-value semantics: the parameter `param` receives a copy of `arg`'s value, and changes remain isolated.

### Call by reference

Call by reference is a parameter-passing mechanism in which the formal parameter is bound to the memory location (L-value) of the actual argument, allowing the subroutine to access and modify the original data directly without copying. This approach is typically used in strict evaluation contexts, where arguments are fully evaluated prior to the binding. One key advantage of call by reference is its efficiency when passing large data structures, such as arrays or objects, since only the address is transmitted rather than duplicating the entire content. However, this shared access introduces aliasing, where multiple identifiers refer to the same location, potentially enabling side effects that alter the caller's data unexpectedly during execution. Such properties make it suitable for operations requiring in-place modifications but demand careful management to avoid non-local effects.

In languages like Pascal, call by reference is explicitly invoked using the `var` keyword for formal parameters, ensuring the address of the argument is passed and changes propagate back to the caller. Similarly, C++ supports this through reference types, declared with the `&` symbol, which act as aliases to the original variables and differ from pointers by being non-nullable and non-reseatable, promoting safer usage without explicit dereferencing.

Despite these benefits, call by reference can lead to issues such as unintended modifications via aliases, which complicate program reasoning and debugging, as changes in one part of the program may unexpectedly affect distant variables. Scoping problems may also arise if the lifetime of the referenced data ends before the subroutine completes, potentially causing dangling references or undefined behavior. For illustration, consider this C++ example where a function increments an integer passed by reference:
```cpp
#include <iostream>

void increment(int& x) {
    x += 1;
}

int main() {
    int value = 5;
    increment(value);
    std::cout << value << std::endl;  // Outputs: 6
    return 0;
}
```

Here, the call modifies the original `value` in `main` because `x` aliases its address. A similar effect occurs in Pascal with `var`:
```pascal
program Example;
var
    value: integer;

procedure Increment(var x: integer);
begin
    x := x + 1;
end;

begin
    value := 5;
    Increment(value);
    writeln(value);  { Outputs: 6 }
end.
```

This demonstrates how mutations within the procedure directly alter the caller's variable.

### Call by copy-restore

Call by copy-restore, also known as copy-in copy-out or value-result, is a strict parameter-passing mechanism in which the value of an argument is copied into a local parameter upon entry to the subroutine, and any modifications to that parameter are then copied back to the original argument upon exit. This copying occurs as part of strict evaluation, ensuring arguments are fully evaluated before the subroutine executes. The mechanism simulates pass-by-result semantics, allowing subroutines to update caller variables without direct sharing during execution, which can be useful in languages lacking native reference passing to mimic modifiable outputs. It has been used in Ada for `in out` parameters, though early implementations primarily employed call-by-reference; value-result became more prominent in later standards and other languages.

A key advantage of call by copy-restore is that it prevents aliasing issues within the subroutine, as the parameter operates on an independent copy, avoiding unintended side effects from simultaneous modifications (unlike bidirectional sharing) while still enabling one-way updates to the original argument on return. For example, consider a pseudocode subroutine that updates a specific element in an array:
```
SUBROUTINE UpdateArray(A, index, newValue)
    // Copy-in: Create local copy of A
    LOCAL_ARRAY = COPY(A)

    // Modify the local copy
    LOCAL_ARRAY[index] = newValue

    // Copy-out: Restore changes to original A
    A = COPY(LOCAL_ARRAY)
END SUBROUTINE
```
In this case, calling UpdateArray(myArray, 5, 42) evaluates myArray and copies it locally before modification, then copies the updated local array back to myArray upon completion, ensuring the caller's array reflects the change without exposing the original to aliasing during execution.
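The pseudocode above can be sketched in Python; the `observe` callback is an illustrative device showing that the caller's array is untouched while the subroutine runs, which is exactly where copy-restore differs from call by reference:

```python
def update_copy_restore(arr, index, new_value, observe):
    local = list(arr)        # copy-in: the subroutine works on its own copy
    local[index] = new_value
    observe()                # caller's view of arr is still unchanged here
    arr[:] = local           # copy-out: changes restored to the original

snapshots = []
data = [1, 2, 3]
update_copy_restore(data, 1, 42, lambda: snapshots.append(list(data)))

assert snapshots == [[1, 2, 3]]   # no aliasing visible during the call
assert data == [1, 42, 3]         # update visible only after return
```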

### Call by sharing

Call by sharing is an evaluation strategy where the caller and the called procedure share access to the same argument object by passing a reference to it, without duplicating the object's contents. This approach was introduced in the CLU language to enable efficient passing of complex data structures while maintaining object integrity. In practice, the argument is fully evaluated prior to the call, and a reference to the resulting object, itself treated as a value, is bound to the formal parameter, often via a shallow copy of the reference for efficiency.

This strategy strikes a balance between call by value and call by reference by passing the reference by value, which prevents rebinding the parameter to a different object but allows in-place mutations to propagate back to the caller. It is particularly prevalent in object-oriented languages for handling mutable objects, such as lists or arrays in Python and Ruby, where the object's mutability enables shared modifications without the overhead of deep copying. In Python, for instance, all arguments are passed this way, as the language treats everything as an object and passes references to those objects. By contrast, modern Fortran (since the Fortran 2003 standard) supports call by value via the VALUE attribute for dummy arguments, providing a strict alternative to its default reference passing.

The shared access in call by sharing can introduce side effects, as mutations to the object within the procedure affect the original argument in the caller's scope, potentially leading to unintended aliasing behaviors if programmers assume independent copies. To illustrate, consider the following Python example where a list is passed to a function that appends an element:
```python
def modify_list(lst):
    lst.append(42)  # Modifies the shared object

my_list = [1, 2, 3]
modify_list(my_list)
print(my_list)  # Outputs: [1, 2, 3, 42]
```
Here, the function modifies the original list because both the parameter lst and the argument my_list reference the same object. Similar behavior occurs in Ruby for mutable objects like arrays, where passing an array allows the callee to alter its contents, affecting the caller's view.
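The rebinding-versus-mutation distinction noted above can be sketched in Python (illustrative function names):

```python
def mutate(lst):
    lst.append(4)        # in-place mutation: visible to the caller

def rebind(lst):
    lst = [99]           # rebinding: only the local name changes
    return lst

shared = [1, 2, 3]
mutate(shared)
assert shared == [1, 2, 3, 4]     # mutation propagated through the shared object

rebind(shared)
assert shared == [1, 2, 3, 4]     # caller's binding untouched by reassignment
```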

Call by address

Call by address is a strict parameter-passing mechanism where the explicit memory address of the argument is passed to the function parameter, typically as a pointer value, requiring dereferencing within the function to access or modify the original data. This approach evaluates the argument to obtain its address prior to the call, aligning with strict evaluation semantics. In practice, the function receives a copy of the address (consistent with call-by-value for the pointer itself), but dereferencing allows indirect mutation of the caller's data. This mechanism provides low-level control over memory locations, enabling operations like dynamic allocation via functions such as malloc in C, where pointers facilitate heap management without unnecessary data copying. However, it is inherently error-prone, as improper dereferencing can lead to segmentation faults, dangling pointers, or buffer overflows, demanding careful programmer oversight. Languages like C implement this through explicit pointer declarations (e.g., int *p), drawing from assembly language influences where direct address manipulation is routine for performance-critical code. PL/I also supports call by address as a default linkage convention, passing parameters by reference to their storage locations unless specified otherwise. The primary advantage of call by address lies in its efficiency for systems programming, where it minimizes overhead for large or complex data structures by avoiding full copies and supporting in-place modifications. For instance, pointer arithmetic allows traversing and altering arrays efficiently, which is essential for low-level tasks like operating system development or embedded systems. A representative example is a swap function in C that exchanges two integers without returning values:
```c
void swap(int *a, int *b) {
    int temp = *a;
    *a = *b;
    *b = temp;
}
```
This is invoked as swap(&x, &y);, where & yields the addresses of x and y, enabling the function to dereference and update them directly. Such patterns underscore its utility in resource-constrained environments, though they require robust error handling to mitigate risks.

Call by unification

Call by unification is a strict evaluation strategy employed in logic programming languages, where the arguments of a predicate are matched and bound to variables through a process of unification before the body of the predicate is executed. This approach ensures that the terms are made identical via substitutions, enabling declarative specification of computations based on logical inference rather than imperative assignment. The mechanics of call by unification involve applying an algorithm to find a most general unifier (mgu), which is a substitution that renders two terms syntactically identical while preserving their structure. Unification occurs eagerly upon predicate invocation: the call's arguments are unified with the head of a clause, and if successful, the resulting substitution is extended to the clause's body for further evaluation. This substitution binds variables to terms, such as constants, variables, or complex structures, and is composed incrementally during resolution. The process supports the resolution principle, where unification facilitates matching between goals and facts or rules. A key property of call by unification is its integration with backtracking, allowing the system to retract bindings and explore alternative clauses upon failure, which is essential for search-based problem solving in nondeterministic domains. Variables are bound to ground terms or other variables, enabling flexible pattern matching without explicit control flow for binding propagation. This strategy is central to Prolog, where it forms the core mechanism for SLD-resolution (Selective Linear Definite clause resolution), driving the language's execution model since its inception in the early 1970s. One advantage of call by unification is its support for declarative pattern matching, allowing programmers to specify what relations hold rather than how to compute them, which simplifies expressing logical queries and rules.
For example, in Prolog, the goal [a|X] = [a,b] succeeds by unifying the two list terms: the heads a match, and the tail variable X is bound to the remaining list, yielding the substitution {X ↦ [b]}.
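The mgu computation described above can be sketched as a small syntactic unifier in Python. The representation is an assumption for illustration: variables are strings starting with an uppercase letter, compound terms are tuples, and Prolog lists are encoded as nested ('cons', head, tail) terms; the occurs check is omitted for brevity.

```python
def is_var(t):
    # Convention: variables are strings beginning with an uppercase letter.
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until a non-variable or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a most general unifier extending subst, or None on failure."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}   # Bind variable a to term b
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):   # Unify compound terms argument by argument
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # Clash between distinct constants or functors

# Unifying [a|X] with [a,b], lists encoded as nested ('cons', head, tail):
s = unify(('cons', 'a', 'X'), ('cons', 'a', ('cons', 'b', 'nil')))
print(s)  # {'X': ('cons', 'b', 'nil')}, i.e. X = [b]
```

Composed with backtracking over alternative clauses, this is essentially the matching step performed at every predicate call in SLD-resolution.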

Non-strict Binding Strategies

Call by name

Call by name is a non-strict evaluation strategy in which the argument expression passed to a function is not evaluated at the call site but is instead textually substituted for the corresponding formal parameter within the function body, with the substituted expression being evaluated anew each time it is referenced during execution. This substitution occurs dynamically at runtime, treating the argument as unevaluated text that is inserted wherever the parameter appears, often enclosed in parentheses to preserve syntactic validity. As a result, the argument's side effects, if any, may occur multiple times if the parameter is used more than once in the body. The core property of call by name lies in its pure textual substitution semantics, which avoids the creation or storage of thunks—delayed computation wrappers—relying instead on direct re-substitution and re-evaluation without intermediate caching. This approach aligns with normal-order reduction in lambda calculus, where outermost redexes are reduced first, but it implements this through syntactic replacement rather than graph-based optimization. Call by name was formally introduced in the Revised Report on the Algorithmic Language ALGOL 60, where it is described as replacing "any formal parameter not quoted in the value list... throughout the procedure body, by the corresponding actual parameter." It influenced the design of early Lisp implementations, where McCarthy incorporated a flexible variant resembling ALGOL's call by name through explicit evaluation control via the eval function, enabling delayed argument handling in symbolic expressions. A significant disadvantage of call by name is its potential for inefficient repeated evaluation, which can lead to exponential time complexity in cases where an expensive argument is referenced multiple times within the function body. For instance, consider an ALGOL 60-style procedure that computes both the square and double of its parameter:
```
procedure square_plus_double(p);
  begin
    integer result;
    result := p * p + 2 * p
  end;
```
If invoked as square_plus_double(fib(30)), where fib(n) is a naive recursive Fibonacci function requiring exponential time itself, the argument fib(30) would be fully recomputed three times—twice for p * p and once for 2 * p—resulting in roughly three times the work of a single evaluation; when such calls nest, the blow-up compounds multiplicatively and can become exponential. This inefficiency arises because each substitution triggers independent evaluation without memoization, making call by name unsuitable for performance-critical applications involving costly computations.
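Call by name can be simulated in an eager language by wrapping each argument in a zero-argument function that is re-invoked at every use of the parameter. The following sketch (names are illustrative) counts how often the argument expression runs:

```python
def square_plus_double(p):
    # p is a thunk: calling it re-evaluates the argument expression,
    # mimicking ALGOL 60 textual substitution.
    return p() * p() + 2 * p()

count = 0
def expensive():
    global count
    count += 1          # Track how many times the argument is evaluated
    return 7

result = square_plus_double(lambda: expensive())
print(result)  # 63  (7*7 + 2*7)
print(count)   # 3   (once per reference to the parameter)
```

The count of 3 is exactly the repeated-evaluation cost the paragraph above describes; call by need, discussed next, reduces it to 1.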

Call by need

Call by need is a non-strict evaluation strategy that delays the evaluation of function arguments until their values are required, while ensuring that each argument is computed at most once through memoization. In this approach, arguments are represented as thunks—suspended computations—that are evaluated on first demand and then cached, with the result shared across all subsequent uses to avoid redundant work. This mechanism combines the laziness of call by name with improved efficiency by preventing repeated evaluations of the same expression. The strategy operates by substituting arguments with let-bindings in the call-by-need lambda calculus, where evaluation occurs only in contexts that necessitate the value, such as primitive operations or case expressions. Once evaluated, the thunk is updated in place to hold the value, enabling sharing via graph structures in implementations. Call by need exhibits lazy properties, meaning unevaluated arguments do not cause divergence unless demanded, while its caching provides efficiency gains over pure call by name. In practice, it is often realized through graph reduction techniques, where expressions form a directed acyclic graph (DAG) that is rewritten incrementally, preserving sharing and minimizing duplication during reduction. Haskell implements call by need as its default evaluation strategy, where lazy evaluation is achieved through this memoized, demand-driven approach. One key advantage is the ability to handle infinite data structures without non-termination, as only the portions needed for the computation are evaluated. For example, consider defining an infinite list of natural numbers in Haskell: naturals = 0 : map (+1) naturals. Accessing the first few elements, such as take 5 naturals, forces evaluation of only the initial segment [0,1,2,3,4], while the tail remains a thunk until further demand, preventing infinite computation.

Call by macro expansion

Call by macro expansion is a non-strict evaluation strategy in which argument expressions are textually substituted into the body of a macro or function definition at compile time, with evaluation deferred until the expanded code is executed as part of the program. This static process resembles textual replacement, enabling code transformation without runtime overhead or dynamic thunk creation. As such, it achieves delayed evaluation through compile-time binding rather than runtime mechanisms. In Lisp, macros facilitate this strategy via definitions like defmacro, where the macro receives unevaluated argument forms and generates a replacement expression that is inserted during compilation. The substitution occurs structurally on s-expressions, allowing arbitrary code generation while preserving the language's homoiconic nature. Lisp macros originated in early dialects like Lisp 1.5 and evolved significantly in MacLisp and Common Lisp, becoming essential for extending the language's syntax and creating domain-specific abstractions. Lisp macros are generally non-hygienic, meaning they can suffer from variable capture issues where identifiers in the macro body might shadow or be shadowed by those in the calling context; programmers must handle this manually, often by generating fresh symbols with functions like gensym. For example, a macro defined as (defmacro when (condition &body body) `(if ,condition (progn ,@body))) expands a call like (when (> x 0) (print x)) into (if (> x 0) (progn (print x))) at compile time, inlining the conditional logic and avoiding any function call overhead, but care must be taken to avoid variable capture in more complex cases.
In contrast, languages like Scheme implement hygienic macros, which automatically address variable capture by renaming identifiers in the macro body to fresh names during expansion and tracking syntactic environments to ensure local bindings do not accidentally shadow or be shadowed by those in the calling environment; this is achieved through algorithms that perform linear-time rewriting. The C preprocessor implements call by macro expansion through function-like macros, processed during translation phase 4, where the preprocessor scans for invocations, collects arguments as token sequences within matching parentheses, fully expands those arguments for further macros, and substitutes them textually into the replacement list without evaluating them. This purely lexical replacement treats parameters as placeholders, inserting the argument tokens verbatim while preserving source structure like whitespace and comments, followed by rescanning the result for additional expansions. An example is #define SQUARE(x) ((x) * (x)), which expands SQUARE(a + b) to ((a + b) * (a + b)), potentially recomputing a + b twice in the compiled code but resolving all substitutions statically. This approach, standardized in C since its early specifications, promotes efficiency by enabling inline code generation and eliminating indirect calls for simple operations. The primary advantages of call by macro expansion include enhanced code generation efficiency, as the compiler can optimize the fully expanded form, and reduced runtime costs from avoiding argument passing and evaluation delays. In both Lisp and C contexts, it supports powerful metaprogramming while remaining confined to compile-time processing.

Call by future

Call by future is a non-strict evaluation strategy that integrates concurrency into argument passing by delegating the evaluation of each function argument to a separate future, or promise, which computes the value asynchronously in parallel with the function body. Upon invocation, the function receives placeholders for these futures rather than waiting for the arguments to evaluate, allowing the caller to proceed immediately without blocking. When the function body requires an argument's value, it accesses the future, which blocks the accessing thread only if the computation is incomplete; otherwise, the result is returned transparently. This mechanism, first formalized in Multilisp, ensures that futures replace themselves with their computed values upon completion, enabling seamless integration into sequential code while exposing parallelism. The strategy's key properties include its support for concurrency through speculative, eager evaluation of arguments in parallel, combined with demand-driven access that avoids waiting on values that are never used. Unlike purely sequential call by need, call by future overlaps the evaluation of independent arguments, potentially reducing overall execution time in multiprocessor environments, though it introduces non-determinism due to scheduling and potential blocking. It maintains referential transparency in pure functional settings by treating futures as first-class values, but requires careful handling of side effects to avoid races. This approach has been implemented in languages such as Multilisp, where the future construct spawns dedicated processes for each argument, and Alice ML, which uses futures for data-driven synchronization in a typed concurrent functional setting. A primary advantage of call by future is its ability to automatically overlap independent computations, maximizing processor utilization without explicit programmer intervention for parallelism.
For instance, in Multilisp, implementations demonstrated near-linear speedup for tasks like parallel Quicksort on multiprocessors, where futures for recursive sublist sorts run concurrently across multiple levels of recursion. In a parallel map operation, such as applying a function to each element of a list, call by future binds each application to a separate future, allowing all elements to compute simultaneously; the mapping thread then touches these futures as needed, blocking only for unfinished ones while proceeding with completed results. This is particularly effective for data-parallel workloads, as seen in Alice ML's thread-spawning primitives that create futures for concurrent list processing. As a non-strict strategy, it ensures arguments do not block the caller, deferring synchronization until their values are demanded within the function body.
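The parallel-map pattern can be approximated in Python with the standard concurrent.futures module, which provides explicit futures rather than the transparent ones of Multilisp; the worker function here is an illustrative stand-in for an expensive computation:

```python
from concurrent.futures import ThreadPoolExecutor

def slow_square(x):
    # Stand-in for an expensive, independent computation.
    return x * x

with ThreadPoolExecutor() as pool:
    # Each argument is bound to a future that evaluates concurrently;
    # submit() returns immediately without blocking the caller.
    futures = [pool.submit(slow_square, x) for x in [1, 2, 3, 4]]
    # Touching a future blocks only if its computation is unfinished.
    results = [f.result() for f in futures]

print(results)  # [1, 4, 9, 16]
```

In Multilisp the touch is implicit—any strict use of the value forces the future—whereas here the `.result()` call makes the synchronization point explicit.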

Optimistic evaluation

Optimistic evaluation is an adaptive strategy designed for non-strict languages, where expressions are speculatively evaluated eagerly at runtime, assuming their results will be needed, but with a mechanism to abort and suspend if the computation proves too expensive or unnecessary. This approach complements traditional lazy evaluation (such as call by need) by dynamically switching between lazy and eager modes for individual thunks—unevaluated expressions—based on observations from an online profiler. The profiler monitors execution and adjusts speculation decisions, typically initializing with an eager bias and refining it to avoid wasteful computations. In terms of mechanics, optimistic evaluation operates on let expressions, such as let x = <rhs> in <body>, by introducing a runtime switch that determines whether to evaluate the right-hand side (<rhs>) speculatively using call-by-value semantics or to defer it lazily as a thunk. If speculation succeeds—meaning the value is used and the cost is low—the computation commits seamlessly, avoiding the overhead of thunk creation and blackholing associated with pure laziness. Upon detecting excessive cost (e.g., via time or space thresholds), the system aborts the speculative work by capturing the computation state into a suspension, resuming the lazy strategy only if the value is later demanded. This ensures semantic equivalence to standard non-strict evaluation while enabling eager optimizations. For data structures, a variant called chunky evaluation speculatively computes fixed-size chunks (e.g., 10 elements) of lists, balancing speculation with controlled laziness to prevent unbounded eager expansion. The properties of optimistic evaluation center on its speculative nature, where commit occurs implicitly on successful use and abort handles failures gracefully, maintaining the non-strict guarantees of the language without introducing strictness errors. It introduces minimal overhead for speculation decisions, typically via lightweight runtime checks, and preserves referential transparency.
This strategy is particularly effective in programs where most thunks are eventually forced or are inexpensive to evaluate, yielding high throughput by reducing thunk allocation and forcing costs—key bottlenecks in lazy evaluation. Empirical results from implementations show average speedups of 5-25%, with up to 50% improvements in space-leak-prone programs, and no program slowing by more than 15%. Optimistic evaluation was implemented experimentally in an early version of the Glasgow Haskell Compiler (GHC) in 2003, requiring modifications to the compiler and runtime system to support speculation switches and abortion mechanisms. For example, consider a program that repeatedly sums values from a large lazily generated list; under pure lazy evaluation, thunks accumulate, risking space leaks, but optimistic evaluation speculatively computes list elements as needed, running in near-constant space by committing only useful results and aborting unused speculations. This makes it suitable for performance-critical non-strict code, adapting to workload characteristics without relying on static analysis alone.
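The runtime switch between eager and lazy binding can be caricatured in Python. This is a toy sketch, not GHC's mechanism: real optimistic evaluation profiles and aborts speculation mid-computation, whereas here an assumed up-front cost estimate decides between call-by-value and an ordinary memoized thunk.

```python
def bind(rhs, cost_estimate, budget=100):
    """Speculatively evaluate rhs now if it looks cheap; otherwise defer it."""
    if cost_estimate <= budget:
        value = rhs()                    # Speculate with call-by-value
        return lambda: value             # Commit: later demands reuse the value
    state = {}
    def thunk():                         # Fall back to a memoized thunk
        if 'v' not in state:
            state['v'] = rhs()
        return state['v']
    return thunk

cheap = bind(lambda: 1 + 1, cost_estimate=1)                   # evaluated eagerly
costly = bind(lambda: sum(range(10**6)), cost_estimate=10**6)  # deferred
print(cheap())   # 2
print(costly())  # 499999500000 (computed only when demanded)
```

In the real system the decision is revised online per let-site by the profiler rather than fixed by a static estimate as it is here.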

Modern Applications and Variations

Evaluation in functional languages

In functional languages, evaluation strategies emphasize non-strict semantics to support higher-order functions, modularity, and infinite data structures, with lazy evaluation—specifically call by need—serving as the default in influential languages like Haskell. This approach delays computation until a value is required, enabling the construction of infinite data structures and avoiding unnecessary work, while sharing computed results across multiple uses to optimize performance. However, functional languages also provide mechanisms for strict evaluation to mitigate potential inefficiencies, such as Haskell's seq function, which forces an argument to weak head normal form before proceeding, and bang patterns (!), which enforce strictness in pattern bindings. Implementations of these strategies in functional languages often rely on graph reduction techniques, where expressions are represented as directed graphs that are incrementally reduced to normal form. In the Glasgow Haskell Compiler (GHC), the dominant Haskell implementation, code is compiled to the Spineless Tagless G-machine (STG), an abstract machine that performs lazy graph reduction, facilitating efficient handling of higher-order functions through closure representation and updateable references. This model supports lazy evaluation by constructing thunks—unevaluated expressions—that are shared and updated in place upon demand, reducing redundant computations in polymorphic and higher-order contexts. Key challenges in these implementations include space leaks arising from unevaluated thunks that accumulate in memory without being garbage-collected, potentially leading to excessive heap usage in long-running programs. To address this, strictness analysis determines whether a function requires its arguments to produce a defined result, allowing compilers to optimize by evaluating strict arguments eagerly and enabling parallelization or specialization.
Seminal work by Mycroft formalized strictness analysis via abstract interpretation, providing a foundation for inferring where laziness can safely be traded for efficiency. A representative example is the handling of infinite streams: in lazy languages like Haskell, streams can be defined recursively without termination issues, as in naturals = 0 : map (+1) naturals, where only demanded elements are computed, supporting applications like symbolic computation or event streams. In contrast, strict functional languages like Standard ML require explicit forcing for laziness via annotations or library-provided thunks, evaluating arguments eagerly by default to ensure predictable resource use but necessitating manual intervention for infinite structures. The evolution of evaluation strategies in functional languages traces from David Turner's languages of the 1970s and 1980s, notably SASL and Miranda, which popularized lazy evaluation through non-strict semantics and polymorphic types, influencing subsequent designs by enabling concise expressions for complex algorithms. This laziness became central to Haskell's 1990 specification, prioritizing expressiveness over strictness. Modern languages like F#, however, adopt strict-by-default (eager) evaluation to align with imperative ecosystems, using the explicit lazy keyword for deferred computation while maintaining functional purity.
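The infinite-stream idiom carries over to strict languages through explicit laziness; in Python, a generator plays the role of the deferred tail. This is a rough analogue only—it reproduces demand-driven production but not Haskell's sharing of the already-computed prefix:

```python
from itertools import islice

def naturals():
    # Each element is produced only when the consumer demands it,
    # mimicking the Haskell definition naturals = 0 : map (+1) naturals.
    n = 0
    while True:
        yield n
        n += 1

print(list(islice(naturals(), 5)))  # [0, 1, 2, 3, 4]
```

As with take 5 naturals, only the demanded prefix is ever computed; the loop body past the fifth yield never runs.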

Parallel and concurrent evaluation

In parallel and concurrent computing environments, evaluation strategies are adapted to leverage multiple processors or threads while managing shared resources and avoiding unnecessary computations. Futures and promises serve as key mechanisms for parallelism, where a future acts as a placeholder for a value computed asynchronously, allowing the main thread to continue without blocking until the result is needed. Promises complement this by enabling the completion of a future from another thread, facilitating coordination in concurrent programs. Non-strict evaluation strategies, such as lazy evaluation, are particularly useful in actor-based systems to prevent blocking, as messages between actors can carry unevaluated expressions that are computed only when required by the recipient. Strict evaluation in concurrent settings introduces challenges like race conditions, where multiple threads simultaneously access and modify shared state during immediate argument evaluation, leading to nondeterministic outcomes and potential data corruption. In contrast, lazy strategies enable speculative parallelism by delaying evaluation and allowing multiple threads to explore alternative computation paths concurrently, with results discarded if not needed, thus mitigating blocking and improving resource utilization in functional languages. Languages like Erlang employ eager evaluation within isolated processes, where each process maintains its own heap and garbage collector, ensuring fault tolerance and preventing one process's failure from interfering with others through message passing rather than shared memory. Scala integrates futures for parallel execution with lazy vals, which defer initialization until first access and are thread-safe due to synchronized initialization, combining non-strict semantics with concurrent computation. A prominent technique for parallel lazy evaluation is work-stealing, implemented in the Glasgow Haskell Compiler (GHC) runtime, where idle threads steal unevaluated computations (sparks) from other threads' queues to balance load and maximize parallelism without explicit programmer intervention.
For example, a parallel fold over a collection can use futures to compute partial reductions concurrently: in Scala, the collection is split into segments, each folded asynchronously via Future, then combined with a final reduction on the partial results, achieving speedup over sequential call-by-need, where the entire structure is built lazily before reduction. This contrasts with pure call-by-need, which avoids redundant computations but processes elements sequentially unless annotated for parallelism.
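The segmented parallel fold can be sketched in Python with standard-library futures; the segment size and the use of addition are arbitrary choices for illustration, and the operation must be associative for the split-then-combine scheme to give the same result as a sequential fold:

```python
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
from operator import add

def parallel_fold(xs, op, chunk=4):
    # Split into segments, fold each asynchronously, then combine.
    # op must be associative for the segmented combine to be correct.
    segments = [xs[i:i + chunk] for i in range(0, len(xs), chunk)]
    with ThreadPoolExecutor() as pool:
        partials = [pool.submit(reduce, op, seg) for seg in segments]
        return reduce(op, (f.result() for f in partials))

print(parallel_fold(list(range(1, 11)), add))  # 55
```

Each submit corresponds to one asynchronous segment fold; the final reduce over the futures' results is the combining step described above.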

Optimizations and implementations

In strict evaluation strategies such as call by value, inlining replaces a function call with the body of the called function, reducing invocation overhead and enabling further optimizations like constant propagation and dead-code elimination. This technique is particularly effective in higher-order languages, where it can inline polymorphic or generic functions after monomorphization, minimizing indirect calls and improving cache locality. For call by need, updateable thunks cache the computed values of arguments to prevent redundant evaluations, ensuring each expression is reduced at most once while sharing results across multiple uses. Implemented via updateable references in the runtime, this approach avoids the exponential time costs of pure call by name, with Haskell's implementation using black holes to detect and prevent infinite loops during updates. Strictness inference analyzes program structure to determine when lazy expressions can be safely evaluated eagerly, allowing compilers to generate more efficient code by avoiding thunk creation for strict positions. Based on abstract interpretation or type-based analysis, this optimization can eliminate unnecessary thunks in functional programs, as demonstrated in analyses of higher-order strictness for data structures like lists. The Java Virtual Machine (JVM) employs eager evaluation aligned with call by value, leveraging just-in-time (JIT) compilation to apply optimizations such as method inlining, escape analysis, and loop unrolling on hot paths detected at runtime. HotSpot's tiered compilation starts with interpretation and progresses to optimized native code, achieving near-native performance for long-running applications by adapting to observed call frequencies. GHC, the principal Haskell compiler, implements lazy evaluation through graph reduction on the Spineless Tagless G-machine (STG), where expressions form sharing graphs updated in place during reduction to support call by need. This model reduces memory duplication by sharing subexpressions, with the runtime garbage collector managing graph nodes for efficient traversal.
In strict languages, tail call optimization (TCO) transforms tail-recursive calls into loops, preventing stack overflow and enabling constant stack space for deep recursion, though support varies across implementations; some JIT compilers recognize tail recursion and compile it to iterative code. In lazy evaluation, space reclamation occurs through generational garbage collection that promotes evaluated thunks and reclaims unreferenced ones, mitigating space leaks from long thunk chains, as seen in Haskell's parallel GC variants. Hybrid approaches balance strict and lazy evaluation, such as strict-by-need variants in which computations are forced eagerly but results memoized to avoid recomputation, reducing overhead in goal-directed programming while preserving sharing benefits. These trade-offs allow selective strictness in lazy systems via strictness inference, improving speed at the cost of analysis time, or lazy regions in strict systems for infinite data handling. Post-2010 advancements in JIT compilation enable dynamic adaptation of evaluation strategies by profiling runtime behavior and partially evaluating code paths, switching between eager and lazy modes for hot methods to optimize for specific workloads like numerical computations.
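The tail-call-to-loop transformation can be illustrated directly; Python itself performs no TCO, so the loop form below is the transformation written by hand rather than something the interpreter does:

```python
def sum_to_rec(n, acc=0):
    # Tail-recursive form: one stack frame per step,
    # so large n overflows the stack in Python.
    if n == 0:
        return acc
    return sum_to_rec(n - 1, acc + n)

def sum_to_loop(n):
    # The same computation after the TCO transformation:
    # the tail call becomes an update of the loop variables.
    acc = 0
    while n != 0:
        n, acc = n - 1, acc + n
    return acc

print(sum_to_loop(10**6))  # 500000500000, in constant stack space
```

Because the recursive call is the last action in sum_to_rec, replacing it with a reassignment of (n, acc) preserves the result exactly, which is the essence of the transformation TCO performs automatically.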

References

  1. [1]
    Functional Programming - 5. Definition and Evaluation
    Feb 24, 2014 · An evaluation strategy is a set of rules for evaluating expressions. Evaluation strategies are important in expressions, for example (1-x)(1+x)
  2. [2]
    [PDF] Evaluation Strategies - CSE, IIT Delhi
    That is why this strategy is called call by value. ▫ It is used in imperative languages such as Java, C,. Pascal and functional programming languages such. SML ...
  3. [3]
    Evaluation strategies for functional logic programming - ScienceDirect
    This paper discusses the elements that play a relevant role in evaluation strategies for functional logic computations, describes some important classes of ...<|control11|><|separator|>
  4. [4]
    [PDF] Programming Language Pragmatics - WordPress.com
    Scott has met his goal of improving Programming Language Pragmatics by bringing the text up-to-date and making the material more accessible for students ...
  5. [5]
    [PDF] The mechanical evaluation of expressions
    The mechanical evaluation of expressions. By P. J. Landin. This paper is a contribution to the "theory" of the activity of using computers. It shows how.
  6. [6]
    4.2 Variations on a Scheme — Lazy Evaluation
    In a purely applicative-order language, all procedures are strict in each argument. In a purely normal-order language, all compound procedures are non-strict in ...
  7. [7]
    [PDF] The FORTRAN Automatic Coding System for the IBM 704 EDPM
    It describes the system which will be made available during late 1956, and is intended to permit planning and FORTRAN coding in advance of that time. An ...
  8. [8]
    [PDF] Preliminary report: international algebraic language
    On 18 April 1958 the committee appointed a subcommittee to prepare a report giving the technical specifications of a proposed language. A comparison of the ACM ...
  9. [9]
    [PDF] Recursive Functions of Symbolic Expressions and Their ...
    In the course of its development the LISP system went through several stages of simplification and eventually came to be based on a scheme for rep- resenting ...Missing: strategy | Show results with:strategy
  10. [10]
    The semantics of lazy (and industrious) evaluation
    Since the publication of two influential papers on lazy evaluation in 1976 [Henderson and Morris, Friedman and Wise], the idea has gained widespread ...
  11. [11]
    [PDF] Fundamentals of the Icon Programming Language
    Icon variables have no type associated with them. Instead, types are ... Icon uses call-by-value for transmission of argument values to a procedure ...
  12. [12]
    [PDF] Why Functional Programming Matters
    Because lazy evaluation's power depends on the programmer giving up any direct control over the order in which the parts of a program are executed, it would ...
  13. [13]
    [PDF] A History of Haskell: Being Lazy With Class - Microsoft
    Apr 16, 2007 · Technically, Haskell is a language with a non-strict semantics; lazy evaluation is sim- ply one implementation technique for a non-strict ...
  14. [14]
    [PDF] Programming Languages Session 4 - NYU
    Programming Language Pragmatics (3rd Edition). Michael L. Scott, Copyright ... ▫ All parameter-passing by value. ▫ no assignment. ▫ local declarations ...
  15. [15]
    [PDF] Lambda Calculus
    • An example of a non-terminating expression. • Reduces to itself in ... • Call-by-value may fail to terminate even if call-by-name terminates. • Does ...
  16. [16]
    [PDF] Subroutines & Parameter Passing
    Oct 20, 2010 · ▫ Call by value and call by reference parameter passing. ▫ Call by value is similar to C. ▫ Call by reference: use reference (&) operator.Missing: comparison | Show results with:comparison
  17. [17]
    Programming Languages - Lecture 17 - Williams College CS334
    Parameter passing. Like most pure object-oriented languages, Java only supports by "call-by-sharing". At the implementation level, this is the same as call ...
  18. [18]
    [PDF] An Introduction to Functional Programming Through Lambda Calculus
    Thus, it combines the advantages of normal order and applicative order evaluation. With lazy evaluation, an expression is evaluated when its value is needed ...
  19. [19]
    [PDF] CS 6110 S16 Lecture 4 Reduction Strategies and Equivalence
    Feb 3, 2016 · Two common reduction strategies for the λ-calculus are normal order and applicative order. ... This puts the evaluator into an infinite loop.
  20. [20]
    [PDF] CS 6110 S18 Lecture 4 Reduction Strategies and Equivalence 1 ...
    Two common reduction strategies for the λ-calculus are normal order and applicative order. Under the normal order reduction strategy, the leftmost-outermost ...
  21. [21]
    [PDF] Church numerals Normal-order evaluation
    In the normal-order reduction sequence of a term at each step the contracted redex is the leftmost outermost one. (λy.yy ((λx.xx) ∆))K 7→ K K.Missing: rosette | Show results with:rosette
  22. [22]
    [PDF] Thunks and the λ-Calculus - BRICS
    Optimizations associated with lazy evaluation (e.g., overwriting a forced expression with its resulting value) are encapsulated in the thunk. They give several ...
  23. [23]
    [PDF] Lambda Calculus COMP 150-AVS Fall 2018
    normal-order (with slightly different meanings). □ Eager: Given (λx.e1) ... let z = cond true 42 (loop ()). Haskell cond p x y = if p then x else y loop ...
  24. [24]
    [PDF] Structure and Interpretation of Computer Programs - MIT
    This is the second edition of the book, in Unofficial Texinfo Format. You are probably reading it in an Info hypertext browser, such as the Info mode of Emacs.
  25. [25]
    Summary of Strict Evaluation (Applicative Order) from Song, Nayeong (2020)
  26. [26]
    [PDF] Applicative vs Normal Order evaluation - People | MIT CSAIL
    In applicative order execution (like regular Scheme), all procedure arguments are evaluated before applying the procedure. In normal order execution, ...
  27. [27]
    [PDF] Report on the Algorithmic Language ALGOL 60
    Value assignment (call by value). All formal parameters quoted in the value part of the procedure declaration heading are assigned the values. (cf. section ...
  28. [28]
    Summary of function argument passing and call by value in the C99 Draft (WG14/N1256).
  29. [29]
    Passing Information to a Method or a Constructor (The Java ...
    You can use a construct called varargs to pass an arbitrary number of values to a method. You use varargs when you don't know how many of a particular type of ...
  30. [30]
    Call-by-value | PLS Lab
    Call-by-value (CBV) is a reduction strategy where identifiers always refer to values, and only values can be substituted, not arbitrary terms.
  31. [31]
    Compilers: Vocabulary - UT Computer Science
    call by pointer: parameter transmission in which a copy of the address of the actual parameter is transmitted to the subprogram. Although the pointer itself ...
  32. [32]
    [PDF] Fundamental Concepts in Programming Languages
    FORTRAN calls all its parameters by reference and has a special rule for providing R-value expressions such as a + b with a temporary L-value. ALGOL 60, on the ...
  33. [33]
    Call by Reference - an overview | ScienceDirect Topics
    The evaluation order matters for parameters that might have side effects. For example, a program that used push and pop to manipulate a stack would produce ...
  34. [34]
    Parameter Passing - cs.wisc.edu
    Compared with call-by-reference, the code in the called method is faster because there is no need for indirection to access formals. Call-by-Reference. More ...
  35. [35]
    Variable parameter - Free Pascal wiki
    Feb 24, 2022 · In FPC, variable parameters are implemented by passing a reference to the variable at the call site (call by reference). For this reason, ...
  36. [36]
    Procedures and Functions - Essential Pascal on marcocantu.com
    Pascal routines allow parameter passing by value and by reference. ... Here is an example of passing a parameter by reference using the var keyword:
  37. [37]
    Reference and Value Semantics, C++ FAQ - Standard C++
    When the object is referenced via a pointer or a reference, a call to a virtual function generally cannot be inlined, since the call must be resolved ...
  38. [38]
    Why are so many languages passed by value?
    Jun 19, 2012 · Call-by-value was for things that were not supposed to be changed (input parameters). Call-by-name was for output parameters. Call-by-name ...
  39. [39]
    6.1. Parameter-Passing Mechanisms — Programming Languages
    In the next section, we will examine call-by-name versus call-by-need in the context of a specific example known as a lazy list. However, before proceeding, ...
  40. [40]
  41. [41]
    [PDF] CLU Reference Manual - CSAIL Publications
    We call the argument passing technique call by sharing, because the argument objects are shared between the caller and the called routine. The technique ...
  42. [42]
    ECMA-262-3 in detail. Chapter 8. Evaluation strategy
    Apr 10, 2010 · By sharing as a special case of by value. Strategy by sharing is used in many languages: Java, ECMAScript, Python, Ruby, Visual Basic, etc.
  43. [43]
    Programming FAQ — Python 3.14.0 documentation
    Remember that arguments are passed by assignment in Python. Since assignment just creates references to objects, there's no alias between an argument name in ...
  44. [44]
    Pass by Value Vs Pass by Reference in C - GeeksforGeeks
    Sep 22, 2025 · Please note that the C language only supports pass by value. We achieve pass by reference effect with the help of pointer feature of C. In C++, ...
  45. [45]
    C Pass Addresses and Pointers to Functions - Programiz
    In C, addresses are passed to functions using pointers, which store addresses. Pointers are used to accept these addresses in the function definition.
  46. [46]
    How data is passed - IBM
    C passes all parameters by value, but PL/I (by default) passes parameters by address. PL/I also supports passing parameters by value except for arrays, ...
  47. [47]
    [PDF] transliterating prolog into scheme
    Prolog's distinguishing features include computation by backtracking and call-by-unification. The latter is based on the unification procedure and logical ...
  48. [48]
    [PDF] A Machine-Oriented Logic Based on the Resolution Principle
    It is called the resolution principle, and it is machine-oriented, rather than human- oriented, in the sense of the preceding remarks. The resolution principle ...
  49. [49]
    [PDF] Logic Programming View Evaluation
    Unification is the process of determining whether two expressions can be unified, i.e. made identical by appropriate substitutions for their variables. Example: ...
  50. [50]
    [PDF] The birth of Prolog - Alain Colmerauer
    Abstract. The programming language, Prolog, was born of a project aimed not at producing a programming language but at processing natural languages; in this.
  51. [51]
    [PDF] Revised report on the algorithm language ALGOL 60
    In the language the primary constituents of the pro- grams describing algorithmic processes are arithmetic,. Boolean, and designational expressions.
  52. [52]
    [PDF] Theoretical Computer Science 1 (1975) 125–159. © North-Holland ...
    This paper examines the relationship between ISWIM and the λ-calculus, using call-by-value and call-by-name, and introduces a new λ-calculus.
  53. [53]
    History of LISP - ACM Digital Library
    Paper: History of LISP facility resembles ALGOL's call by name but is more flexible, because eval is explicitly available. A first-order logic treatment of ...
  54. [54]
    [PDF] Lazy Evaluation & Infinite Data - cs.Princeton
    It takes exponential time, the same way this function does: ... The term “thunk” was introduced in 1960 to describe a way of implementing call-by-name function- ...
  55. [55]
    [PDF] CS 251: Programming Languages Fall 2015 ML ... - Computer Science
    here, variations on this technique are sometimes called call-by-name. We ... Unfortunately, this function takes exponential time to run. We might start ...
  56. [56]
    [PDF] Lazy Evaluation for the Lazy: Automatically Transforming Call-by ...
    ALGOL60, for instance, was the first language to introduce call-by-name semantics [Backus et al. 1960]. However, call-by-name led to confusing behavior, as ...
  57. [57]
    [PDF] Call-by-Name, Call-by-Need and Graph Reduction - HAL Inria
    May 24, 2006 · Here, we complete our exploration of the design space of implementations by studying call-by-name, call-by-need and graph reduction. We ...
  58. [58]
    Summary of Evaluation Strategy in Haskell 2010
  59. [59]
    [PDF] Guy L. Steele Jr. Thinking Machines Corporation 245 First Street ...
    The simple but powerful macro facility on which DEFMACRO is based was introduced in MacLisp in the mid-1960's. See section 3.3. Other major improvements over ...
  60. [60]
    Macros that work | Proceedings of the 18th ACM SIGPLAN-SIGACT ...
    William Clinger and Jonathan Rees, editors. Revised4 report on the ... Index Terms. Macros that work. Software and its engineering · Software creation ...
  61. [61]
    Summary of Section 6.10.3 'Macro Replacement' from the C11 draft standard (N1570).
  62. [62]
    MULTILISP: a language for concurrent symbolic computation
    Index Terms. MULTILISP: a language for concurrent symbolic computation ... View or Download as a PDF file. PDF. eReader. View online with eReader . eReader ...
  63. [63]
    [PDF] A Concurrent Lambda Calculus with Futures
    Abstract. We introduce a new concurrent lambda calculus with futures, λ(fut), to model the operational semantics of Alice, a concurrent exten- sion of ML.
  64. [64]
    [PDF] An Adaptive Evaluation Strategy for Non-Strict Programs - Microsoft
    By contrast, the optimistic version evaluates the addition speculatively, so the program runs in constant space. Optimistic evaluation speeds this program.
  65. [65]
    [PDF] Adaptive evaluation of non-strict programs
    Optimistic Evaluation blends lazy and eager evaluation under the guidance of an online pro- filer. The online profiler observes the running program and decides ...
  66. [66]
    Optimistic evaluation | Proceedings of the eighth ACM SIGPLAN ...
    Optimistic evaluation: an adaptive evaluation strategy for non-strict programs. Lazy programs are beautiful, but they are slow because they build many thunks.
  67. [67]
    Call-by-Need Is Clairvoyant Call-by-Value - ACM Digital Library
    Call-by-need evaluation, also known as lazy evaluation, provides two key benefits: compositional programming and infinite data. The standard semantics for ...
  68. [68]
    6.14. Bang patterns and Strict Haskell
    GHC supports three extensions to allow the programmer to specify use of strict (call-by-value) evaluation rather than lazy (call-by-need) evaluation.
  69. [69]
    [PDF] Space leaks exploration in Haskell - Stanford Computer Science
    A space leak occurs when a program uses more memory than necessary, and in Haskell, the memory is released later than expected.
  70. [70]
    Strictness analysis aids time analysis - ACM Digital Library
    In a strict functional language, analysis of time complexity is straightforward, because of the following compositional rule: The time to evaluate (ƒ(g x)) ...
  71. [71]
    [PDF] Functional Programming in CLEAN
    The purpose of this book is to teach practical programming skills using the state-of-the art pure functional language CONCURRENT CLEAN. CLEAN has many aspects ...
  72. [72]
    Lazy Expressions - F# | Microsoft Learn
    Oct 3, 2025 · Lazy expressions are expressions that are not evaluated immediately, but are instead evaluated when the result is needed.
  73. [73]
    Futures and Promises | Scala Documentation
    A Future is a placeholder object for a value that may not yet exist. Generally, the value of the Future is supplied concurrently and can subsequently be used.
  74. [74]
    Futures and promises in Haskell and Scala - ACM Digital Library
    Futures and promises are a high-level concurrency construct to aid the user in writing scalable and correct asynchronous programs.Abstract · Information & Contributors · Published In
  75. [75]
    What Is a Race Condition? | Baeldung on Computer Science
    Mar 26, 2025 · A race condition is a condition of a program where its behavior depends on relative timing or interleaving of multiple threads or processes.
  76. [76]
    Implementation of speculative parallelism in functional languages
    The performance of speculative evaluation is compared with that of lazy evaluation, and the necessary conditions under which speculative evaluation performs ...
  77. [77]
    [PDF] Runtime Support for Multicore Haskell - Microsoft
    Purity-by-default means that there should be a wealth of inherent parallelism in. Haskell code, and the ubiquitous lazy evaluation model means that, in a ...
  78. [78]
    Easy Parallel Programming with Scala Futures
    Sep 3, 2019 · This blog post dives into Scala's Futures: how to use them, how they work, and how they can give you much more flexibility to leverage parallelism in your code.
  79. [79]
    parallel "Folding" in Haskell - Stack Overflow
    Oct 1, 2013 · A parallel fold splits its input list into chunks, folds each chunk separately (in parallel), and then combines the results to derive the final ...
  80. [80]
    Practical and effective higher-order optimizations - ACM Digital Library
    Inlining is an optimization that replaces a call to a function with that function's body. This optimization not only reduces the overhead of a function call ...
  81. [81]
    (PDF) Techniques and Trade-Offs in Function Inlining Optimization
    Nov 18, 2023 · This paper reviews the different techniques used to optimize function inlining, including simple textual substitution, profile-guided inlining, ...
  82. [82]
    Lazy type inference and program analysis - ScienceDirect.com
    The key idea is the introduction of laziness into type inference. We present the basic notions in the context of a higher-order strictness analysis of list- ...
  83. [83]
    How the JIT compiler boosts Java performance in OpenJDK
    Jun 23, 2021 · Just-in-time (JIT) compilation is central to peak performance in modern virtual machines, but it comes with trade-offs.
  84. [84]
    [PDF] Lazy evaluation illustrated for Haskell divers - GitHub Pages
    To avoid unnecessary computation, normal order reduction chooses to apply the function rather than first evaluating the argument. Normal order reduction ...
  85. [85]
    Laurence Tratt: Tail Call Optimization
    Dec 21, 2004 · The standard argument in favour of tail call optimization runs along the lines of “it's so useful, and it doesn't cost anything to implement”.
  86. [86]
    [PDF] Algorithms and Data Structures for Efficient Free Space Reclamation ...
    Mar 2, 2017 · 2.2 Lazy Reclamation of Free Space. Traditionally, file systems keep their free space metadata up to date. For instance, if objects in the ...
  87. [87]
    A dynamic optimization framework for a Java Just-In-Time compiler
    Aug 7, 2025 · Abstract. The high performance implementation of Java Virtual Machines (JVM) and just-in-time (JIT) compilers is directed toward adaptive ...