
Jensen's device

Jensen's device is a technique devised by the Danish computer scientist Jørn Jensen for the ALGOL 60 programming language, exploiting its call-by-name parameter passing mechanism to enable the repeated evaluation of expressions tied to a loop variable within a procedure. The device allows for concise implementation of iterative computations, such as summing a series where each term depends on an external variable updated during iteration, without embedding the loop logic directly in the calling code. A classic example computes the 100th harmonic number (approximately 5.187) by calling a generic summation procedure with the bound variable i, the limits 1 to 100, and the term 1/i, all passed by name except the fixed limits. In ALGOL 60, call-by-name treats the actual parameter as a textual substitute for the formal parameter, re-evaluating it each time the formal parameter is referenced inside the procedure body, which effectively makes the term a function of the bound variable. This is demonstrated in the following code for the summation procedure:
real procedure sum(i, low, high, term);
value low, high;
integer i, low, high; real term;
begin
  real s;
  s := 0;
  for i := low step 1 until high do s := s + term;
  sum := s
end
When invoked as sum(i, 1, 100, 1/i), where i is an integer variable declared by the caller, the expression 1/i is substituted and re-evaluated for each value of i from 1 to 100, yielding the correct harmonic sum. If the term were passed by value instead, it would evaluate only once using the caller's uninitialized i, leading to undefined or erroneous results (such as a division by zero or a garbage value). Jensen's device, first documented around the development of the early ALGOL 60 implementations, served as a test case for compiler correctness and highlighted the power—and potential inefficiency—of call-by-name, as each evaluation incurs overhead comparable to a parameterless procedure call. It influenced discussions on parameter passing in subsequent languages, though call-by-name was largely abandoned in favor of call-by-value or call-by-reference due to performance concerns. Modern emulations appear in languages like C++ using templates or lambdas to approximate the behavior.
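As a concrete illustration of such an emulation, the following C++ sketch approximates the device with a lambda standing in for the by-name term and a reference for the by-name index; the helper name sum_by_name and its exact shape are assumptions of this sketch rather than any standard API, and reference capture only roughly mirrors ALGOL 60's copy-rule semantics.

#include <functional>
#include <iostream>

// Hypothetical helper approximating the ALGOL 60 sum procedure: the index is
// passed by reference and the term as a deferred lambda, so the caller's
// expression is re-evaluated with the updated index on every iteration.
double sum_by_name(int& i, int low, int high, const std::function<double()>& term) {
    double s = 0.0;
    for (i = low; i <= high; ++i)
        s += term();                      // re-evaluate the caller's expression
    return s;
}

int main() {
    int i = 0;                            // the caller's index variable
    double h = sum_by_name(i, 1, 100, [&] { return 1.0 / i; });
    std::cout << h << '\n';               // prints roughly 5.18738, the 100th harmonic number
}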

Background

Call-by-Name Parameter Passing

Call-by-name parameter passing is an evaluation strategy in which the argument expression provided to a function is not evaluated prior to the function call; instead, the actual parameter is textually substituted for each occurrence of the corresponding formal parameter within the function body, with evaluation occurring only when the substituted expression is needed during execution. This substitution effectively delays the computation of the argument until its value is required, treating the parameter as a symbolic name that is resolved in the caller's context each time it is referenced. In contrast to call-by-value, where arguments are fully evaluated before the call and a copy of the resulting value is passed to the procedure—ensuring no side effects on the original arguments—call-by-name avoids premature evaluation, potentially leading to multiple computations of the same expression if the parameter is used more than once. Unlike call-by-reference, which passes the address of the argument to allow direct modification of the original variable without re-evaluation, call-by-name performs fresh evaluations of the substituted expression in the caller's environment, which can introduce inconsistencies if the expression involves side effects or mutable state. A key feature of call-by-name is that it permits the argument expression to be re-evaluated each time the formal parameter is encountered, which can result in different values if the expression depends on changing variables or produces side effects, such as incrementing a counter or modifying global state. This delayed evaluation mechanism is often implemented using thunks—unevaluated closures that capture the argument expression for deferred computation. To illustrate, consider a simple procedure defined as follows in pseudocode:
procedure double(x);
begin
  x := x * 2
end
When called with double(a + b), the call-by-name strategy substitutes the expression a + b directly for x in the body, yielding the effective code (a + b) := (a + b) * 2. The right-hand occurrence of a + b is freshly evaluated at the assignment, potentially yielding a different value if a or b has changed since the call, while the left-hand occurrence is not a legal assignment target at all: in ALGOL 60 a by-name parameter that is assigned to must correspond to a variable, so this call illustrates both the re-evaluation behavior and a pitfall of the mechanism.
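The re-evaluation aspect can be made concrete with a small C++ sketch in which the by-name argument is modeled as a lambda (a thunk) that the callee forces at each reference; the function name twice_by_name is a hypothetical illustration, not a real API.

#include <functional>
#include <iostream>

// Hypothetical callee that references its "by-name" parameter twice; the thunk
// stands in for the unevaluated argument expression.
int twice_by_name(const std::function<int()>& x) {
    return x() + x();        // the argument expression is evaluated at each reference
}

int main() {
    int a = 1;
    // The argument expression ++a has a side effect, so its two evaluations
    // observe different states: 2 + 3 = 5. Under call-by-value, ++a would be
    // evaluated once and the call would yield 2 + 2 = 4.
    std::cout << twice_by_name([&] { return ++a; }) << '\n';   // prints 5
}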

Historical Context in ALGOL 60

ALGOL 60 emerged from an international collaborative effort in the late 1950s and early 1960s, spearheaded by the Association for Computing Machinery (ACM) in the United States and the Gesellschaft für Angewandte Mathematik und Mechanik (GAMM) in Europe, aimed at establishing a standardized language for expressing algorithms in a machine-independent manner. The project built upon the earlier International Algebraic Language (IAL) proposal from the 1958 Zurich conference, involving delegates from multiple countries who convened key meetings, including the pivotal Paris conference in January 1960, to refine the language's syntax and semantics. This standardization drive addressed the proliferation of machine-specific programming languages, seeking to create a universal tool for scientific and mathematical computation. A deliberate design choice in ALGOL 60 was the inclusion of call-by-name parameter passing alongside call-by-value, selected to facilitate the direct translation of mathematical expressions into code and to enable forms of delayed evaluation, drawing conceptual inspiration from mathematical principles of substitution. This mechanism allowed procedures to treat formal parameters as textual substitutes for actual arguments, promoting expressiveness in algorithmic descriptions without premature evaluation, though it introduced complexities in implementation. The feature was formalized during the Paris revisions to resolve ambiguities in parameter handling inherited from the IAL draft, ensuring compatibility with recursive and block-structured programming. Peter Naur, a Danish computer scientist, served as the editor of the seminal ALGOL 60 Report, published in May 1960 in Communications of the ACM, which provided a rigorous, formal definition of the language using his refined Backus-Naur Form (BNF) notation. Early implementations followed swiftly, including the GIER ALGOL compiler developed by Regnecentralen in Denmark for the GIER computer, released in 1962 and notable for its faithful handling of ALGOL 60's advanced features such as recursion and call-by-name parameter passing. In the early 1960s, the Danish programmer Jørn Jensen, working at Regnecentralen alongside Naur on the GIER compiler, began exploring innovative applications of call-by-name, leading to what would later be known as Jensen's device, a technique for concise numerical computation through parameter substitution. This work highlighted the practical potential of ALGOL 60's design choices in real-world programming challenges.

Core Mechanism

Thunk-Based Evaluation

In the implementation of call-by-name parameter passing, a thunk serves as a delayed evaluation mechanism, essentially a parameterless subroutine or closure that encapsulates an unevaluated expression from the actual parameter. This structure allows the expression to be evaluated dynamically only when the corresponding formal parameter is referenced within the procedure body, capturing the current state of variables at each invocation. Thunks thus enable the re-evaluation of parameters on each use, distinguishing call-by-name from eager evaluation strategies like call-by-value. The process begins at procedure invocation, where each call-by-name argument is transformed into a thunk by the compiler or translator. This thunk is generated as an implicit subroutine that, upon activation, computes and returns the value of the original expression under the prevailing bindings. For instance, if the actual parameter is an expression involving variables that may change during execution, such as the controlled variable of a for loop, the thunk ensures fresh computation each time, which can introduce side effects if the expression modifies shared state. In implementations, thunks are often represented as code blocks with entry and exit operations, such as block entry (BE) and end implicit subroutine (EIS), facilitating recursive activations without premature evaluation. In the classic Jensen's device, the thunk evaluates the term in the caller's environment, so references such as the index variable resolve to the caller's variable, which the procedure steps through its own by-name index parameter. To illustrate, consider a declaration like:
real procedure sum(i, low, high, expr);
value low, high;
integer i, low, high;
real expr;
begin
  real s;
  s := 0;
  for i := low step 1 until high do
    s := s + expr;
  sum := s
end
Here, expr is treated as a thunk. When the procedure body references expr inside the for loop, the thunk is executed anew for each iteration, evaluating the original expression with the current value of i (e.g., when called as sum(i, 1, 100, 1/i), where i is an integer variable of the caller stepped through the by-name parameter i). This evaluation occurs at run time, preserving the semantics of call-by-name. This thunk-based approach bears a close relation to the lambda calculus, where thunks function analogously to unevaluated lambda abstractions that bind free variables from the calling environment. Peter Landin's semantic mapping of ALGOL 60 constructs to Church's lambda notation highlights this correspondence, portraying call-by-name parameters as lambda expressions delayed until application, thereby providing a formal foundation for the delayed evaluation thunks implement.
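The thunk idea can be sketched explicitly in C++ by building the deferred computation at the call site and counting how often it is forced; Thunk and sum_by_name are names assumed for this illustration, and reference capture is used as a rough stand-in for evaluation in the caller's environment.

#include <functional>
#include <iostream>

using Thunk = std::function<double()>;    // a deferred, re-invocable computation

// Assumed emulation of the declaration above: the by-name index is a reference
// and the by-name expression is a thunk forced once per iteration.
double sum_by_name(int& i, int low, int high, const Thunk& expr) {
    double s = 0.0;
    for (i = low; i <= high; ++i)
        s += expr();                      // forcing the thunk ~ entering the implicit subroutine
    return s;
}

int main() {
    int i = 0, forced = 0;
    // The thunk for the actual parameter 1/i is built where the call occurs and
    // captures the caller's i by reference, mimicking evaluation in the caller's context.
    Thunk one_over_i = [&] { ++forced; return 1.0 / i; };
    double h = sum_by_name(i, 1, 100, one_over_i);
    std::cout << h << ' ' << forced << '\n';   // ~5.18738, forced 100 times
}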

Parameter Substitution and Side Effects

In Jensen's device, the core mechanism relies on call-by-name parameter passing, where the actual expression is textually substituted into the procedure body each time the formal parameter is referenced, rather than being evaluated once at the call site. This substitution occurs dynamically during execution, allowing the procedure to treat the parameter as if it were inline code. For instance, consider a summation procedure defined as:
real procedure sum(i, low, high, term);
value low, high;
integer i, low, high;
real term;
begin
    real s;
    s := 0;
    for i := low step 1 until high do s := s + term;
    sum := s
end
When invoked as sum(i, 1, 100, 1/i), the formal term is replaced by 1/i in the procedure body and the formal i by the caller's variable i. The for loop steps i from 1 to 100, and each evaluation of term reflects the current value of i, effectively computing the harmonic sum \sum_{i=1}^{100} 1/i. The key strength of this approach lies in its ability to let the substituted expression term depend on the index i or on other by-name parameters, and the expression can itself contain conditional or nested computations that alter what is contributed on each iteration. This dynamic re-evaluation during the loop iterations allows for sophisticated behaviors, such as skipping terms or incorporating data dependencies that would be cumbersome in call-by-value languages. Call-by-name is typically implemented using thunks—delayed computations that encapsulate the parameter expression—but the textual-substitution model directly facilitates these effects without requiring explicit passing of mutable references. Because the index itself is passed by name (e.g., sum(k, 1, 100, a[k]), where k is a variable of the caller), the procedure modifies the caller's k directly, which is what makes the device work but also constitutes a side effect on the caller. More generally, this substitution introduces significant side effects, as multiple evaluations of the actual parameter can unexpectedly alter external program state. For example, if term includes assignments or function calls that modify variables outside the procedure, such as incrementing a counter or changing array elements based on the current i, the overall computation may deviate from intuitive expectations, leading to non-deterministic or erroneous results depending on the order of evaluations. In full ALGOL 60, function designators and procedures permit such alterations to external variables, exacerbating these issues, whereas the language's official SUBSET restricts side effects to promote predictability.
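The side-effect behavior can be sketched in C++ with the same assumed sum_by_name emulation (reference for the index, lambda for the term); here the term expression also writes into a caller-owned array and counter, so each of the procedure's evaluations mutates external state.

#include <functional>
#include <iostream>
#include <vector>

// Assumed emulation used throughout these sketches: by-name index as a
// reference, by-name term as a lambda re-evaluated every iteration.
double sum_by_name(int& i, int low, int high, const std::function<double()>& term) {
    double s = 0.0;
    for (i = low; i <= high; ++i) s += term();
    return s;
}

int main() {
    int i = 0, evaluations = 0;
    std::vector<double> a(11, 0.0);
    // The term both yields a value and, as a side effect, records i*i in the
    // caller's array and bumps a counter on every evaluation.
    double s = sum_by_name(i, 1, 10, [&] { ++evaluations; a[i] = i * i; return a[i]; });
    std::cout << s << ' ' << evaluations << '\n';   // 385 and 10 evaluations; a[1..10] now filled
}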

Examples

Basic Array Summation

The canonical example of Jensen's device involves a procedure that computes the sum of array elements from a lower to an upper bound, leveraging call-by-name parameter passing to substitute and re-evaluate the index variable and term expression within the loop. In ALGOL 60, this is implemented as follows:
real procedure sum(i, low, high, expr);
    value low, high;
    integer i, low, high;
    real expr;
begin
    real s;
    s := 0;
    for i := low step 1 until high do
        s := s + expr;
    sum := s
end
This defines i and expr as call-by-name parameters, while low and high are passed by value; the for loop increments i from low to high in steps of 1, accumulating the value of expr each time. A call such as result := sum(j, 1, n, a[j]) substitutes the caller's variable j for the formal i and the expression a[j] for expr, enabling the computation of \sum_{k=1}^{n} a_k without the procedure directly accessing the array. During execution, call-by-name substitution replaces each occurrence of the formal parameters with their actual arguments, evaluated in the caller's context, causing re-evaluation on every access. For the call result := sum(j, 1, 3, a[j]) with array a = [1, 2, 3] and j an integer variable declared by the caller:
  • The first iteration assigns j := 1 (substituting for i := low), evaluates expr as a[1] (yielding 1), and adds it to s (now s = 1).
  • It then increments j to 2, re-evaluates expr as a[2] (yielding 2), and adds it (s = 3).
  • Finally, j becomes 3, expr evaluates to a[3] (yielding 3), adding it to give s = 6.
  • The procedure returns 6, stored in result.
This process demonstrates how the index j updates across iterations due to textual substitution, effectively summing the array elements.
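The same trace can be reproduced with a small C++ sketch of the emulation used earlier (sum_by_name is an assumed name, with a reference for the by-name index and a lambda for the by-name expression).

#include <functional>
#include <iostream>

// Assumed C++ rendering of the walkthrough: the caller's j is passed by
// reference and a[j] as a lambda that is re-evaluated on every iteration.
double sum_by_name(int& i, int low, int high, const std::function<double()>& expr) {
    double s = 0.0;
    for (i = low; i <= high; ++i) s += expr();
    return s;
}

int main() {
    double a[] = {0.0, 1.0, 2.0, 3.0};   // used 1-based: a[1..3] = 1, 2, 3
    int j = 0;
    double result = sum_by_name(j, 1, 3, [&] { return a[j]; });
    std::cout << result << '\n';          // prints 6, matching the step-by-step trace
}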

Nested or Double Summations

Jensen's device can be extended to compute nested or double summations, such as the sum \sum_{i=1}^{m} \sum_{j=1}^{n} a[i,j], where a is a two-dimensional array, by using recursive calls to the summation procedure with indices passed by name. This approach demonstrates the technique's composability for multi-dimensional computations without directly passing array data to a single procedure. The caller invokes nested summations, such as total := sum(i, 1, m, sum(j, 1, n, a[i,j])), where i and j are dummy integer variables, m and n define the bounds, and a[i,j] is the term to sum. The inner sum computes \sum_{j=1}^{n} a[i,j] for the current i, and the outer sum aggregates those over i from 1 to m. During execution, ALGOL 60's call-by-name semantics perform textual substitution of the actual parameters into the procedure bodies each time they are referenced. For a fixed i, the inner loop varies j from 1 to n, evaluating a[i,j] with the current bindings, while the outer loop updates i and re-evaluates the inner sum. Changes to j in the inner call do not affect the outer i, enabling correct nested computation equivalent to explicit double loops. This recursive structure highlights the power of call-by-name for composing iterations.
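Under the same assumed C++ emulation, the nested call can be written directly by making the outer term a lambda that itself invokes the summation helper for the inner index.

#include <functional>
#include <iostream>

// Assumed emulation: by-name index as a reference, by-name term as a lambda.
double sum_by_name(int& i, int low, int high, const std::function<double()>& expr) {
    double s = 0.0;
    for (i = low; i <= high; ++i) s += expr();
    return s;
}

int main() {
    const int m = 2, n = 3;
    double a[m + 1][n + 1] = {};          // used 1-based
    for (int r = 1; r <= m; ++r)
        for (int c = 1; c <= n; ++c)
            a[r][c] = 10 * r + c;         // a[i][j] = 10i + j
    int i = 0, j = 0;
    // Mirrors total := sum(i, 1, m, sum(j, 1, n, a[i,j])): the inner sum is
    // re-run for each value of the outer index i.
    double total = sum_by_name(i, 1, m, [&] {
        return sum_by_name(j, 1, n, [&] { return a[i][j]; });
    });
    std::cout << total << '\n';           // (11+12+13) + (21+22+23) = 102
}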

General Problem Solver (GPS) Application

In the paper "ALGOL 60 Confidential," Donald E. Knuth and Jack N. Merner introduced a procedure called the (GPS) as a demonstration of 's call-by-name parameter passing mechanism, which underpins Jensen's device. This GPS procedure exploits the delayed evaluation of formal parameters to enable iterative computations with side effects within a compact framework, allowing it to solve diverse problems by repeatedly substituting and evaluating expressions passed by name. Unlike straightforward numerical summations, GPS facilitates more intricate tasks by treating parameters as thunks—unevaluated expressions—that can be dynamically referenced during execution, effectively simulating search-like behaviors through side effects. The is defined as follows:
real procedure GPS(I, N, Z, V);
real I, N, Z, V;
begin
    for I := 1 step 1 until N do Z := V;
    GPS := 1
end
Here, the parameters I, N, Z, and V are passed by name, meaning they are substituted as expressions into the body each time they are referenced, so that the repeated assignment Z := V can update targets such as array elements or external variables over N iterations. A key example is matrix multiplication, where GPS calls are nested to compute C = A \times B; one formulation, using explicit outer loops and exploiting the fact that GPS returns 1, is:
for I := 1 step 1 until p do
    for J := 1 step 1 until q do
        C[I,J] := GPS(K, 1, C[I,J], 0) × GPS(K, n, C[I,J], C[I,J] + A[I,K] × B[K,J]) × C[I,J]
More complex applications include finding the m-th prime by nesting GPS calls, with conditional expressions inside the parameters that advance a candidate number until it satisfies a primality condition, leveraging repeated re-evaluation to explore values. The outcome is a concise idiom for stateful computations such as prime generation or elementary linear algebra expressed in a single procedure, demonstrating how call-by-name can turn a simple loop into a goal-oriented problem-solving tool.
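A loose C++ sketch of GPS conveys the idea: the loop variable is a reference, the assignable target Z is modeled as a lambda returning an lvalue, and the source V as a lambda re-evaluated each step (all names and the accumulation example are assumptions of this sketch, not the paper's formulation).

#include <functional>
#include <iostream>

// Loose emulation of GPS(I, N, Z, V): assign Z := V for I = 1..N, return 1.
double GPS(int& I, int N, const std::function<double&()>& Z,
           const std::function<double()>& V) {
    for (I = 1; I <= N; ++I) Z() = V();
    return 1.0;
}

int main() {
    const int n = 4;
    double a[n + 1] = {0, 1, 2, 3, 4};
    double s = 0.0;
    int k = 0;
    // Accumulation example: each step performs s := s + a[k], so GPS acts as a
    // generic loop driven entirely by its by-name-style parameters.
    GPS(k, n, [&]() -> double& { return s; }, [&] { return s + a[k]; });
    std::cout << s << '\n';   // 1 + 2 + 3 + 4 = 10
}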

Implementations

Simulation in C Using Macros

Jensen's device, originally dependent on ALGOL 60's call-by-name parameter passing, can be emulated in C—a strictly call-by-value language—through the use of preprocessor macros that perform textual substitution of expressions. This approach approximates the deferred evaluation of call-by-name by expanding the macro body with the actual argument text at each point of use, so that the expression is re-evaluated on every iteration, with the substitution itself performed at compile time. A common example is a macro for computing the sum of an array expression or series, mimicking the summation procedure central to Jensen's device. The macro substitutes the expression directly into a loop, allowing flexible operations like dot products or reciprocal sums without passing functions or lambdas. Consider the following definition in C:
#define SUM(Expr, Term, Lbound, Ubound, Inc, Total) \
    for (Total = 0, Term = Lbound; Term <= Ubound; Term += Inc) \
        Total += (Expr)
To compute the dot product of two arrays a and b of length 7, one would invoke SUM(a[X]*b[X], X, 0, 6, 1, T); where T holds the result. The preprocessor replaces Expr with a[X]*b[X] and Term with X, expanding to a loop that evaluates the multiplication for each index X from 0 to 6, simulating the textual substitution of ALGOL 60's name parameters. Similarly, for summing reciprocals, SUM(1.0/a[X], X, 0, 6, 1, T); substitutes and evaluates the division anew each iteration. This macro-based simulation differs fundamentally from true thunks in ALGOL 60, as the C preprocessor performs static textual replacement without runtime mechanisms for lazy evaluation or automatic variable renaming to avoid name clashes. Consequently, it lacks native support for deferred computation, potentially leading to inefficient multiple evaluations of side-effecting expressions and compile-time errors from identifier conflicts, unlike the dynamic thunk resolution in the original implementations. The technique gained popularity in the late 1970s and 1980s for pedagogical purposes in early C programming texts, where macros illustrated advanced concepts like generic programming despite C's limitations.

Support in Functional Programming Languages

Functional programming languages, particularly those with built-in support for lazy evaluation, provide native mechanisms that emulate the deferred computation and repeated evaluation aspects of Jensen's device without relying on explicit call-by-name parameters. In Haskell, lazy evaluation is implemented via call-by-need semantics, where unevaluated expressions are represented as thunks—suspended computations that are shared and evaluated only upon demand, avoiding the inefficiencies of call-by-name's multiple evaluations. This approach allows for seamless integration of delayed computations in summations and other iterative constructs, achieving similar expressive power to Jensen's device while maintaining purity and efficiency. A representative example in Haskell demonstrates this through higher-order functions, where the summation term is passed as a lambda expression that is lazily applied within a list comprehension. Consider the following definition:
sumTerm :: (Int -> Double) -> Int -> Int -> Double
sumTerm term low high = sum [term j | j <- [low .. high]]

-- Usage for harmonic series approximation
harmonic :: Int -> Double
harmonic n = sumTerm (\j -> 1.0 / fromIntegral j) 1 n
Here, the lambda \j -> 1.0 / fromIntegral j acts as a thunk-like delayed computation, evaluated for each j in the range without textual substitution, mirroring the effect of substituting the term into the loop body in ALGOL 60. In Lisp variants, such as Common Lisp, native call-by-name is absent, but macros enable simulation of Jensen's device by wrapping the term in a lambda, effectively creating a closure for repeated evaluation. For instance, a macro can define a summation form where the term is captured in a lambda:
(defun %sum (lo hi func)
  (loop for i from lo to hi sum (funcall func i)))

(defmacro sum (var lo hi term)
  `(%sum ,lo ,hi (lambda (,var) ,term)))
This expands to a call to %sum with a lambda that binds var and evaluates term in that context, then loops applying the function, akin to call-by-name, though it requires careful hygiene to avoid variable capture. The influence of Jensen's device and call-by-name from ALGOL 60 extended to post-ALGOL functional languages, shaping the adoption of laziness in systems like Miranda, which introduced lazy evaluation in 1985 using graph reduction for efficient thunk handling, and subsequently Haskell in 1990, which refined this with polymorphic types and call-by-need to mitigate repeated computations. In Scheme, a Lisp dialect, explicit delays via delay and force primitives allow programmer-controlled laziness for similar effects, though without default lazy semantics. These features enable functional languages to support Jensen's device-like patterns natively, prioritizing referential transparency over the unpredictable substitutions of call-by-name.

Criticism and Limitations

Unpredictable Side Effects

Jensen's device relies on call-by-name parameter passing, where actual parameters are re-evaluated each time the corresponding formal parameter is referenced within the procedure body. This mechanism, while enabling powerful abstractions like iterated summations, introduces significant risks when combined with imperative features such as assignments that produce side effects. Multiple evaluations of a by-name parameter can unexpectedly alter shared variables in the caller's environment, leading to non-intuitive program behavior. For instance, consider a summation procedure sum(e, i, low, high) that initializes i to low, increments it in a loop up to high, and accumulates the value of expression e each time. If called as result := sum(A[j], j, 1, 10), the expression A[j] is re-evaluated on each iteration, and because the procedure itself updates j, the caller's j is left modified after the call; if the caller (or the expression) relies on j elsewhere, or if e also alters other state, the computation can deviate from what the code appears to express. Debugging these issues is particularly challenging due to the textual semantics underlying call-by-name, which effectively inlines the actual parameter's code at each use site without explicit indication in the source. This obscures the flow of control and data, as the expanded code may interleave with the procedure's body in ways that are not apparent from the high-level structure, making it difficult to trace variable modifications or predict evaluation order. Programmers must mentally simulate multiple expansions to uncover hidden dependencies, often leading to subtle bugs that surface only under specific calling conditions. Call-by-name evaluation thus amplifies the hazards of imperative side effects, turning simple parameter passing into a source of errors.
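A short C++ sketch with the assumed sum_by_name emulation makes the hazard visible: the call computes the expected total, but it also leaves the caller's index variable changed, which can silently break later code that still uses it.

#include <functional>
#include <iostream>

// Assumed emulation: the caller's own index variable drives the loop.
double sum_by_name(int& i, int low, int high, const std::function<double()>& term) {
    double s = 0.0;
    for (i = low; i <= high; ++i) s += term();
    return s;
}

int main() {
    double A[11];
    for (int k = 1; k <= 10; ++k) A[k] = k;
    int j = 5;                                       // the caller also uses j for other purposes
    double result = sum_by_name(j, 1, 10, [&] { return A[j]; });
    std::cout << result << ' ' << j << '\n';         // 55, but j is now 11 rather than 5
}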

Performance and Efficiency Drawbacks

One primary efficiency drawback of Jensen's device stems from its reliance on call-by-name parameter passing, where the formal parameter is treated as an unevaluated thunk that is re-evaluated each time it is referenced within the procedure body. This repeated computation can lead to significant runtime overhead, particularly when the expression passed as the actual parameter involves complex or costly operations. For instance, in a procedure iterating n times, a simple expression like i * i is evaluated n times, but if the expression includes linear-time computations—such as another inner loop or recursive call—the total cost can degrade to O(n²), as each of the n iterations triggers redundant work. Historical implementations of ALGOL 60, such as those on the Electrologica X1 and X8 computers, highlighted these costs through measured overheads: in one comparison of the two machines, a statement relying on Jensen's device incurred a code-length overhead of roughly 21% and an even larger execution-time overhead, and the same comparison reported instruction-count differences as large as 93% on individual examples such as a Runge-Kutta routine. Implementations used techniques like execute instructions to handle thunks more efficiently, but the repeated evaluations still carried notable performance penalties. In performance-critical applications, this inefficiency contrasts sharply with call-by-value semantics, where arguments are evaluated once before the procedure call and passed as values, avoiding redundant computations and enabling better compiler optimizations like inlining. As a result, call-by-value became the preferred mechanism in subsequent languages like Pascal and C, prioritizing predictable and efficient execution over the flexibility of call-by-name.
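The asymptotic point can be sketched with the assumed C++ emulation: when the by-name term hides an O(n) computation, re-evaluating it on each of the n iterations performs O(n²) total work, whereas evaluating it once in call-by-value style would not.

#include <functional>
#include <iostream>

// Assumed emulation reused from the earlier sketches.
double sum_by_name(int& i, int low, int high, const std::function<double()>& term) {
    double s = 0.0;
    for (i = low; i <= high; ++i) s += term();
    return s;
}

int main() {
    const int n = 1000;
    long long inner_steps = 0;
    int i = 0;
    // The term itself loops over n elements (here just adding 1s), so the
    // whole summation performs n * n inner steps under by-name re-evaluation.
    sum_by_name(i, 1, n, [&] {
        double t = 0.0;
        for (int k = 1; k <= n; ++k) { t += 1.0; ++inner_steps; }
        return t / n;
    });
    std::cout << inner_steps << '\n';   // 1000000 inner iterations for n = 1000
}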

References

  1. Jensen's device – Comp.compilers discussion on call-by-name and Jensen's device in ALGOL 60.
  2. C++: Jensen's Device – TFE Times.
  3. Jensen, J., and Naur, P. An implementation of ALGOL 60 procedures. BIT 1, 38–47 (1961). https://doi.org/10.1007/BF01961950
  4. The C preprocessor and Jensen's device. ACM Digital Library, April 1990.
  5. Pass-By-Name Parameter Passing (March 6, 2002).
  6. Parameter-Passing Mechanisms, section 6.1 — Programming Languages.
  7. CSE305 Week 10: Subprograms (Sebesta chs. 9…) [PDF].
  8. The History of the ALGOL Effort [PDF] – Heer de Beer.org.
  9. Papers on the history of ALGOL – Software Preservation Group, January 3, 2025.
  10. The Remaining Trouble Spots in ALGOL 60 [PDF].
  11. Naur, P. (ed.). Report on the algorithmic language ALGOL 60. Communications of the ACM 3(5), 299–311 (May 1960). https://doi.org/10.1145/367236.367262
  12. The Design of the GIER ALGOL Compiler, Part I [PDF].
  13. ALGOL 60 Implementation [PDF] – Software Preservation Group.
  14. Die Grundlehren der mathematischen Wissenschaften – ALGOL 60 [PDF].
  15. Subroutines & Parameter Passing [PDF], October 20, 2010.
  16. Parameter Passing [PDF].
  17. Knuth, D. E., and Merner, J. N. ALGOL 60 confidential. Communications of the ACM.
  19. Some History of Functional Programming Languages [PDF].
  20. Block-structured procedural languages: Algol and Pascal [PDF].
  21. Comp 411 Principles of Programming Languages, Lecture 15 [PDF].
  22. The Craft of Programming [PDF] – CMU School of Computer Science, June 1, 1981.
  23. Anatomy of Programming Languages [PDF] – Texas Computer Science, January 20, 2013.
  24. A comparison between the ALGOL 60 implementations on the … [PDF], September 16, 2008.