
In-place algorithm

In computer science, an in-place algorithm is a procedure that transforms its input directly to produce the output, using only a constant amount of additional memory—typically O(1) extra space—beyond the space required to store the input itself. This is achieved by overwriting or permuting elements within the original data structure, such as an array, rather than allocating new auxiliary storage proportional to the input size. The concept emphasizes space efficiency while maintaining the algorithm's core functionality, distinguishing it from algorithms that require significant temporary space for operations like merging or copying.

In-place algorithms are particularly valuable in scenarios where memory is limited, such as embedded systems, large-scale data processing on single machines, or external-memory computing where working data must fit within available main memory. By minimizing space overhead, they enable handling larger inputs without swapping to disk or failing due to memory exhaustion, and they often improve cache locality by reducing data movement. However, this constraint can complicate implementation, as algorithms must carefully manage mutations to avoid data corruption, sometimes leading to higher constant factors in time complexity compared to space-liberated variants.

Classic examples of in-place algorithms include heapsort and quicksort for sorting arrays, both of which rearrange elements through swaps and partitions without extra arrays. Heapsort builds a max-heap in the array and repeatedly extracts the maximum element, achieving O(n log n) time in the worst case. Quicksort, on average also O(n log n), selects a pivot and recursively partitions the array around it, making it a staple in standard libraries despite its worst-case O(n²) risk. Beyond sorting, in-place techniques appear in geometric computing, such as algorithms for 3D convex hull computation in O(n log³ n) time or 2D Delaunay triangulation, which implicitly encode results via permutations of the input points.

Research on in-place algorithms has seen renewed interest since the late 1990s, driven by the need for space-efficient solutions in large-scale data processing and memory-constrained computing. Early work focused on sorting and searching, but extensions to parallel models, geometric problems, and even functional programming paradigms continue to evolve, with recent advances achieving polylogarithmic span in shared-memory settings. Despite challenges like immutability in functional languages or handling sparse data, in-place methods remain foundational for optimizing resource use in algorithm design.

Definition and Properties

Formal Definition

An in-place algorithm is one that transforms the input directly by modifying its elements in their original locations, requiring only a constant amount of auxiliary space, denoted as O(1), irrespective of the input size n. This auxiliary space bound ensures that the algorithm does not allocate additional memory proportional to the input, such as a full copy of the data, thereby operating within the confines of the given storage. In contrast to out-of-place algorithms, which generate a separate output structure while preserving the original input intact, in-place algorithms overwrite and restructure the input itself to produce the result. The core principle emphasizes efficiency in memory usage by avoiding the creation of extraneous data structures, allowing the computation to proceed with minimal overhead beyond the input array or structure.

The concept emerged in the context of early sorting algorithms developed in the 1960s, such as heapsort. Donald Knuth discussed related ideas using the term "in situ" in The Art of Computer Programming, Volume 3: Sorting and Searching (1973), highlighting direct manipulation for resource-constrained systems. Mathematically, the space complexity of an in-place algorithm is bounded by O(1), where the hidden constant accounts for implementation details like temporary variables, pointer storage, or bounded recursion depth that do not scale with n. This notation underscores the algorithm's independence from input scale in terms of extra memory, distinguishing it from higher-order space requirements in non-in-place approaches.
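The distinction can be illustrated with a minimal sketch in Python (the function names are purely illustrative and not drawn from any cited source): the first routine allocates an entirely new output list of length n, while the second overwrites the input using only a single loop index of auxiliary space:

def doubled_copy(values):
    # Out-of-place: builds and returns a new list of length n.
    return [2 * v for v in values]

def double_in_place(values):
    # In-place: overwrites each element; auxiliary space is one loop index, O(1).
    for i in range(len(values)):
        values[i] = 2 * values[i]

Both routines compute the same result, but only the second satisfies the O(1) auxiliary space bound of the definition above.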

Memory Characteristics

In-place algorithms are characterized by their use of constant auxiliary space, O(1), which encompasses both explicit extra variables—such as temporary storage for element exchanges—and implicit space requirements like the recursion call stack or system-level effects such as virtual memory utilization. This bound aligns with the formal definition of in-place operations, limiting additional storage beyond the input structure to a fixed amount independent of input size. Explicit auxiliary components typically involve minimal variables, ensuring that computations operate directly on the input without allocating proportional extra storage.

A key aspect of in-place memory management is the handling of temporary swaps during element exchanges, which often employs a single extra variable to hold one value while overwriting positions in the array. This approach maintains the O(1) space guarantee, as the temporary variable remains constant regardless of array size. For instance, swapping two elements a and b can be achieved with: \text{temp} = a; \quad a = b; \quad b = \text{temp} Such operations avoid the need for additional arrays, preserving the input's memory footprint.

However, certain in-place algorithms encounter edge cases where auxiliary space exceeds O(1) due to implicit costs, particularly in recursive implementations. For example, the standard quicksort algorithm, while operating directly on the input array, requires O(log n) space on average for the recursion stack to track subarray boundaries, though worst-case scenarios can reach O(n). This stack usage arises from the depth of recursive calls, highlighting how recursion can undermine the strict in-place ideal despite no explicit extra data structures.

The memory characteristics of in-place algorithms yield significant practical benefits, especially for large datasets where reduced auxiliary space enables processing inputs that exceed available memory in out-of-place counterparts. Additionally, their lower memory footprint minimizes cache misses and page faults, improving runtime performance by enhancing data locality within the memory hierarchy. These advantages make in-place methods particularly valuable in resource-constrained environments, such as embedded systems or mobile applications.
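The exchange pattern shown in the formula maps directly to code; as a small illustrative sketch in Python (helper names are hypothetical), both forms keep auxiliary storage at a single element plus two indices, independent of the array's length:

def swap(arr, i, j):
    # One temporary variable holds a single element: O(1) auxiliary space.
    temp = arr[i]
    arr[i] = arr[j]
    arr[j] = temp

def swap_tuple(arr, i, j):
    # Idiomatic Python form of the same constant-space exchange.
    arr[i], arr[j] = arr[j], arr[i]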

Practical Examples

Sorting Algorithms

Insertion sort is a fundamental in-place sorting algorithm that builds a sorted subarray incrementally by inserting each subsequent element into its correct position within the already sorted portion of the array. The process involves shifting larger elements one position to the right to make room for the inserted element, ensuring all operations occur within the original array without additional storage beyond a constant amount for indices and temporary variables. This results in O(1) extra space usage, making it truly in-place. The algorithm's pseudocode exemplifies this:
for i from 1 to n-1                      # elements before index i are already sorted
    key = array[i]                       # element to insert; one temporary variable
    j = i - 1
    while j >= 0 and array[j] > key      # shift larger elements one slot to the right
        array[j+1] = array[j]
        j = j - 1
    array[j+1] = key                     # place key into the gap left by the shifts
This implementation, described in detail in early computing literature, demonstrates the element shifts that maintain the sorted prefix's integrity.

Heapsort, introduced by J. W. J. Williams in 1964, operates entirely in-place by first constructing a max-heap from the input array and then repeatedly extracting the maximum element to build the sorted output. The heap construction phase rearranges elements within the array to satisfy the heap property, where for any node at index i, its parent at floor((i-1)/2) is greater than or equal to it. The sift-down (or heapify) operation restores the heap property after extraction by swapping the root with its largest child and recursively sifting down, while sift-up is used during building to promote smaller elements. These operations ensure O(1) extra space, as all modifications occur via swaps in the original array. The full process achieves O(n log n) time in the worst case.

The in-place variant of quicksort, as originally devised by C. A. R. Hoare in 1961, partitions the array around a pivot using swaps to rearrange elements such that all elements smaller than the pivot are on its left and larger ones on its right. This partitioning step, performed iteratively or recursively, avoids extra array allocation by exchanging elements directly in the input array; for example, two pointers traverse from opposite ends, swapping when a mismatch is found relative to the pivot. While the recursive version uses O(log n) average stack space, quicksort is generally considered in-place as the extra space is not proportional to the input size. Iterative implementations typically also use O(log n) space but can be optimized to reduce stack usage. The algorithm's efficiency stems from expected O(n log n) time, with in-place swaps ensuring minimal memory overhead.

In-place merge sort variants address the space inefficiency of traditional merge sort by performing merges without a full auxiliary array, using block-based or rotation techniques to merge sorted runs directly in the original array. A notable example is the practical in-place mergesort by Katajainen, Pasanen, and Teuhola from 1996, which employs a bottom-up approach with small block merges and rotations to minimize element moves while maintaining O(1) extra space. This method recursively divides the array and merges adjacent runs by swapping blocks and handling overlaps in-place, achieving O(n log n) time, though it is practically slower than out-of-place versions due to increased comparisons. Such variants are particularly useful in memory-constrained environments but trade some speed for space savings.

Regarding stability, insertion sort preserves the relative order of equal elements, making it stable, whereas heapsort and quicksort are unstable due to their swapping mechanisms that can reorder equal keys. This distinction highlights how in-place designs often sacrifice stability for efficiency, contrasting with out-of-place sorts like traditional merge sort.
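For concreteness, the following sketch (in Python, with illustrative names; not taken from any of the cited papers) shows the two in-place primitives described above—a sift-down step for heapsort and a Hoare-style two-pointer partition for quicksort—each of which rearranges the array using only a constant number of index and temporary variables:

def sift_down(a, start, end):
    # Restore the max-heap property in a[start:end] by pushing a[start] down.
    root = start
    while 2 * root + 1 < end:
        child = 2 * root + 1                       # left child
        if child + 1 < end and a[child] < a[child + 1]:
            child += 1                             # pick the larger child
        if a[root] >= a[child]:
            return
        a[root], a[child] = a[child], a[root]      # swap within the array
        root = child

def hoare_partition(a, lo, hi):
    # Two pointers move inward from opposite ends, swapping misplaced elements.
    pivot = a[(lo + hi) // 2]
    i, j = lo - 1, hi + 1
    while True:
        i += 1
        while a[i] < pivot:
            i += 1
        j -= 1
        while a[j] > pivot:
            j -= 1
        if i >= j:
            return j                               # boundary between the two halves
        a[i], a[j] = a[j], a[i]

In a full quicksort, the subranges a[lo..p] and a[p+1..hi], where p is the returned boundary, would then be sorted recursively or iteratively.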

Array and List Manipulations

In-place reversal is a core array transformation technique that rearranges elements such that the order is inverted without allocating auxiliary storage beyond a constant amount. The algorithm initializes two pointers: one at the array's start (left) and one at the end (right). It iteratively swaps the elements at these positions and advances the left pointer inward while retreating the right pointer until they cross, effectively performing roughly n/2 swaps for an array of length n. This yields a time complexity of O(n) and a space complexity of O(1), making it ideal for large datasets where memory is limited.

Array rotation shifts all elements by a specified distance d, either left or right, while preserving their relative order and operating solely on the original array. A popular reversal-based method achieves this by first reversing the subarray from index 0 to d-1, then from d to n-1, and finally reversing the entire array from 0 to n-1; each reversal step leverages the two-pointer swap technique described above. This approach completes in O(n) time and O(1) space, suitable for scenarios requiring cyclic shifts without temporary buffers. Alternatively, the juggling algorithm, detailed in Jon Bentley's seminal work Programming Pearls, decomposes the rotation into disjoint cycles determined by the greatest common divisor of d and n, rotating elements along each cycle via temporary storage of one element at a time. This method also runs in O(n) time and O(1) space, offering efficiency when d and n share common factors.

Removing duplicates from a sorted array in-place compacts unique elements to the front while shifting others rightward, returning the count of uniques without extra space for copies. Employing a two-pointer technique, a "slow" pointer tracks the position for the next unique element, starting at 0, while a "fast" pointer scans from 1 onward. If the fast pointer encounters a value different from the one at the slow pointer, the slow pointer advances and the value is copied there; otherwise, it skips duplicates. Upon completion, the array prefix up to the slow pointer holds uniques in sorted order, achieving O(n) time and O(1) space. This technique assumes the input is pre-sorted, emphasizing its utility in data cleaning pipelines.

For linked lists, in-place manipulations adjust node pointers directly to achieve transformations like reversal, avoiding the creation of new nodes or lists. The standard reversal algorithm iterates through the list using three pointers: previous (initially null), current (starting at head), and next (to store the subsequent node). For each current node, its next pointer is redirected to previous, previous is updated to current, and current advances to next, effectively flipping link directions from head to tail in a single pass. This operates in O(n) time and O(1) space, with the former head becoming the new tail (requiring a separate pointer update if needed). Such pointer manipulations exemplify in-place efficiency for dynamic structures where node allocation is costly.

These array and linked-list operations find practical application in memory-constrained environments, such as embedded systems, where additional space allocation can exceed memory limits or increase power consumption. By modifying data in-place, they enable efficient processing on devices like microcontrollers or sensors, supporting tasks such as buffer management without risking out-of-memory errors.
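A compact sketch of these manipulations (in Python, with illustrative names and a minimal Node class assumed for the linked-list case) shows that each routine uses only a fixed number of indices or pointers:

def reverse_range(a, lo, hi):
    # Two-pointer reversal of a[lo..hi]; O(1) auxiliary space.
    while lo < hi:
        a[lo], a[hi] = a[hi], a[lo]
        lo, hi = lo + 1, hi - 1

def rotate_left(a, d):
    # Left-rotate by d positions via three reversals; O(n) time, O(1) space.
    n = len(a)
    if n == 0:
        return
    d %= n
    reverse_range(a, 0, d - 1)
    reverse_range(a, d, n - 1)
    reverse_range(a, 0, n - 1)

def remove_duplicates_sorted(a):
    # Compact distinct values of a sorted list to the front; return their count.
    if not a:
        return 0
    slow = 0
    for fast in range(1, len(a)):
        if a[fast] != a[slow]:
            slow += 1
            a[slow] = a[fast]
    return slow + 1

class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

def reverse_list(head):
    # Iteratively flip the next pointers; returns the new head (the former tail).
    prev = None
    curr = head
    while curr is not None:
        nxt = curr.next     # remember the rest of the list
        curr.next = prev    # redirect the link
        prev, curr = curr, nxt
    return prev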

Theoretical Implications

Space Complexity Analysis

In the formal model for analyzing in-place algorithms within complexity theory, Turing machines are typically equipped with a read-only input tape containing the input of length n and a separate read-write work tape whose usage measures the auxiliary space. In this framework, in-place algorithms correspond to those employing O(1) space on the work tape, allowing the input tape to be treated as read-write for modifications while strictly limiting additional storage; this distinguishes auxiliary space from the input space, ensuring that computations do not allocate proportional extra memory.

In-place algorithms align closely with low-space complexity classes such as L (deterministic log-space), which consists of languages decidable by deterministic Turing machines using O(\log n) work space, and NL (nondeterministic log-space), the nondeterministic analog. Many in-place procedures, particularly those involving pointer manipulations or reversible operations on arrays, fall within these classes, as they rely on logarithmic or constant auxiliary space to traverse and update the input without copying it. In contrast, higher classes like PSPACE, which allow polynomial work space, encompass problems that inherently demand extensive temporary storage, rendering purely in-place solutions infeasible for PSPACE-complete tasks such as quantified Boolean formula evaluation.

Savitch's theorem provides a key bridge between nondeterministic and deterministic space bounds, stating that any language in NL is decidable in deterministic O((\log n)^2) space via a recursive simulation that checks reachability in configuration graphs. This result implies that nondeterministic in-place algorithms, which might leverage guessing within log-space bounds, can be determinized with a modest quadratic increase in space, thereby expanding the feasibility of in-place implementations for NL problems without exceeding polylogarithmic overhead. The theorem's impact on in-place feasibility is profound, as it ensures that nondeterminism does not fundamentally escape the realm of low auxiliary space, allowing deterministic in-place variants for a broad class of graph and connectivity problems.

P-complete problems, which are complete for P under log-space reductions—such as the circuit value problem—pose challenges to pure in-place claims, as they are unlikely to be solvable in O(\log n) space unless L = P. Known lower bounds demonstrate that certain problems in P require \Omega(\log n) auxiliary space in this model, arising from the need to maintain pointers or counters during computation, thus questioning whether all polynomial-time tasks admit truly constant-space in-place algorithms. This separation highlights that while many practical in-place algorithms achieve O(1) space, theoretically hard problems in P demand at least logarithmic work space, complicating universal in-place strategies.

The development of space complexity theory, including the formalization of classes like L and NL and their relation to in-place computation, emerged prominently in the 1970s through foundational works by researchers such as John E. Hopcroft and Jeffrey D. Ullman. Their 1979 textbook synthesized early results on tape complexities and hierarchy theorems, establishing the analytical framework for distinguishing auxiliary space usage in Turing machines and influencing subsequent studies on low-space algorithms. This era's contributions, building on Savitch's 1970 breakthrough, solidified the theoretical underpinnings for evaluating in-place algorithms against space hierarchies.
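In its general form, stated for any space-constructible bound s(n) \geq \log n, Savitch's theorem reads:

\text{NSPACE}(s(n)) \subseteq \text{DSPACE}\big(s(n)^2\big), \quad \text{so in particular} \quad \text{NL} \subseteq \text{DSPACE}\big((\log n)^2\big).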

Time-Space Tradeoffs

In-place algorithms frequently entail a tradeoff between computational time and space usage, where maintaining constant extra space often elevates the running time compared to algorithms that utilize additional memory. For instance, insertion sort performs in-place sorting using O(1) auxiliary space but incurs O(n²) time in the worst and average cases, while merge sort achieves O(n log n) time across all cases at the expense of O(n) extra space for temporary arrays during merging.

For comparison-based in-place sorting algorithms, the fundamental lower bound remains Ω(n log n) time complexity, matching the bound for general comparison sorts and derived from the decision-tree model. The height of the binary decision tree for distinguishing n! permutations is at least ⌈log₂(n!)⌉, which by Stirling's approximation simplifies to Ω(n log n), implying that any such algorithm requires at least this many comparisons in the worst case, even when restricted to O(1) extra space. Insights from the Ford–Johnson algorithm, which minimizes comparisons to near this bound for small n, underscore that in-place implementations cannot evade this temporal minimum without non-comparison operations.

To achieve truly O(1) space in sorting while approaching optimal time, models like block interchanges—allowing swaps of arbitrary disjoint blocks—enable algorithms that reduce auxiliary memory to a constant but increase the number of data movements or swaps. In related constrained models, such as read-only memory where direct input modification is prohibited, Munro and Raman (1996) showed sorting with minimum data movements in average-case O(n log log n) time using constant extra storage, though this differs from standard in-place settings that permit modifications.

Recent theoretical advances have extended these tradeoffs to parallel computation. For example, parallel in-place algorithms in shared-memory models achieve polylogarithmic span with O(log n) auxiliary space for problems such as sorting and related primitives, enabling efficient execution on multicore systems without proportional extra memory. In practical scenarios, where memory constraints limit workable input sizes, the time penalty of in-place algorithms is often acceptable in order to process larger inputs on resource-limited systems, prioritizing space efficiency over raw speed.
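The counting step behind this lower bound can be written out explicitly using Stirling's approximation:

\lceil \log_2(n!) \rceil \;\geq\; \log_2(n!) \;=\; n \log_2 n - n \log_2 e + O(\log n) \;=\; \Omega(n \log n).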

Variations and Challenges

Randomized Approaches

Randomized approaches introduce elements of chance into in-place algorithms to provide probabilistic efficiency guarantees, particularly in achieving favorable expected performance across all inputs without additional space overhead. A prominent example is randomized quicksort, where the pivot is selected uniformly at random from the current subarray during each partitioning step. This ensures that the expected running time is O(n \log n) for any input of size n, while maintaining O(1) extra space beyond the input array (ignoring the recursion stack). The random choice disrupts potential adversarial ordering, preventing the degenerate O(n^2) behavior that can plague deterministic selections like always choosing the first or last element.

In-place selection algorithms also benefit from randomization to attain linear expected time. The Floyd-Rivest algorithm, introduced in 1975, finds the k-th smallest element in an unsorted array by first drawing a biased random sample to estimate a good partitioning element, then performing in-place partitioning similar to quicksort's. This yields an expected running time of O(n + \min(k, n-k)) and uses O(1) extra space, as the sampling and partitioning occur directly on the array without auxiliary storage. The randomization in sampling helps balance the partition sizes in expectation, avoiding worst-case recursion depths.

These randomized in-place methods often operate with high-probability space bounds, succeeding in O(1) extra space with failure probability at most 1/n. For instance, randomized quicksort partitions the array such that the recursion depth exceeds c \log n (for a constant c) with probability less than 1/n, ensuring the algorithm completes without stack overflow in practice while remaining in-place. Such probabilistic guarantees allow the algorithms to fail rarely, with repetition or amplification techniques reducing the error further if needed.

Randomization enhances in-place algorithms by better handling worst-case inputs compared to deterministic counterparts, which may require careful pivot or median choices to avoid poor performance. In sorting, randomized quicksort's expected O(n \log n) time holds regardless of input order, making it more robust and simpler to implement than deterministic variants that might degrade on sorted inputs. This resilience stems from the unpredictability introduced by random choices, which thwarts input-specific attacks.

Derandomization techniques can convert these randomized in-place algorithms to deterministic ones while preserving efficiency, often using limited independence to simulate full randomness with fewer random bits. For example, pairwise-independent distributions suffice to replace the random pivot selections in quicksort or sampling in selection algorithms, yielding deterministic versions with matching worst-case time bounds and O(1) space, as the limited-independence generators require only polylogarithmic seed length that fits in constant space.
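A brief sketch of the random-pivot idea (in Python; the names are illustrative, and the simpler Lomuto partition scheme is used here rather than the two-pointer Hoare scheme described earlier):

import random

def randomized_quicksort(a, lo=0, hi=None):
    # In-place: auxiliary storage is a few indices plus the recursion stack.
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = randomized_partition(a, lo, hi)
        # Recurse on the smaller side and loop on the larger side, so the
        # stack depth is bounded by O(log n).
        if p - lo < hi - p:
            randomized_quicksort(a, lo, p - 1)
            lo = p + 1
        else:
            randomized_quicksort(a, p + 1, hi)
            hi = p - 1

def randomized_partition(a, lo, hi):
    # Choose the pivot uniformly at random, then partition in place.
    r = random.randint(lo, hi)
    a[r], a[hi] = a[hi], a[r]
    pivot = a[hi]
    i = lo - 1
    for j in range(lo, hi):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[hi] = a[hi], a[i + 1]
    return i + 1

Recursing only into the smaller partition and iterating on the larger one caps the stack depth at O(log n) even when the random pivot choices are unlucky.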

Implementation in Functional Programming

In functional programming languages such as Haskell, immutability is a core principle that prohibits side effects and ensures referential transparency, making direct in-place mutations incompatible with the paradigm's emphasis on pure functions and persistent data. This conflict arises because algorithms designed for in-place operation, like swapping elements in an array without extra memory, rely on mutable state, which functional languages avoid to prevent unpredictable side effects and facilitate parallel execution. As a result, purely functional implementations must approximate in-place behavior through alternative mechanisms that preserve immutability while aiming for comparable efficiency.

One prominent simulation technique involves persistent data structures, which create the illusion of in-place updates by sharing unchanged parts of the data and copying only the modified paths. Chris Okasaki's seminal work on purely functional data structures demonstrates how balanced trees, such as red-black trees or finger trees, can support updates in O(log n) time and space by path copying, enabling operations that mimic in-place modification without altering the original structure. For example, inserting into a persistent balanced tree produces a new version of the tree while retaining access to prior versions, thus supporting versioned data in a space-efficient manner compared to full copies, though at the cost of logarithmic overhead over true O(1) mutations.

To achieve more efficient, imperative-like mutations within a functional framework, monadic abstractions such as the ST monad are employed to thread mutable state locally without leaking effects to the pure outer context. Introduced by Launchbury and Peyton Jones, the ST monad allows creation of mutable references, arrays, and other structures that can be updated in-place during computation, after which the state is frozen and discarded, yielding a pure result. A practical example is implementing array-based algorithms in Haskell, such as quicksort, where mutable arrays in the ST monad enable partitioning and swapping with O(1) extra space, achieving performance comparable to imperative versions while maintaining functional purity; in contrast, a purely functional list reversal using folds constructs a new list in O(n) space, forgoing true in-place efficiency.

These techniques introduce tradeoffs, where persistent structures incur O(log n) space per update for immutability's benefits, such as safe sharing and easier composition, potentially increasing memory usage for large datasets compared to imperative in-place methods. In hybrid languages like Scala, which integrate functional and imperative features, mutable collections such as ArrayBuffer allow direct in-place modifications for performance-critical algorithms, while immutable alternatives like Vector support persistent updates, enabling developers to balance purity and efficiency contextually.
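The space contrast can be made concrete in a language-neutral sketch (shown in Python rather than Haskell; the fold-based version mirrors what a purely functional reversal does, while the second version mutates its argument):

from functools import reduce

def reversed_functional(xs):
    # Fold-style reversal: every step allocates a fresh list rather than
    # mutating, so the auxiliary space is proportional to the input.
    return reduce(lambda acc, x: [x] + acc, xs, [])

def reverse_in_place(xs):
    # Imperative counterpart: mutates xs with two indices, O(1) extra space.
    i, j = 0, len(xs) - 1
    while i < j:
        xs[i], xs[j] = xs[j], xs[i]
        i, j = i + 1, j - 1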
