Corecursion
Corecursion is a fundamental concept in computer science and functional programming, serving as the dual of recursion by defining functions that generate potentially infinite data structures through a process of unfolding outward from a seed, rather than consuming finite inputs inward.[1][2] Unlike recursion, which operates on datatypes defined as least fixed points (such as finite lists), corecursion targets codatatypes as greatest fixed points, enabling the construction of streams, infinite trees, or other coinductive objects via primitive corecursive schemes.[1][2]
This duality arises in the context of coalgebraic structures, where corecursion specifies a function by detailing the outcomes of applying destructors (or observers) to its results, often formalized as morphisms to terminal coalgebras.[1] For instance, in languages like Haskell, a corecursive definition such as generating an infinite list of natural numbers—countFrom n = n : countFrom (n+1)—leverages non-strict evaluation to produce unending sequences without termination issues inherent in recursive consumption.[2] Corecursion is closely linked to coinduction, a proof technique using bisimulation to establish behavioral equivalences on codata, contrasting with induction's use for inductive datatypes.[1]
The technique's importance stems from its role in handling non-terminating computations and infinite behaviors, which are prevalent in modeling reactive systems, process calculi, and advanced functional paradigms.[2] Proof methods for corecursive programs include fixpoint coinduction, approximation lemmas, and fusion laws, ensuring correctness and productivity (guaranteed finite progress in observable outputs).[2] While recursion has long dominated finite data processing, corecursion has gained prominence since the 1990s in both theoretical foundations and practical implementations, complementing tools like apomorphisms for monadic corecursion in effectful computations.[2]
Fundamentals
Definition
Corecursion is the dual of recursion, serving as a computational mechanism for defining functions that generate codata structures, such as infinite streams or trees, by incrementally producing outputs from an input state.[3] In contrast to recursion, which decomposes finite inductive data types through pattern matching and constructors, corecursion composes potentially infinite coinductive structures using destructors like head and tail, enabling the definition of processes that observe and extend data indefinitely.[4] This duality arises in coalgebraic settings, where recursion corresponds to initial algebras for building finite data, while corecursion leverages final coalgebras for observing behavioral equivalence in infinite or reactive systems.[3]
Operationally, corecursion is often realized through an unfolding process that specifies how to produce successive elements of the output. A corecursive function f applied to a seed value x can be defined as f x = \text{unfold}\, p\, h\, t\, x, where p is a predicate checking for termination (yielding an empty output when it holds), h extracts the output element to prepend, and t computes the next state from the current one.[5] This pattern ensures productivity by generating observable parts of the codata lazily, without requiring the entire structure to be computed upfront.
A common variant for generating infinite streams in functional programming languages takes the form:
```haskell
unfold :: (s -> (a, s)) -> s -> [a]
```
Here, the step function maps a state s to a pair consisting of an output value a and the subsequent state, iteratively building a list (or stream for infinite cases) from an initial seed; for infinite codata, the process continues indefinitely without termination.[5] For instance, this can generate codata like infinite lists, where each application observes the current state to produce the next element.[4]
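A minimal sketch of this stream-style unfold in Haskell (the two-argument form shown here is illustrative rather than a standard library function; the standard `unfoldr` in Data.List differs in using Maybe to allow termination):

```haskell
-- A stream-style unfold: the step function never signals termination,
-- so the resulting list is infinite and evaluation relies on laziness.
unfold :: (s -> (a, s)) -> s -> [a]
unfold step s = let (a, s') = step s in a : unfold step s'

-- Example: the natural numbers from a seed, each step emitting the
-- current state and advancing it by one.
countFrom :: Integer -> [Integer]
countFrom = unfold (\n -> (n, n + 1))

main :: IO ()
main = print (take 5 (countFrom 0))  -- [0,1,2,3,4]
```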
Relation to Recursion
Corecursion stands in a dual relationship to recursion, where recursion operates on finite inductive data structures via least fixed points, such as trees built bottom-up from base cases, while corecursion generates potentially infinite codata structures via greatest fixed points, such as streams that unfold indefinitely. This duality reflects the analytical nature of recursion, which decomposes data toward a base, contrasted with the synthetic nature of corecursion, which constructs data outward from an initial state.
In the typed lambda calculus, this duality manifests through fixed-point operators on types: the recursive (inductive) fixed point is written \mu X.\, F X, capturing the least fixed point used for terminating computations on finite data, whereas the corecursive (coinductive) fixed point is written \nu X.\, F X, capturing the greatest fixed point for non-terminating or infinite unfoldings.[6] Equivalence between corecursive processes is verified using coinductive bisimulation, a relation that ensures two infinite structures match observationally by simultaneously advancing their states, dual to the inductive proofs used for recursive functions.
| Aspect | Recursion | Corecursion |
|---|---|---|
| Data Handling | Consumes finite input via pattern matching on constructors (e.g., case analysis on trees). | Produces infinite output via observation on destructors (e.g., head/tail on streams). |
| Fixed Point | Least fixed point (inductive, finite unfolding). | Greatest fixed point (coinductive, potentially infinite unfolding). |
| Computation Style | Terminates by reaching base case, building result inward. | Progresses productively by guarding recursive calls with constructors, avoiding immediate divergence. |
Corecursion plays a key role in guarded recursion schemes, where recursive calls must be immediately preceded by a constructor application to ensure productivity—meaning finite prefixes of infinite data are computable in finite time—thus enabling well-behaved infinite computations that plain recursion would render non-terminating or looping.[7]
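A small illustration of guardedness in Haskell (a sketch; `onesBad` is a deliberately pathological definition):

```haskell
-- Guarded: the corecursive call appears directly under the (:) constructor,
-- so every finite prefix is computable in finite time.
ones :: [Integer]
ones = 1 : ones

-- Unguarded: no constructor is emitted before the recursive reference,
-- so forcing even the head of the list diverges.
onesBad :: [Integer]
onesBad = onesBad
```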
Examples
Arithmetic Sequences
Corecursion provides a natural way to generate infinite sequences by unfolding state incrementally, contrasting with traditional recursion that computes finite results by folding inputs. In the case of arithmetic sequences, for example, the infinite stream of natural numbers can be generated corecursively. Starting from an initial value n, each step outputs n and advances to n+1. In Haskell, this can be implemented as:
```haskell
countFrom :: Integer -> [Integer]
countFrom n = n : countFrom (n + 1)
```
Applying countFrom 0 yields the stream 0, 1, 2, 3, \dots, where the constant difference of 1 ensures an arithmetic progression. This definition is productive thanks to lazy evaluation, producing elements on demand even though the list never terminates.[2]
More complex sequences like factorials can be defined corecursively to produce an infinite stream, adapting the standard recursive factorial function—which terminates after reaching zero—into a productive generator that streams values on demand without end. This shift emphasizes output production over input consumption, enabling lazy evaluation in functional languages to handle potentially infinite data safely.[8]
A corecursive stream of factorials can be defined using a state that tracks the current index n and the accumulated factorial fact_n. Starting from initial state (0, 1), each step outputs fact_n and advances to (n+1, fact_n \times (n+1)). In Haskell, this unfolds as:

```haskell
-- State (n, factN): emit the current factorial, then advance both the
-- index and the running product.
unfoldFact :: (Integer, Integer) -> [Integer]
unfoldFact (n, factN) = factN : unfoldFact (n + 1, factN * (n + 1))

factorials :: [Integer]
factorials = unfoldFact (0, 1)
```
This generates the stream 1, 1, 2, 6, 24, \dots (i.e., 0! = 1, 1! = 1, 2! = 2, \dots), where productivity ensures only as many elements are computed as needed. Alternatively, in a more equation-based form using stream operations, the factorial stream facA satisfies facA = 1 :: (facA \otimes facA), where \otimes denotes the shuffle product defined as xs \otimes ys = (head(xs) \times head(ys)) :: ((tail(xs) \otimes ys) \oplus (xs \otimes tail(ys))), with \oplus the pointwise sum of streams; this corecursive equation produces the infinite factorial sequence starting from 0! = 1.[8]
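A sketch of these stream operations in Haskell, writing plusS for \oplus and shuffle for \otimes (the names are illustrative):

```haskell
-- Pointwise sum of two infinite streams (the \oplus of the text).
plusS :: [Integer] -> [Integer] -> [Integer]
plusS = zipWith (+)

-- Shuffle product (the \otimes of the text): multiply the heads, then
-- combine the two tail products pointwise.
shuffle :: [Integer] -> [Integer] -> [Integer]
shuffle xs ys =
  (head xs * head ys) : plusS (shuffle (tail xs) ys) (shuffle xs (tail ys))

-- The corecursive factorial equation; take 6 facA yields [1,1,2,6,24,120].
facA :: [Integer]
facA = 1 : shuffle facA facA
```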
The Fibonacci sequence offers another illustrative example, where corecursion tracks two consecutive terms to produce an infinite stream. The standard recursive definition F_0 = 0, F_1 = 1, F_n = F_{n-1} + F_{n-2} for n > 1 is finite and inward-focused, but the corecursive form adapts it to outward generation by defining the stream f such that f = 0 :: 1 :: \mathrm{zipWith}(+) \, f \, (\mathrm{tail} \, f). Here, \mathrm{zipWith}(+) \, xs \, ys applies addition element-wise to corresponding elements of streams xs and ys, and \mathrm{tail} \, f discards the first element of f.
To derive this step by step:
- The stream begins with the base cases: head(f) = 0, second element = 1.
- The tail after the second element is \mathrm{zipWith}(+) \, f \, (\mathrm{tail} \, f), so the third element is head(f) + head(\mathrm{tail} \, f) = 0 + 1 = 1.
- The fourth element is second(f) + second(\mathrm{tail} \, f) = 1 + (0 + 1) = 1 + 1 = 2 (where second(\mathrm{tail} \, f) is the third element of f).
- Continuing, the fifth is 1 + 2 = 3, sixth is 2 + 3 = 5, and so on, yielding the infinite stream 0, 1, 1, 2, 3, 5, 8, \dots.
This equation ensures the stream is productive, as each new element depends only on previously generated ones. In Haskell, this can be implemented as:
```haskell
fib :: [Integer]
fib = 0 : 1 : zipWith (+) fib (tail fib)
```
Such corecursive forms highlight the duality to recursion: while recursion decomposes finite inputs to a base case, corecursion composes infinite outputs from initial seeds, facilitating streaming computations in functional programming.[9]
Data Structures
Corecursion provides a natural mechanism for traversing tree structures in a breadth-first manner, in contrast to the depth-first approach typically achieved through standard recursion. In depth-first traversal, the process recursively explores one subtree fully before moving to the next, often leading to stack-based implementations that mirror the tree's depth. Breadth-first traversal, however, processes nodes level by level, which can be elegantly expressed corecursively by maintaining a queue of pending subtrees as state and unfolding the next layer of nodes from that state. This approach generates a stream of node values in level-order without explicit recursion on the tree itself, instead advancing through the queue to observe roots and enqueue children progressively.[10]
A pseudocode example in a functional style for level-order traversal of a forest of trees (where a single tree is treated as a singleton forest) illustrates this corecursive unfolding:
```haskell
levelf :: Forest a -> [a]
levelf = concat . unfold null (map root) (concatMap children)
```
Here, null tests whether the forest is empty (the base case for termination), map root extracts the values of the current level's roots, and concatMap children advances the state by collecting all children into the next level; concatenating the per-level outputs yields the level-order stream. For a single tree t, the traversal is levelf [t], producing the level-order stream efficiently in linear time by avoiding redundant computations inherent in some fold-based alternatives.[10]
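A self-contained sketch of this traversal, assuming a simple rose-tree type and the predicate/head/transition unfold described in the Fundamentals section:

```haskell
-- A simple rose tree and forest.
data Tree a = Node { root :: a, children :: Forest a }
type Forest a = [Tree a]

-- The predicate/head/transition unfold: emit one output per state,
-- then advance, stopping when the predicate holds.
unfold :: (s -> Bool) -> (s -> a) -> (s -> s) -> s -> [a]
unfold p h t x = if p x then [] else h x : unfold p h t (t x)

-- Level-order traversal: emit the roots of the current level, then
-- advance to the forest of all their children.
levelf :: Forest a -> [a]
levelf = concat . unfold null (map root) (concatMap children)

-- Example: levelf [t] == [1,2,3,4].
t :: Tree Int
t = Node 1 [Node 2 [Node 4 []], Node 3 []]

main :: IO ()
main = print (levelf [t])
```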
Corecursion also enables the generation of infinite tree structures by unfolding from an initial seed state, where the process observes the current node value and produces child states indefinitely, suitable for modeling fractal-like or unending hierarchical data. In this paradigm, the corecursive function destructs the state to yield a node and its successors, recursively constructing subtrees from those successors to build the overall tree top-down from the seed. This duality to recursive consumption allows for productive computation of potentially non-terminating structures, as long as the unfolding respects coinductive productivity.[11]
A specific implementation of this is the corecursive unfoldTree function, generalized for multi-ary (rose) trees:
```haskell
unfoldTree :: (s -> (a, [s])) -> s -> Tree a
unfoldTree f s = let (a, seeds) = f s in Node a (map (unfoldTree f) seeds)
```
This observes the seed s via f to produce a value a and a list of child seeds [s], then corecursively unfolds each child seed into a subtree. For example, to generate an infinite balanced binary tree where each node holds its depth, one can define f depth = (depth, [depth+1, depth+1]) and apply unfoldTree f 0, yielding a full infinite binary tree with levels expanding symmetrically, useful in simulations of fractal geometries or endless search spaces.[11]
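Reusing the Tree type from the traversal sketch above, a brief sketch of this depth-labelled tree, with a hypothetical prune helper for observing a finite prefix:

```haskell
-- An infinite binary tree labelling each node with its depth; each seed d
-- produces value d and two child seeds d + 1.
depthTree :: Tree Integer
depthTree = unfoldTree (\d -> (d, [d + 1, d + 1])) 0

-- Hypothetical helper: cut the tree off after n levels so that a finite
-- prefix of the infinite tree can be inspected.
prune :: Int -> Tree a -> Tree a
prune 0 (Node a _)  = Node a []
prune n (Node a ts) = Node a (map (prune (n - 1)) ts)
```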
Mathematical Basis
Corecursion finds its mathematical foundation in the theory of coalgebras within category theory. For an endofunctor F on a category \mathcal{C}, an F-coalgebra is a pair (S, \alpha) where S is an object of \mathcal{C} and \alpha: S \to F S is a morphism. Corecursion arises as the unique coalgebra homomorphism from (S, \alpha) to the final F-coalgebra (\nu F, \eta), where \nu F is the terminal object in the category of F-coalgebras, satisfying the property that for any coalgebra (S', \alpha'), there exists a unique morphism h: S' \to \nu F such that the diagram commutes: \eta \circ h = F h \circ \alpha'. This unique homomorphism, often denoted as the anamorphism [\alpha]: S \to \nu F, defines the corecursive process by "unfolding" the state space S into the infinite structure captured by \nu F.[12][13]
A canonical example illustrates this in the category of sets, where F X = A \times X for a set A, modeling streams of elements from A. The final coalgebra (\nu F, \eta) has carrier \nu F = A^\mathbb{N}, the set of infinite sequences over A, with \eta decomposing each stream into its head (first element in A) and tail (remaining stream in \nu F). Given a coalgebra (S, \alpha: S \to A \times S), the corecursion [\alpha]: S \to A^\mathbb{N} produces, for each state s \in S, the infinite stream obtained by iteratively applying \alpha to generate successive heads and next states. This construction ensures that corecursion systematically builds potentially non-terminating or infinite data structures from finite descriptions.[12][13]
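A sketch of this construction in Haskell, with ana playing the role of the unique morphism into the final coalgebra of streams:

```haskell
-- Codata: infinite streams over a, the carrier of the final coalgebra
-- for the functor F X = (a, X).
data Stream a = Cons a (Stream a)

-- The anamorphism: every coalgebra alpha :: s -> (a, s) induces a unique
-- homomorphism from its state space into the final coalgebra.
ana :: (s -> (a, s)) -> s -> Stream a
ana alpha s = let (a, s') = alpha s in Cons a (ana alpha s')

-- Observe a finite prefix of a stream.
takeS :: Int -> Stream a -> [a]
takeS 0 _          = []
takeS n (Cons a t) = a : takeS (n - 1) t
```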
The carrier \nu F of the final coalgebra represents the greatest fixed point of the equation X \cong F X, denoting the largest solution in the lattice of subobjects or coalgebras that accommodates all possible behaviors modeled by F. This contrasts with the least fixed point \mu F, the initial algebra, which underpins recursion for finite structures. The coinduction principle complements corecursion by providing a proof method for behavioral equivalence: two states s, s' \in S are bisimilar if there exists a bisimulation relation R such that (s, s') \in R, where bisimulations are spans of morphisms lifting to the final coalgebra, ensuring that bisimilar states map to identical elements in \nu F. This principle, dual to induction on initial algebras, verifies properties of infinite objects via one-step approximations preserved across unfoldings.[12][13]
Productivity Conditions
In corecursion, productivity refers to the property that a corecursive process generates elements of an infinite data structure in finite time, ensuring that the computation does not diverge without producing output even when defining potentially infinite objects. This means that for any finite prefix of the output, there exists a finite number of computation steps to produce it, preventing non-termination while allowing the definition to be coinductively well-founded.[2]
A key mechanism to enforce productivity is guardedness, which requires that every corecursive call be protected by a constructor that produces an observable output before the recursion proceeds. In languages like Haskell, this is achieved syntactically: recursive calls must appear in positions guarded by data constructors, such as the cons operator (:) for lists, ensuring that the head element is computed finitely before recursing on the tail. For instance, the infinite list of natural numbers defined as nats = 0 : map (+1) nats is productive because the constructor produces 0 immediately, and the recursive call to map is guarded.[14]
Coinductive proofs verify productivity through bisimulation, a relation that establishes behavioral equivalence between infinite structures by matching heads and relating tails coinductively. To prove two streams equal, one defines a bisimulation relation R such that if xs \, R \, ys, then the heads are equal and the tails are related by R; coinduction then lifts this to full equality. Consider Fibonacci streams: a productive definition is fibs = 0 : 1 : zipWith (+) fibs (tail fibs), where the constructors guard the recursive calls, allowing finite computation of any prefix. In contrast, a non-productive variant like badFibs = zipWith (+) badFibs (tail badFibs) fails guardedness: no constructor is produced before the recursive calls, so forcing even the first element diverges, since the head of the stream is defined in terms of itself.[2]
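As a runnable contrast (a sketch; badFibs is intentionally divergent):

```haskell
-- Productive: the two (:) constructors guard the corecursive references,
-- so take n fibs terminates for every n.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Non-productive: no constructor is emitted before the corecursive calls,
-- so even head badFibs loops (GHC typically reports <<loop>>).
badFibs :: [Integer]
badFibs = zipWith (+) badFibs (tail badFibs)

main :: IO ()
main = print (take 10 fibs)  -- [0,1,1,2,3,5,8,13,21,34]
```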
For the common corecursive pattern encapsulated by the unfold combinator, defined as \text{unfold} \, p \, h \, t \, x = \text{if } p \, x \text{ then } [] \text{ else } h \, x : \text{unfold} \, p \, h \, t \, (t \, x), productivity holds under conditions that ensure state advancement in finite steps. Specifically, the predicate p must reject the initial state (for infinite outputs, p is constantly false), the head function h must compute an output value finitely, and the transition t must produce a new state without diverging. A theorem sketch: if h and t are total and productive functions (i.e., they terminate in finite steps for any input), then \text{unfold} \, p \, h \, t is productive, as each application produces a head via h before recursing on t's output, inducting on the finite depth to any prefix. This guarantees that the generated stream observes elements sequentially without stalling.[2]
Applications
Functional Programming
In functional programming languages, corecursion is facilitated by lazy evaluation mechanisms that allow the construction of potentially infinite data structures without immediate computation of all elements. In Haskell, corecursive definitions are commonly expressed using higher-order functions such as unfoldr from the Data.List module, which generates lists by repeatedly applying a step function until a base case, and iterate, which produces infinite lists by successive application of a function to an initial seed value. These functions leverage Haskell's laziness to ensure productivity, where each element is computed only when demanded. Similarly, in Scala, corecursion is supported through LazyList (previously Stream), where infinite sequences are built using the lazy cons operator #::, enabling recursive tail definitions that defer evaluation until elements are accessed. This synergy with lazy evaluation permits the handling of infinite data structures in total functional languages, avoiding runtime errors from non-termination as long as definitions remain productive.[15]
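For instance, using these standard Data.List and Prelude combinators:

```haskell
import Data.List (unfoldr)

-- unfoldr stops when the step function yields Nothing; a step that always
-- returns Just produces an infinite list.
naturals :: [Integer]
naturals = unfoldr (\n -> Just (n, n + 1)) 0

-- iterate f x corecursively produces [x, f x, f (f x), ...].
powersOfTwo :: [Integer]
powersOfTwo = iterate (* 2) 1
```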
Corecursion provides practical benefits by allowing programmers to define composable infinite data structures without resorting to explicit imperative loops, promoting declarative code that models ongoing processes naturally. For instance, it enables the representation of event streams, where new events are appended lazily as they arrive, facilitating reactive programming paradigms. In parsing contexts, corecursive definitions can process unbounded input streams incrementally, composing parser combinators to handle potentially infinite token sequences without buffering the entire input upfront. These capabilities enhance modularity and reusability, as infinite structures can be transformed and filtered functionally, mirroring the duality of recursion for finite data deconstruction.
However, implementing corecursion poses challenges related to non-productivity, where a corecursive definition might loop indefinitely without producing output, leading to black-hole effects in lazy evaluators like Haskell's. To mitigate this, guarded corecursion enforces that recursive calls are protected by productive constructors, such as cons cells for streams, ensuring finite-time generation of initial elements before further unfolding.[14] In practice, developers also employ fuel limits—passing a decrementing counter to approximate finite prefixes of infinite structures for testing or bounded computation—though this trades totality for explicit termination control.[16]
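A minimal sketch of the fuel idiom (the name approx is illustrative; it coincides with the approximation function used in approximation lemmas):

```haskell
-- Observe at most n elements of a possibly infinite list, making the
-- bound on computation explicit.
approx :: Int -> [a] -> [a]
approx 0 _        = []
approx _ []       = []
approx n (x : xs) = x : approx (n - 1) xs
```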
A notable example of corecursion in functional programming is the stream-based Sieve of Eratosthenes for generating an infinite list of primes in Haskell. This approach defines the primes as a lazy stream where the first element is emitted, and the tail filters multiples of that prime from the remaining candidates, recursively. The code is:
```haskell
primes :: [Integer]
primes = sieve [2..]
  where
    sieve (p:ns) = p : sieve [n | n <- ns, n `mod` p /= 0]
```
This corecursive definition exploits laziness to produce primes on demand without computing the entire infinite list.
Theoretical Models
In process calculi, corecursion provides a foundational mechanism for modeling the infinite behaviors of concurrent systems, particularly through coinduction, which contrasts with induction by focusing on greatest fixed points rather than least ones. In the Calculus of Communicating Systems (CCS), corecursive definitions capture ongoing interactions among processes, enabling the specification of non-terminating computations such as perpetual synchronization or message passing. Similarly, in the π-calculus, which extends CCS with dynamic channel mobility, corecursion supports coinductive reasoning to define behavioral equivalences that account for evolving communication topologies in reactive environments. Bisimulation, a key coinductive relation in these calculi, equates processes based on their observable actions over infinite traces, formalized via guarded corecursion to ensure productivity in unfolding process definitions.[17][18]
Denotational semantics interprets corecursive definitions using domain theory, where types are modeled as complete partial orders (cpos) equipped with continuous functions to handle fixed points of infinite objects. In this framework, a corecursive specification, such as a process generating an infinite stream, is assigned the semantics of a greatest fixed point of a continuous function on a domain, contrasting with the least fixed point used for recursive (inductive) definitions. Domains, often Scott domains, provide the partial order structure needed to approximate infinite computations through chains of finite approximations, ensuring that corecursive programs denote well-defined elements representing coinductive data like infinite lists or trees. This approach, rooted in the mathematical theory of program correctness, allows proving properties of corecursive programs via coinductive invariants that hold over the entire domain.[2]
Corecursion finds significant applications in verification through coinductive logic, which is particularly suited to establishing properties of infinite-state systems, such as liveness in reactive systems where processes must respond indefinitely to inputs. Coinductive proofs leverage greatest fixed points to define predicates over infinite execution traces, enabling the verification of non-termination guarantees like eventual responsiveness or fairness in concurrent protocols. For instance, in systems modeled as transition systems with unbounded state spaces, coinduction verifies liveness by showing that certain actions recur infinitely often along all paths, using techniques like bisimulation up-to to compose proofs modularly. This is essential for reactive systems, where traditional inductive methods falter due to the absence of finite base cases.[19][20]
A specific theoretical model employs corecursive traces within the modal μ-calculus to characterize behavioral equivalences via greatest fixed points. The modal μ-calculus extends basic modal logic with least (μ) and greatest (ν) fixed-point operators, allowing formulas to define coinductive properties over labeled transition systems. Corecursive traces, representing infinite sequences of observable actions, are captured by formulas of the form νX. φ(X), where φ is a monotone operator on sets of states or paths, and the greatest fixed point denotes the largest set satisfying the recursive characterization—such as all traces bisimilar to a given process.
\nu X.\, \phi(X)
Here, satisfaction is coinductive: a state satisfies νX. φ(X) if it belongs to every approximant in the descending chain starting from the full powerset, ensuring equivalence under behavioral relations like strong bisimulation in process calculi. This model underpins automated verification tools, as the alternation hierarchy of fixed points corresponds to expressive power for distinguishing infinite behaviors.[21][22]
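For instance, a standard formula of this shape (a textbook example, with \langle - \rangle ranging over all actions) asserts that the action a occurs infinitely often along some path:

\nu X.\, \mu Y.\, (\langle a \rangle X \lor \langle - \rangle Y)

The outer greatest fixed point requires a to recur forever, while the inner least fixed point permits only finitely many other steps between successive occurrences of a.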
History
Early Concepts
The philosophical roots of corecursion trace back to ancient conceptions of infinity, notably Aristotle's distinction between potential and actual infinity in his works Physics and Metaphysics. Aristotle posited that the infinite exists only potentially—as an unending sequence or process that can always be extended further but never exists as a completed whole—rejecting actual infinity as incoherent and leading to paradoxes. This notion of potential infinity, exemplified by the unending division of a line or the perpetual generation of natural numbers, anticipates coinductive reasoning, where definitions unfold indefinitely through behavioral observation rather than finite exhaustion.[23]
Building on such ideas, early precursors to coinduction appeared in modal logic during the mid-20th century, where frameworks for reasoning about possibility and necessity involved fixed-point-like structures and dualities that echoed inductive processes in reverse. Saul Kripke's 1963 semantics for modal logic, using possible worlds and accessibility relations, provided a foundation for later coinductive proofs by allowing equivalence relations over infinite models without requiring termination. These developments in modal logic highlighted coinduction as a method for establishing behavioral equalities in potentially infinite domains.
In mathematics, category theory's advancements in the 1960s and 1970s introduced coalgebras as the categorical dual to algebras, emphasizing observational dynamics over generative construction and enabling the formal study of infinite and branching behaviors. Samuel Eilenberg and Saunders Mac Lane's founding work on category theory in the 1940s laid the groundwork for such dualities, and the 1965 paper by Eilenberg and John C. Moore on adjoint functors and triples made coalgebraic structures an explicit object of study, with their application to state-based systems such as automata developed in subsequent decades. These ideas culminated in the 1980s with Peter Aczel's formalization of final coalgebras within non-wellfounded set theory, where the final coalgebra for the powerset functor models infinite, self-referential sets.[24]
Prior to computer science applications, coinduction in set theory addressed hypersets—non-wellfounded sets featuring circular membership chains—and circular definitions, resolving self-referential paradoxes through coiterative constructions rather than the well-founded iterative hierarchy of Zermelo-Fraenkel set theory. Aczel's Anti-Foundation Axiom (AFA), introduced in 1988 alongside earlier proposals such as Marco Forti and Furio Honsell's in 1983, ensures unique solutions to circular equations such as \Omega = \{\Omega\}, modeling infinite descending membership chains without vicious regress. This approach treats sets coinductively, as equal when bisimilar under the membership relation, providing a rigorous basis for infinite objects in pure mathematics.[25]
Key Developments
The concept of corecursion gained formal traction in computer science through Richard Bird's 1984 paper, which introduced circular programs as a mechanism to optimize functional computations by avoiding multiple data traversals, laying early groundwork for corecursive schemes that generate data structures incrementally. This work formalized aspects of anamorphisms, dual to catamorphisms, enabling the systematic construction of infinite or large data via unfolding generators rather than folding consumers.
In the late 1980s and 1990s, researchers like John Hughes and Philip Wadler advanced corecursion's integration into lazy functional languages, emphasizing unfolding transformations to eliminate intermediate data structures and promote efficiency in non-strict evaluation environments.[26] Complementing this, Lloyd Allison's 1989 analysis demonstrated the space and time efficiency of corecursive implementations for self-referential data structures, such as queues, by leveraging circular definitions to achieve constant-time operations without explicit mutation.
In the late 1990s and early 2000s, coalgebraic approaches gained prominence in computer science, with Jan Rutten's 2000 framework of universal coalgebra providing a unified theory for modeling infinite behaviors and coinductive types in systems like automata and processes.[27]
The 2000s saw corecursion expand into proof assistants through coinductive types, with Coq incorporating coinductive types and guarded corecursion through Eduardo Giménez's work on guarded definitions in the mid-1990s.[28] Similarly, Agda's support for coinduction, building on Ulf Norell's 2007 thesis, enabled guarded corecursion for productive infinite computations in dependent type theory. Post-2010 developments, including Robert Atkey and Conor McBride's 2013 framework for productive coprogramming with guarded recursion, provided type-theoretic guarantees for coinductive productivity, bridging corecursion with modal types for safer infinite data handling.
Corecursion has since evolved from these theoretical foundations into practical tools for functional and dependent programming, with ongoing refinements in type systems for ensuring productivity.[29]