
Corecursion

Corecursion is a fundamental concept in computer science and mathematics, serving as the dual of recursion by defining functions that generate potentially infinite data structures through a process of unfolding from a base case outward, rather than consuming finite inputs inward. Unlike recursion, which operates on datatypes defined as least fixed points (such as finite lists), corecursion targets codatatypes as greatest fixed points, enabling the construction of infinite streams, trees, or other coinductive objects via primitive corecursive schemes. This duality arises in the context of coalgebraic structures, where corecursion specifies a function by detailing the outcomes of applying destructors (or observers) to its results, often formalized as morphisms to terminal coalgebras. For instance, in languages like Haskell, a corecursive definition such as countFrom n = n : countFrom (n+1), which generates an infinite list of natural numbers, leverages non-strict evaluation to produce unending sequences without the termination issues inherent in recursive consumption.

Corecursion is closely linked to coinduction, a proof technique that uses bisimulation to establish behavioral equivalences on codata, contrasting with induction's use for inductive datatypes. The technique's importance stems from its role in handling non-terminating computations and infinite behaviors, which are prevalent in modeling reactive systems, process calculi, and advanced functional paradigms. Proof methods for corecursive programs include fixpoint induction, approximation lemmas, and fusion laws, ensuring correctness and productivity (guaranteed finite progress in observable outputs). While recursion has long dominated finite data processing, corecursion has gained prominence in recent decades in both theoretical foundations and practical implementations, complemented by tools like apomorphisms for monadic corecursion in effectful computations.

Fundamentals

Definition

Corecursion is the dual of recursion, serving as a computational scheme for defining functions that generate codata structures, such as infinite streams or trees, by incrementally producing outputs from an input state. In contrast to recursion, which decomposes finite inductive data types through pattern matching on constructors, corecursion composes potentially infinite coinductive structures observed through destructors like head and tail, enabling the definition of processes that extend data indefinitely. This duality arises in coalgebraic settings, where recursion corresponds to initial algebras for building finite data, while corecursion leverages final coalgebras for observing behavioral equivalence in infinite or reactive systems. Operationally, corecursion is often realized through an unfolding operation that specifies how to produce successive elements of the output. A corecursive function f applied to a seed value x can be defined as f x = \text{unfold}\, p\, g\, h\, x, where p is a predicate that checks for termination (yielding an empty output if true), g computes the next state from the current one, and h extracts the output element to prepend. This pattern ensures productivity by generating observable parts of the codata lazily, without requiring the entire structure to be computed upfront. A common generalization in functional languages appears in the form:
```haskell
unfold :: (s -> (a, s)) -> s -> [a]
```
Here, the step function maps a state s to a pair consisting of an output value a and the subsequent state, iteratively building a list (or stream, in the infinite case) from an initial seed; for infinite codata, the process continues indefinitely without terminating. For instance, this can generate codata such as infinite lists, where each application observes the current state to produce the next element.
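As a concrete sketch of this combinator, the following is an assumed minimal implementation (the names unfold and naturals are illustrative, not standard library functions):

```haskell
-- Minimal sketch of the stream-producing unfold: the step function
-- maps a state to an output element and the next state.
unfold :: (s -> (a, s)) -> s -> [a]
unfold step s = let (a, s') = step s in a : unfold step s'

-- Example: the natural numbers, generated one state at a time.
naturals :: [Integer]
naturals = unfold (\n -> (n, n + 1)) 0
```

Thanks to laziness, a finite prefix such as take 5 naturals can be observed even though the underlying list is infinite.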

Relation to Recursion

Corecursion stands in a dual relationship to recursion: recursion operates on finite inductive data structures via least fixed points, such as finite lists built bottom-up from base cases, while corecursion generates potentially infinite codata structures via greatest fixed points, such as streams that unfold indefinitely. This duality reflects the analytical nature of recursion, which decomposes data toward a base case, contrasted with the synthetic nature of corecursion, which constructs data outward from an initial state. In fixed-point calculi, this duality manifests through fixed point operators: the recursive fixed point is denoted by μf.x, capturing the least fixed point for terminating computations on finite data, whereas the corecursive fixed point is denoted by νf.x, capturing the greatest fixed point for non-terminating or infinite unfoldings. Equivalence between corecursive processes is verified using coinductive bisimulation, a proof technique that ensures two structures match observationally by simultaneously advancing their states, dual to the inductive proofs used for recursive functions.
| Aspect | Recursion | Corecursion |
| --- | --- | --- |
| Data handling | Consumes finite input via pattern matching on constructors (e.g., case analysis on lists). | Produces infinite output via observation with destructors (e.g., head/tail on streams). |
| Fixed point | Least fixed point (inductive, finite unfolding). | Greatest fixed point (coinductive, potentially infinite unfolding). |
| Computation style | Terminates by reaching a base case, building the result inward. | Progresses productively by guarding corecursive calls with constructors, avoiding immediate divergence. |
Corecursion plays a key role in guarded recursion schemes, where recursive calls must be immediately preceded by a constructor application to ensure productivity—meaning finite prefixes of the output are computable in finite time—thus enabling well-behaved definitions that plain recursion would render non-terminating or looping.
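A minimal Haskell illustration of guardedness (the name ones is illustrative):

```haskell
-- Guarded corecursion: the recursive call sits directly under the
-- (:) constructor, so every finite prefix is computable in finite time.
ones :: [Integer]
ones = 1 : ones

-- An unguarded counterpart, shown for contrast, would be
--   loop = loop
-- where no constructor guards the call, so demanding any element diverges.
```

Here take 4 ones evaluates finitely even though ones itself is infinite.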

Examples

Arithmetic Sequences

Corecursion provides a natural way to generate sequences by unfolding state incrementally, contrasting with traditional recursion, which computes finite results by folding inputs. In the case of arithmetic sequences, for example, the stream of natural numbers can be generated corecursively: starting from an initial value n, each step outputs n and advances to n+1. In Haskell, this can be implemented as:
```haskell
countFrom :: Integer -> [Integer]
countFrom n = n : countFrom (n + 1)
```
Applying countFrom 0 yields the stream 0, 1, 2, 3, \dots, where the constant difference of 1 produces an infinite arithmetic progression. This definition is productive thanks to lazy evaluation, producing elements on demand without termination issues. More complex sequences, such as the factorials, can be defined corecursively to produce an infinite stream, adapting the standard recursive factorial function—which terminates after reaching zero—into a productive process that streams values without end. This shift emphasizes output production over input consumption, enabling programs in functional languages to handle potentially infinite data safely. A corecursive stream of factorials can be defined using a state that tracks the current index n and the accumulated factorial fact_n. Starting from the initial state (0, 1), each step outputs fact_n and advances to (n+1, fact_n \times (n+1)). In pseudocode, this unfolds as:
```
unfoldFact(state = (n, fact_n)) {
  output fact_n
  next_state = (n + 1, fact_n * (n + 1))
  return unfoldFact(next_state)
}
```
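The state-threading pattern above can be written directly in Haskell; the names factorials and go below are illustrative, not library functions:

```haskell
-- Corecursive factorial stream: the state pairs the current index n
-- with the accumulated factorial, exactly as in the pseudocode.
factorials :: [Integer]
factorials = go (0, 1)
  where
    go (n, fact) = fact : go (n + 1, fact * (n + 1))
```

Taking a finite prefix, e.g. take 5 factorials, forces only as many states as requested.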
This generates the stream 1, 1, 2, 6, 24, \dots (i.e., 0! = 1, 1! = 1, 2! = 2, \dots), where productivity ensures only as many elements are computed as needed. Alternatively, in a more equation-based form using stream operations, the stream facA satisfies facA = 1 :: (facA \otimes (1 :: facA)), where \otimes denotes the shuffle product defined as xs \otimes ys = (head(xs) \times head(ys)) :: ((tail(xs) \otimes ys) \oplus (xs \otimes tail(ys))), with \oplus the pointwise sum of streams; this corecursive equation produces the infinite factorial sequence starting from 1!. The Fibonacci sequence offers another illustrative example, where corecursion tracks two consecutive terms to produce an infinite stream. The standard recursive definition F_0 = 0, F_1 = 1, F_n = F_{n-1} + F_{n-2} for n > 1 is finite and inward-focused, but the corecursive form adapts it to outward generation by defining the stream f such that f = 0 :: 1 :: \mathrm{zipWith}(+) \, f \, (\mathrm{tail} \, f). Here, \mathrm{zipWith}(+) \, xs \, ys applies addition element-wise to corresponding elements of the streams xs and ys, and \mathrm{tail} \, f discards the first element of f. To derive this step by step:
  1. The stream begins with the base cases: head(f) = 0, and the second element is 1.
  2. The tail after the second element is \mathrm{zipWith}(+) \, f \, (\mathrm{tail} \, f), so the third element is head(f) + head(\mathrm{tail} \, f) = 0 + 1 = 1.
  3. The fourth element is second(f) + second(\mathrm{tail} \, f) = 1 + (0 + 1) = 1 + 1 = 2 (where second(\mathrm{tail} \, f) is the third element of f).
  4. Continuing, the fifth element is 1 + 2 = 3, the sixth is 2 + 3 = 5, and so on, yielding the infinite stream 0, 1, 1, 2, 3, 5, 8, \dots.
This equation ensures the stream is productive, as each new element depends only on previously generated ones. In Haskell, this can be implemented as:
```haskell
fib :: [Integer]
fib = 0 : 1 : zipWith (+) fib (tail fib)
```
Such corecursive forms highlight the duality to recursion: while recursion decomposes finite inputs down to a base case, corecursion composes infinite outputs from initial seeds, facilitating streaming computations in functional programming.

Data Structures

Corecursion provides a natural mechanism for traversing tree structures in a breadth-first manner, in contrast to the depth-first approach typically achieved through standard recursion. In depth-first traversal, the process recursively explores one subtree fully before moving to the next, often leading to stack-based implementations that mirror the tree's depth. Breadth-first traversal, however, processes nodes level by level, which can be elegantly expressed corecursively by maintaining a queue of pending subtrees as state and unfolding the next layer of nodes from that queue. This approach generates a stream of node values in level order without explicit recursion on the tree itself, instead advancing through the queue to observe roots and enqueue children progressively. An example in a functional language, for level-order traversal of a forest of trees (where a single tree is treated as a singleton forest), illustrates this corecursive unfolding:
```haskell
levelf :: Forest a -> [a]
levelf = unfold null (map root) (concatMap children)
```
Here, null tests whether the forest is empty (the base case for termination), map root extracts the values at the current level's roots, and concatMap children advances the state by collecting all children into the next level. For a single tree t, the traversal is levelf [t], producing the level-order sequence efficiently in linear time by avoiding the redundant computations inherent in some fold-based alternatives. Corecursion also enables the generation of infinite trees by unfolding from an initial seed, where the process observes the current value and produces successor states indefinitely, suitable for modeling fractal-like or unending hierarchical data. In this scheme, the corecursive function destructs the state to yield a root value and its successors, recursively constructing subtrees from those successors to build the overall tree from the root outward. This duality to recursive consumption allows for the productive generation of potentially non-terminating structures, as long as the unfolding respects coinductive productivity conditions. A specific implementation of this idea is the corecursive unfoldTree function, generalized to multi-ary (rose) trees:
```haskell
unfoldTree :: (s -> (a, [s])) -> s -> Tree a
unfoldTree f s = Node (fst (f s)) (map (unfoldTree f) (snd (f s)))
```
This observes the seed s via f to produce a value a and a list of child seeds [s], then corecursively unfolds each child seed into a subtree. For example, to generate an infinite balanced binary tree in which each node holds its depth, one can define f depth = (depth, [depth+1, depth+1]) and apply unfoldTree f 0, yielding a full infinite binary tree with levels expanding symmetrically, useful in simulations of fractal geometries or endless search spaces.
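A self-contained sketch of this depth-labeled example follows; the rose-tree type and the helper names depths and prune are illustrative assumptions, with prune truncating the infinite tree so a finite portion can be observed:

```haskell
-- Minimal rose-tree type for the sketch.
data Tree a = Node a [Tree a] deriving (Eq, Show)

-- Corecursive unfold for multi-ary trees, as in the text.
unfoldTree :: (s -> (a, [s])) -> s -> Tree a
unfoldTree f s = let (a, seeds) = f s in Node a (map (unfoldTree f) seeds)

-- Infinite binary tree in which every node carries its depth.
depths :: Tree Integer
depths = unfoldTree (\d -> (d, [d + 1, d + 1])) 0

-- Truncate to a finite depth so the infinite tree can be inspected.
prune :: Int -> Tree a -> Tree a
prune 0 (Node a _)  = Node a []
prune k (Node a ts) = Node a (map (prune (k - 1)) ts)
```

Laziness ensures that prune 2 depths forces only the first three levels of the infinite tree.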

Formalization

Mathematical Basis

Corecursion finds its mathematical foundation in the theory of coalgebras within category theory. For an endofunctor F on a category \mathcal{C}, an F-coalgebra is a pair (S, \alpha) where S is an object of \mathcal{C} and \alpha: S \to F S is a morphism. Corecursion arises as the unique homomorphism from (S, \alpha) to the final F-coalgebra (\nu F, \eta), where \nu F is the terminal object in the category of F-coalgebras, satisfying the property that for any coalgebra (S', \alpha'), there exists a unique h: S' \to \nu F such that the diagram commutes: \eta \circ h = F h \circ \alpha'. This unique homomorphism, often denoted as the anamorphism [\alpha]: S \to \nu F, defines the corecursive process by "unfolding" the state space S into the infinite structure captured by \nu F. A canonical example illustrates this in the category of sets, where F X = A \times X for a set A, modeling streams of elements from A. The final coalgebra (\nu F, \eta) has carrier \nu F = A^\mathbb{N}, the set of infinite sequences over A, with \eta decomposing each stream into its head (the first element, in A) and tail (the remaining stream, in \nu F). Given a coalgebra (S, \alpha: S \to A \times S), the corecursion [\alpha]: S \to A^\mathbb{N} produces, for each state s \in S, the infinite stream obtained by iteratively applying \alpha to generate successive heads and next states. This construction ensures that corecursion systematically builds potentially non-terminating or infinite data structures from finite descriptions. The carrier \nu F of the final coalgebra represents the greatest fixed point of X \cong F X, the largest solution accommodating all possible behaviors modeled by F. This contrasts with the least fixed point \mu F, the initial algebra, which underpins recursion on finite structures.
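The stream example admits a direct rendering in Haskell; the names Stream, ana, and takeS below are illustrative, with Stream a playing the role of the carrier A^\mathbb{N} and ana the anamorphism:

```haskell
-- Carrier of the final coalgebra for F X = A × X: infinite streams.
data Stream a = Cons a (Stream a)

-- The anamorphism: from any coalgebra alpha :: s -> (a, s), the
-- unique map into the final coalgebra, built by iterated unfolding.
ana :: (s -> (a, s)) -> s -> Stream a
ana alpha s = let (a, s') = alpha s in Cons a (ana alpha s')

-- Observe a finite prefix of a stream.
takeS :: Int -> Stream a -> [a]
takeS n _ | n <= 0    = []
takeS n (Cons a as)   = a : takeS (n - 1) as
```

For instance, ana (\n -> (n, n + 1)) 0 is the stream of naturals, and takeS extracts any finite prefix of it.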
The coinduction principle complements corecursion by providing a proof method for behavioral equivalence: two states s, s' \in S are bisimilar if there exists a bisimulation R such that (s, s') \in R, where bisimulations are spans of coalgebra morphisms, ensuring that bisimilar states map to identical elements of \nu F. This principle, dual to induction on initial algebras, verifies properties of infinite objects via one-step observations preserved across unfoldings.

Productivity Conditions

In corecursion, productivity refers to the property that a corecursive process generates elements of an infinite structure in finite time, ensuring that the computation does not diverge without producing output even when defining potentially infinite objects. This means that for any finite prefix of the output, there exists a finite number of computation steps to produce it, preventing non-termination while allowing the definition to be coinductively well-founded. A key mechanism to enforce productivity is guardedness, which requires that every corecursive call be protected by a constructor that produces an observable output before the recursion proceeds. In languages like Haskell, this is achieved syntactically: recursive calls must appear in positions guarded by data constructors, such as the cons operator (:) for lists, ensuring that the head element is computed finitely before recursing on the tail. For instance, the infinite list of natural numbers defined as nats = 0 : map (+1) nats is productive because the (:) constructor produces 0 immediately, and the recursive call under map is guarded. Coinductive proofs verify equality of streams through bisimulation, a relation that establishes behavioral equivalence between infinite structures by matching heads and relating tails coinductively. To prove two streams equal, one defines a bisimulation relation R such that if xs \, R \, ys, then the heads are equal and the tails are related by R; coinduction then lifts this to full equality. Consider Fibonacci streams: a productive definition is fibs = 0 : 1 : zipWith (+) fibs (tail fibs), where the constructors guard the recursive calls, allowing finite computation of any prefix. In contrast, a non-productive variant like badFibs = zipWith (+) badFibs (tail badFibs) fails guardedness: no constructor produces an element before the recursive calls, so demanding even the first element diverges without output, since both arguments depend on the not-yet-constructed stream.
For the common corecursive pattern encapsulated by the unfold combinator, defined as \text{unfold} \, p \, h \, t \, x = \text{if } p \, x \text{ then } [] \text{ else } h \, x : \text{unfold} \, p \, h \, t \, (t \, x), productivity holds under conditions that ensure state advancement in finite steps. Specifically, for infinite outputs the predicate p must be constantly false, the head function h must compute each output value finitely, and the transition t must produce a new state without diverging. A theorem sketch: if h and t are total functions (i.e., they terminate in finite steps for any input), then \text{unfold} \, p \, h \, t is productive, since each application produces a head via h before recursing on t's output; the claim follows by induction on the finite length of any requested prefix. This guarantees that the generated stream yields its elements sequentially without stalling.
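The three-argument combinator above can be sketched in Haskell as follows; the names unfoldL and countdown are illustrative (unfoldL is named to avoid confusion with other unfold variants in this article):

```haskell
-- unfold with termination predicate p, head function h, transition t.
unfoldL :: (s -> Bool) -> (s -> a) -> (s -> s) -> s -> [a]
unfoldL p h t x
  | p x       = []
  | otherwise = h x : unfoldL p h t (t x)

-- Finite case: counting down to zero.
countdown :: Integer -> [Integer]
countdown = unfoldL (== 0) id (subtract 1)
```

With p constantly False and total h and t (e.g. unfoldL (const False) (*2) (+1) 0), the result is an infinite but productive stream of even numbers.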

Applications

Functional Programming

In functional programming languages, corecursion is facilitated by lazy evaluation mechanisms that allow the construction of potentially infinite data structures without immediate computation of all elements. In Haskell, corecursive definitions are commonly expressed using higher-order functions such as unfoldr from the Data.List module, which generates lists by repeatedly applying a step function until a base case is reached, and iterate, which produces infinite lists by successive application of a function to an initial seed value. These functions leverage Haskell's laziness to ensure productivity, where each element is computed only when demanded. Similarly, in Scala, corecursion is supported through LazyList (previously Stream), where infinite sequences are built using the lazy cons operator #::, enabling recursive tail definitions that defer evaluation until elements are accessed. This synergy with lazy evaluation permits the handling of infinite data structures in total functional languages, avoiding runtime errors from non-termination as long as definitions remain productive. Corecursion provides practical benefits by allowing programmers to define composable data structures without resorting to explicit imperative loops, promoting declarative code that models ongoing processes naturally. For instance, it enables the modeling of event streams, where new events are appended lazily as they arrive, facilitating reactive programming paradigms. In parsing contexts, corecursive definitions can process unbounded input incrementally, composing parser combinators to handle potentially infinite token sequences without buffering the entire input upfront. These capabilities enhance modularity and reusability, as structures can be transformed and filtered functionally, mirroring the duality of recursion for finite deconstruction. However, implementing corecursion poses challenges related to non-productivity, where a corecursive definition might loop indefinitely without producing output, leading to black-hole effects in lazy evaluators like Haskell's.
To mitigate this, guarded corecursion enforces that recursive calls are protected by productive constructors, such as cons cells for streams, ensuring finite-time generation of initial elements before further unfolding. In practice, developers also employ fuel limits—passing a decrementing counter to approximate finite prefixes of structures for testing or bounded evaluation—though this trades totality for explicit termination control. A notable example of corecursion in Haskell is the stream-based sieve for generating an infinite list of primes, in the style of the sieve of Eratosthenes. This approach defines the primes as a lazy list whose first element is emitted immediately, while the tail recursively filters multiples of that prime from the remaining candidates. The code is:
```haskell
primes :: [Integer]
primes = sieve [2..]
  where
    sieve (p:ns) = p : sieve [n | n <- ns, n `mod` p /= 0]
```
This corecursive definition exploits lazy evaluation to produce primes on demand without computing the entire infinite list.
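The library combinators mentioned above, unfoldr and iterate, are the standard entry points for corecursion in Haskell; a short sketch (the names evensBelow and powersOfTwo are illustrative):

```haskell
import Data.List (unfoldr)

-- unfoldr stops when the step function returns Nothing:
-- here, the even numbers strictly below a bound.
evensBelow :: Integer -> [Integer]
evensBelow n = unfoldr (\k -> if k < n then Just (k, k + 2) else Nothing) 0

-- iterate never stops: the infinite stream of powers of two.
powersOfTwo :: [Integer]
powersOfTwo = iterate (* 2) 1
```

The first yields a finite list, the second an infinite one from which laziness lets any finite prefix be taken.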

Theoretical Models

In process calculi, corecursion provides a foundational mechanism for modeling the infinite behaviors of concurrent systems, particularly through coinduction, which contrasts with induction by focusing on greatest fixed points rather than least ones. In the Calculus of Communicating Systems (CCS), corecursive definitions capture ongoing interactions among processes, enabling the specification of non-terminating computations such as perpetual synchronization among communicating agents. Similarly, in the π-calculus, which extends CCS with dynamic channel mobility, corecursion supports coinductive reasoning to define behavioral equivalences that account for evolving communication topologies in reactive environments. Bisimulation, a key coinductive relation in these calculi, equates processes based on their observable actions over infinite traces, formalized via guarded corecursion to ensure productivity in unfolding process definitions. Denotational semantics interprets corecursive definitions using domain theory, where types are modeled as complete partial orders (cpos) equipped with continuous functions to handle fixed points of infinite objects. In this framework, a corecursive specification, such as a process generating an infinite stream, is assigned the semantics of a greatest fixed point of a continuous function on a domain, contrasting with the least fixed point used for recursive (inductive) definitions. Domains, often Scott domains, provide the partial order structure needed to approximate infinite computations through chains of finite approximations, ensuring that corecursive programs denote well-defined elements representing coinductive data like infinite lists or trees. This approach, rooted in the mathematical theory of program correctness, allows proving properties of corecursive programs via coinductive invariants that hold over the entire domain.
Corecursion finds significant applications in verification through coinductive logic, which is particularly suited to establishing properties of infinite-state systems, such as liveness in reactive systems where processes must respond indefinitely to inputs. Coinductive proofs leverage greatest fixed points to define predicates over infinite execution traces, enabling the verification of guarantees like eventual response or fairness in concurrent protocols. For instance, in systems modeled as transition systems with unbounded state spaces, coinduction verifies liveness by showing that certain actions recur infinitely often along all paths, using techniques like bisimulation up-to to compose proofs modularly. This is essential for reactive systems, where traditional inductive methods falter due to the absence of finite base cases. A specific theoretical model employs corecursive traces within the modal μ-calculus to characterize behavioral equivalences via greatest fixed points. The modal μ-calculus extends basic modal logic with least (μ) and greatest (ν) fixed-point operators, allowing formulas to define coinductive properties over labeled transition systems. Corecursive traces, representing infinite sequences of observable actions, are captured by formulas of the form νX. φ(X), where φ is a monotone operator on sets of states or paths, and the greatest fixed point denotes the largest set satisfying the recursive characterization—such as the set of all states bisimilar to a given process: \nu X .\ \phi(X). Here, satisfaction is coinductive: a state satisfies νX. φ(X) if it belongs to every approximant in the descending chain starting from the full state set, ensuring equivalence under behavioral relations like strong bisimulation in process calculi. This model underpins automated verification tools, as the alternation hierarchy of fixed points corresponds to expressive power for distinguishing infinite behaviors.
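The descending chain of approximants mentioned above can be made explicit; the following characterization is a standard sketch, valid when φ is monotone and the state space is finite (or, more generally, when φ is cocontinuous):

```latex
% Greatest fixed point as the intersection of a descending chain of
% approximants, starting from the full state set \top:
\nu X.\,\phi(X) \;=\; \bigcap_{n \ge 0} \phi^{n}(\top),
\qquad
\top \;\supseteq\; \phi(\top) \;\supseteq\; \phi^{2}(\top) \;\supseteq\; \cdots
```

Each approximant φⁿ(⊤) excludes states that can be refuted after n observation steps, so membership in every approximant is exactly the coinductive satisfaction described above.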

History

Early Concepts

The philosophical roots of corecursion trace back to ancient conceptions of infinity, notably Aristotle's distinction between potential and actual infinity in his works Physics and Metaphysics. Aristotle posited that the infinite exists only potentially—as an unending sequence or process that can always be extended further but never exists as a completed whole—rejecting actual infinity as incoherent and leading to paradoxes. This notion of potential infinity, exemplified by the unending division of a line or the perpetual generation of natural numbers, anticipates coinductive reasoning, where definitions unfold indefinitely through behavioral observation rather than finite exhaustion. Building on such ideas, early precursors to coinduction appeared in modal logic during the mid-20th century, where frameworks for reasoning about possibility and necessity involved fixed-point-like structures and dualities that echoed inductive processes in reverse. Saul Kripke's 1963 semantics for modal logic, using possible worlds and accessibility relations, provided a foundation for later coinductive proofs by allowing equivalence relations over infinite models without requiring termination. These developments highlighted bisimulation as a technique for establishing behavioral equalities in potentially infinite domains. In mathematics, category theory's advances introduced coalgebras as the categorical dual to algebras, emphasizing observational dynamics over generative construction and enabling the formal study of infinite and branching behaviors. Early categorical work in the 1940s, including that of J. H. C. Whitehead, laid groundwork for categorical dualities, but it was the 1965 paper by Eilenberg and J. B. Moore that first explicitly explored the coalgebraic structures later applied to state-based systems. These ideas culminated in the 1980s with Peter Aczel's formalization of final coalgebras within non-wellfounded set theory, where the final coalgebra for the powerset functor models infinite, self-referential sets.
Prior to these computer-science applications, work in set theory addressed hypersets—non-wellfounded sets featuring circular membership chains—and circular definitions, resolving self-referential paradoxes through coiterative constructions rather than the well-founded iterative hierarchy of Zermelo-Fraenkel set theory. Aczel's Anti-Foundation Axiom (AFA), introduced in 1988 alongside earlier proposals like those of Marco Forti and Furio Honsell in 1983, ensures unique solutions to equations like \Omega = \{\Omega, \{\Omega, \Omega\}\}, modeling infinite descending chains without vicious regress. This approach treats sets coinductively, identifying those bisimilar under the membership relation, and provides a rigorous basis for infinite objects in mathematics.

Key Developments

The concept of corecursion gained formal traction in functional programming through Richard Bird's 1984 paper, which introduced circular programs as a mechanism to optimize functional computations by avoiding multiple data traversals, laying early groundwork for corecursive schemes that generate data structures incrementally. This work anticipated aspects of anamorphisms, dual to catamorphisms, enabling the systematic construction of infinite or large data via unfolding generators rather than folding consumers. In the following years, researchers such as John Hughes advanced corecursion's integration into lazy functional languages, emphasizing unfolding transformations to eliminate intermediate structures and promote efficiency in non-strict evaluation environments. Complementing this, Lloyd Allison's 1989 analysis demonstrated the space and time efficiency of corecursive implementations of self-referential structures, such as queues, by leveraging circular definitions to achieve constant-time operations without explicit mutation. Around 2000, coalgebraic approaches gained prominence in computer science, with Rutten's 2000 framework of universal coalgebra providing a unified theory for modeling behaviors and coinductive types in systems like automata and processes. Corecursion subsequently expanded into proof assistants through coinductive types, with Coq incorporating coinductive types in the 1990s through Eduardo Giménez's work, formalized around version 6.1 in 1999. Similarly, Agda's treatment of coinductive types, building on Ulf Norell's 2007 thesis, enabled guarded corecursion for productive computations in dependent type theory. Later developments, including Atkey and McBride's framework for productive coprogramming with guarded recursion, provided type-theoretic guarantees for coinductive definitions, bridging corecursion with modal types for safer handling of infinite data.
Corecursion has thus evolved from these theoretical foundations into practical tools for functional and dependently typed programming, with ongoing refinements in type systems.
