Dependent type
In type theory and programming language design, a dependent type is a type whose formation depends on the value of a term, allowing types to be indexed by values and enabling the expression of properties and constraints directly within the type system.[1] Unlike non-dependent types, which classify terms independently of any particular value, dependent types permit constructions such as the type of vectors of length n (denoted Vec A n) for a given natural number n, encoding length information at the type level to prevent runtime errors such as out-of-bounds indexing.[1] Dependent types unify the notions of computation and specification, treating types as first-class programs that classify other terms while themselves being classifiable.[2] The concept of dependent types originated in the work of Per Martin-Löf, who introduced them as part of his intuitionistic type theory in the early 1970s, with a foundational presentation in his 1972 paper "An Intuitionistic Theory of Types," aiming to provide a constructive foundation for mathematics that bridges logic and computation.[3] Martin-Löf's framework extended earlier type theories, such as those of Alonzo Church and Haskell Curry, by incorporating dependent function types (Π-types) and dependent pair types (Σ-types), which allow functions and products to have types that vary with input values, thus supporting the propositions-as-types correspondence where proofs are computations.[2] This development addressed limitations in simple type theory by enabling predicative definitions and avoiding paradoxes like Girard's through stratified universes of types.[2] Dependent types form the basis for several modern proof assistants and dependently typed programming languages, including Coq, Agda, and Lean, where they facilitate formal verification of mathematical theorems and software correctness by encoding invariants and relations at the type level.[4] For instance, in Agda, dependent types enable the definition of finite sets
like Fin n (natural numbers less than n), which can be used to prove properties such as array bounds safety without runtime checks.[1] Their expressive power supports advanced features like equality types for reasoning about program equivalence, though they introduce challenges in decidability and inference compared to simpler type systems.[2]

Fundamentals
Definition and Motivation
A dependent type is a type whose formation depends on the value of a prior term, allowing types to act as predicates over values rather than merely classifying terms into fixed categories.[5] In contrast to non-dependent types in simple type systems, which statically classify terms without incorporating value dependencies, dependent types integrate term values directly into type construction, enabling more expressive and precise specifications.[2] This extension addresses key limitations of simply typed lambda calculus by bridging programming and proving: types can encode properties such as "vectors of length n," where n is a term value, thereby verifying program behaviors at compile time and reducing runtime errors.[2] Dependent types thus enhance expressiveness, allowing the type system to capture relational invariants between data and computation that simpler systems cannot.[5] Originating in Per Martin-Löf's intuitionistic type theory, dependent types model propositions as types under the Curry-Howard correspondence, where proofs correspond to programs, fostering constructive mathematics and computational logic.[6] This framework introduces mechanisms like dependent function (Π) and pair (Σ) types to realize such dependence, setting the foundation for advanced type-theoretic systems.[5]

Basic Syntax and Examples
Dependent types allow the type of an expression to depend on the value of another expression, enabling more precise specifications of program behavior at the type level. A basic form of this is the dependent function type, informally written as f : (x : A) → B(x), where the return type B varies based on the specific value provided for x of type A. This syntax extends traditional function types by incorporating dependencies, bridging the gap between programming constructs and logical propositions as motivated in foundational type theories.[7]
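As a concrete illustration, here is a minimal sketch in Lean 4 of a type that depends on a term; Tuple and replicate are illustrative names introduced here, not library functions.

```lean
-- A type-level function: `Tuple α n` is the type of n-tuples of α,
-- computed by recursion on the *value* n.
def Tuple (α : Type) : Nat → Type
  | 0     => Unit
  | n + 1 => α × Tuple α n

-- A dependent function type in action: the result type varies with n.
def replicate (x : α) : (n : Nat) → Tuple α n
  | 0     => ()
  | n + 1 => (x, replicate x n)

#check replicate "a" 3   -- replicate "a" 3 : Tuple String 3
```

Here (n : Nat) → Tuple α n is exactly the informal (x : A) → B(x) above: the codomain Tuple α n is recomputed for each argument value.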
A classic example is the vector type Vec A n, which represents a list of elements of type A with length exactly n, where n is a natural number term. This ensures that array lengths are verified during type checking, preventing mismatches at runtime. For instance, an append function can be typed as append : Vec A m → Vec A n → Vec A (m + n), guaranteeing that the resulting vector's length is the sum of the inputs' lengths, thus preserving the length invariant without additional runtime assertions.[7][1]
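The vector example can be sketched directly in Lean 4 (Vec here is a local definition for illustration, not the standard library's type):

```lean
-- A length-indexed vector: `Vec α n` is indexed by the term `n : Nat`.
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- The type records that the output length is the sum of the input lengths.
-- (The index is written `n + m` rather than `m + n` so that each equation
-- type-checks by definitional equality of Nat addition.)
def Vec.append : Vec α m → Vec α n → Vec α (n + m)
  | .nil,       ys => ys
  | .cons x xs, ys => .cons x (xs.append ys)
```

Any implementation that dropped or duplicated an element would change the length index and be rejected at compile time.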
Another illustrative case is safe division, where the type encodes a proof that the divisor is nonzero. The function can be specified as div : (x : Nat) → (y : Nat) → y ≠ 0 → Nat, requiring an explicit argument witnessing y ≠ 0 before returning the quotient. This compile-time check eliminates division-by-zero errors, as the type system rejects any call without the required proof, enforcing the safety invariant statically.[8]
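A Lean 4 sketch of the same idea (safeDiv is an illustrative name):

```lean
-- The caller must supply a proof that the divisor is nonzero; the proof
-- is needed at type-checking time but has no runtime cost.
def safeDiv (x y : Nat) (h : y ≠ 0) : Nat := x / y

#eval safeDiv 10 2 (by decide)   -- 5
-- `safeDiv 10 0 (by decide)` fails to elaborate: `decide` cannot prove 0 ≠ 0.
```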
These examples demonstrate how dependent types encode program invariants—such as fixed lengths or nonzero conditions—directly in the type signatures, allowing the compiler to verify correctness and avoid runtime overhead for these checks.[7][1][8]
Historical Development
Origins in Logic and Type Theory
The origins of dependent types can be traced to early efforts in mathematical logic to resolve foundational paradoxes and enhance expressive power in formal systems. Bertrand Russell introduced the theory of types in 1908 as a ramified hierarchy to avoid paradoxes like Russell's paradox, which arises from naive set theory; this stratified approach to types laid the groundwork for later type systems by enforcing levels of quantification and predication to prevent self-referential inconsistencies.[9] Building on this, Alonzo Church formalized the simple theory of types in 1940, presenting a typed λ-calculus with basic types and function types that provided a consistent foundation for logic and computation, inspiring extensions like dependent types to allow types to depend on values for greater flexibility in expressing mathematical structures.[10] A pivotal early implementation came with the Automath project, initiated in 1968 by Nicolaas Govert de Bruijn at Eindhoven University of Technology. Automath was the first computer-based system to use dependent types for formalizing and verifying mathematical proofs, employing a typed lambda calculus where types could depend on terms to express complex dependencies in mathematical definitions. This work exploited the Curry–Howard correspondence, treating proofs as typed programs, and successfully formalized texts like Landau's Foundations of Analysis, demonstrating the viability of dependent types for rigorous mathematical verification.[11] In the 1960s, William Howard developed unpublished manuscripts exploring typed λ-calculi with dependencies, extending simple types to incorporate value-dependent typing in a way that bridged logic and computation, foreshadowing the integration of proofs as computational terms. 
This work aligned with the emerging Curry-Howard isomorphism, first articulated by Haskell Curry in the 1930s and refined by Howard in 1969, which establishes a correspondence between proofs in intuitionistic logic and programs in typed λ-calculus, where implications map to function types and proofs become executable terms, enabling dependent types to represent propositions with their proofs as inhabitants. Per Martin-Löf advanced these ideas in the 1970s through his intuitionistic type theory (ITT), introduced in a 1972 lecture and further developed in subsequent works, positing types as propositions in a constructive framework where dependent types serve as the core mechanism for formalizing intuitionistic mathematics, allowing types to be defined dependently on prior terms to capture judgments like "A is a type" or "a is of type A."[3] In ITT, this approach treats proofs as constructive evidence, with dependent types—such as Π types—emerging naturally to encode quantified propositions, thus unifying logic, types, and computation in a single system.[5]

Key Milestones and Influential Works
In 1984, Per Martin-Löf published Intuitionistic Type Theory, a seminal work that refined the foundations of dependent types within intuitionistic type theory (ITT), emphasizing predicative mathematics and providing a constructive framework for type-theoretic reasoning.[6] This book formalized the meaning explanations for judgments in ITT, enabling precise definitions of dependent types as predicates over prior types, which addressed consistency issues in earlier formulations and laid the groundwork for subsequent implementations.[6] Building on these foundations, Henk Barendregt introduced the lambda cube in 1991, a classification system that systematized various extensions of typed lambda calculi, including those incorporating dependent types through Π-types and abstraction over types.[12] Barendregt's framework, presented in his work on generalized type systems, highlighted the relationships between simply typed lambda calculus, System F, and full dependent type theories, facilitating comparisons and influencing the design of type systems in proof assistants.[12] Jean-Yves Girard's development of System F in the early 1970s provided a polymorphic foundation that intersected with dependent types, as its universal quantification over types prefigured the dependent product types in later systems, though System F itself remains impredicative. Girard's work demonstrated the expressive power of higher-order polymorphism, which extensions like F_ω incorporated into dependent settings, bridging functional programming and type theory. During the late 1980s and early 1990s, the Coq proof assistant emerged as a practical implementation of dependent types, initially developed at INRIA and based on the Calculus of Inductive Constructions (CIC), an extension of the Calculus of Constructions by Thierry Coquand and Christine Paulin-Mohring. 
Coq's initial release in the late 1980s, with version 5.1 starting in 1989 and formalized in the 1990 paper "Inductively Defined Types," popularized dependent types for formal verification by enabling inductive definitions and proof scripting in a dependently typed environment. This system demonstrated the feasibility of large-scale mathematical proofs, such as the Four Color Theorem formalization, and spurred adoption in academia and industry. In the 2000s, Bengt Nordström and collaborators advanced ITT implementations through practical programming applications, notably in the 1990 book Programming in Martin-Löf's Type Theory, which introduced executable specifications and type checking algorithms for dependent types in constructive settings. Nordström's contributions, including work on type inference and the Alf programming language, emphasized the computational interpretation of ITT, influencing educational tools and proof development. The 2000s and 2010s saw dependent types integrated into general-purpose programming languages, beginning with Agda in 2007, developed by Ulf Norell at Chalmers University as a dependently typed functional language based on Martin-Löf's ITT with universes.[13] Agda's design, detailed in Norell's thesis Towards a Practical Programming Language Based on Dependent Type Theory, supported interactive theorem proving and verified programming through pattern matching on dependent types.[13] Following this, Idris was released in 2011 by Edwin Brady, offering a Haskell-like syntax with full dependent types, totality checking, and effects, as outlined in the PLPV paper "Idris: Systems Programming Meets Full Dependent Types."[14] Idris advanced practical software development by embedding proofs in code and compiling to efficient executables.[14] Lean, initiated in 2013 by Leonardo de Moura at Microsoft Research, further extended dependent types for mathematical formalization through its kernel based on CIC with a focus on performance and 
automation.[15] Lean's development emphasized a minimal trusted kernel and tactics for proof automation, enabling projects like the formalization of advanced theorems in mathematics, and its open-source nature has fostered a large community.[15] In the 2020s, ongoing advancements included the release of Idris 2 around 2021, which introduced a core language based on quantitative type theory (QTT) to support linear and dependent types more efficiently for systems programming and verification.[16] Similarly, Lean 4, officially released in 2023, represented a complete reimplementation allowing compilation to C code, enhancing performance for both theorem proving and general-purpose programming applications.[17]

Formal Definition
Dependent Function Types (Π Types)
Dependent function types, denoted as Π types, form the core construct for expressing functions whose codomain depends on the value of their argument in dependent type theory. These types generalize ordinary function types by allowing the return type to vary with the specific input, enabling precise specifications of computational behavior and logical implications. Introduced in the framework of intuitionistic type theory, Π types provide a foundation for encoding universal quantification under the Curry-Howard isomorphism.[6] The syntax of a Π type is given by \Pi x : A. B(x), where A is a type (the domain), x is a variable ranging over elements of A, and B(x) is a type that may depend on the term x : A (the codomain family). This notation indicates that for each possible value of x in A, the function must produce a result of type B(x). Formation of such a type requires that A is a valid type and that B forms a well-defined family of types indexed by elements of A.[6] Semantically, a Π type \Pi x : A. B(x) represents the type of all functions f such that for every a : A, f(a) : B(a), capturing computations where the output type is contingent on the input value. In the logical interpretation via the Curry-Howard correspondence, Π types correspond to universal quantification \forall x : A. P(x), where proofs of the quantified proposition are witnessed by lambda abstractions serving as proof terms. This duality underscores their role in both programming and proof construction, ensuring that dependencies are respected throughout.[6]

The typing rules for Π types follow standard introduction and elimination judgments in a context \Gamma. For introduction, if A is a well-formed type and \Gamma, x : A \vdash b : B(x), then \Gamma \vdash \lambda x : A. b : \Pi x : A. B(x), where \lambda x : A. b is the lambda abstraction introducing a dependent function term. For elimination, if \Gamma \vdash f : \Pi x : A. B(x) and \Gamma \vdash t : A, then \Gamma \vdash f \, t : B(t), applying the function to the argument while substituting into the dependent codomain. These rules ensure type safety by preserving the dependency structure in the context.[6] Reduction in Π types is governed by β-reduction, which computes the application of a lambda abstraction: (\lambda x : A. b) \, t \to b[t/x], where b[t/x] denotes the substitution of t for free occurrences of x in b, provided t : A. This reduction preserves the dependent typing, as the resulting term inhabits B(t). Equality rules further ensure that applications are equivalent under substitution, maintaining computational consistency.[6]

A canonical example is the polymorphic identity function \mathrm{id} \equiv \lambda A : \mathrm{Type}. \lambda x : A. x, whose type is \Pi A : \mathrm{Type}. \Pi x : A. A: it takes a type A and a value x of that type, returning x itself; its typing demonstrates nested dependencies, first over types and then over terms.[6]

Dependent Pair Types (Σ Types)
Dependent pair types, also known as Σ-types, generalize Cartesian product types to the dependent setting, where the type of the second component depends on the value of the first.[6] The syntax for a dependent pair type is \Sigma_{x : A} B(x), which pairs a value x of type A with a value in the dependent type B(x).[18] This construction allows for terms of the form (a, b), where a : A and b : B(a), serving as the introduction form for Σ-types.[19] Semantically, Σ-types represent dependent sums, often interpreted as the disjoint union \sum_{a \in A} B(a) = \bigcup_{a \in A} (\{a\} \times B(a)), encoding existential quantification \exists x : A. P(x) by witnessing the existence of x along with a proof or value b satisfying P(x).[18] The projections \mathrm{fst} (or p) and \mathrm{snd} (or q) enable elimination: for c : \Sigma_{x : A} B(x), \mathrm{fst}(c) : A recovers the first component, and \mathrm{snd}(c) : B(\mathrm{fst}(c)) recovers the second.[6]

The typing rules for Σ-types ensure well-formedness and correctness. The formation rule states that if \Gamma \vdash A : \mathsf{type} and \Gamma, x : A \vdash B(x) : \mathsf{type}, then \Gamma \vdash \Sigma_{x : A} B(x) : \mathsf{type}.[19] The introduction rule requires \Gamma \vdash a : A and \Gamma \vdash b : B(a) (the type of the second component arising by substituting a for x in B), yielding \Gamma \vdash (a, b) : \Sigma_{x : A} B(x).[18] Elimination via projections follows: if \Gamma \vdash t : \Sigma_{x : A} B(x), then \Gamma \vdash \mathrm{fst}(t) : A and \Gamma \vdash \mathrm{snd}(t) : B(\mathrm{fst}(t)).[6]

A representative example of Σ-types encoding existential quantification is the statement "there exists a natural number n such that n is even," formalized as \Sigma_{n : \mathsf{Nat}}. \mathsf{Even}(n), where \mathsf{Even} is a type family defining evenness (e.g., \mathsf{Even}(n) \equiv \Sigma_{k : \mathsf{Nat}}. (n = 2k)).[18] A term witnessing this might be (2, (1, \mathsf{refl})) : \Sigma_{n : \mathsf{Nat}}. \mathsf{Even}(n), pairing the even number 2 with evidence via k = 1.[18] In relation to dependent function types (Π-types), Σ-types are termed "dependent sums" to contrast with the "dependent products" of Π-types, reflecting the correspondence in which Σ encodes existential quantification and Π universal quantification. Like Π-types, Σ-types have β-reduction rules, here for the projections—\mathrm{fst}(a, b) \to a and \mathrm{snd}(a, b) \to b—together with an η-rule ensuring p = (\mathrm{fst}(p), \mathrm{snd}(p)) for p : \Sigma_{x : A} B(x).[19]

Type Systems
The Lambda Cube Framework
The lambda cube, introduced by Henk Barendregt in 1991, serves as a classification framework for typed lambda calculi that incorporate dependent types, organizing eight key systems in a three-dimensional structure based on inclusion relations.[12] This cube provides a systematic way to explore the spectrum from simple typed lambda calculi to more expressive systems supporting full dependent types, polymorphism, and higher-order constructions, all formalized within the broader paradigm of pure type systems (PTS). Pure type systems form the foundational framework for the lambda cube, consisting of a simply typed lambda calculus extended with sorts—such as * for types and □ for kinds—and inference rules that govern type formation, introduction, and elimination.[12] In PTS, a system is specified by a set of sorts and relations between them, enabling the uniform definition of dependent function types (Π-types) and other constructors; for instance, rules allow Π-types to be formed when the domain and codomain satisfy appropriate sort constraints, such as Πx:A.B : s where A : s' and B : s. This generality allows PTS to encompass a wide range of type theories while maintaining decidable type checking in many cases.

The lambda cube visualizes these systems as the vertices of a three-dimensional cube, starting from the simply typed base λ→, in which only terms depend on terms, with each dimension enabling a new form of dependency. The first dimension introduces polymorphism—terms depending on types—via universal quantification over types, as in System F (λ2). The second dimension adds type operators—types depending on types—allowing abstraction over types within type expressions. The third dimension adds dependency proper through Π-types, where types can depend on term values, as in the dependently typed system λP (also written λΠ). These axes intersect to yield the eight vertices: λ→ at one corner; the single-feature systems λ2 (polymorphism) and λP (dependent types), together with the system of type operators alone; their pairwise combinations, among them System Fω (polymorphism plus type operators); and, at the opposite corner, the Calculus of Constructions λC, which combines all three extensions.[12] Along the dependency axis, enabling Π-types fundamentally allows types to vary based on computational content, distinguishing systems like λP from non-dependent ones; for example, in λP one can express types such as the vector type Vec(n, A), where n is a natural-number term and A a type.[12] This structure highlights inclusions such as λ→ ⊆ λ2 ⊆ Fω ⊆ λC and λ→ ⊆ λP ⊆ λC, facilitating comparisons of expressiveness and of properties like strong normalization across the cube.

First-Order and Higher-Order Variants
In the lambda cube framework, first-order dependent types correspond to the system denoted as \lambda \Pi (Barendregt's \lambda P), which extends the simply typed lambda calculus by allowing types to depend on terms through dependent function types (\Pi-types).[20] In this system, the only new dependencies permitted are types depending on terms, with no support for polymorphism or type-level functions; for example, a function type \Pi x : A . B(x) has B as a type that varies based on the term x : A, but neither A nor B can abstract over type variables.[20] This formulation is captured as a pure type system with sorts * (types) and \square (kinds), an axiom * : \square, and rules (*, *) for term-level functions and (*, \square) for dependent types, ensuring terms remain first-order in their dependencies.[20] The expressiveness of \lambda \Pi is thus limited to basic dependent typing without the ability to quantify over types, restricting its use to scenarios where term-level variation suffices for specification.

The second-order variant, \lambda \Pi_2, builds on \lambda \Pi by incorporating polymorphism, akin to an extension of System F with dependent types, where terms can now depend on types through universal quantification.[20] This allows for abstraction over type variables in dependent types, such as \Pi \alpha : \square . \Pi x : A(\alpha) . B(x, \alpha), enabling polymorphic functions whose behavior depends both on term arguments and type parameters.[20] Formally, it adds the rule (\square, *) to the pure type system of \lambda \Pi, permitting type abstractions in term contexts and thus supporting quantification over types while retaining term dependencies on terms.[20] Developed as part of the lambda cube classification, this system enhances expressiveness by allowing proofs and programs to generalize over types, bridging simple dependent typing with polymorphic reuse, though it still lacks type-level functions.[21]

Higher-order variants add type operators—types depending on types—via the rule (\square, \square), permitting type-level abstractions such as \lambda \alpha : * . \alpha \to \alpha of kind * \to * and higher-kinded constructors like functors.[20] Combining the full set of rules (*, *), (\square, *), (*, \square), and (\square, \square) yields the richest system of the cube, the Calculus of Constructions, in which terms and types may depend on each other freely and impredicative definitions are possible.[20][22] Introduced by Coquand and Huet in 1985 and refined in 1988, this calculus enables advanced type-level computation, such as defining functors or encoding logical implications at the type level, and underpins proof assistants like Coq, where its extension with inductive types supports formal verification of complex mathematical structures through impredicative quantification over a sort like Prop.[22]

The key differences across variants thus lie in dependency scope: first-order \lambda \Pi confines expressiveness to types indexed by terms, lacking generalization over types; second-order \lambda \Pi_2 adds quantification over types for terms but no type-level functions; the higher-order systems culminating in the Calculus of Constructions enable proofs and computations at the type level, vastly expanding the ability to encode logical and categorical concepts directly in types.[20]

Applications
In Proof Assistants and Formal Verification
Dependent types play a central role in proof assistants by enabling the Curry-Howard isomorphism, which identifies propositions with types and proofs with terms inhabiting those types. Under this correspondence, a proposition is expressed as a dependent type, and a proof is a term of that type; for instance, universally quantified statements are represented as dependent function types (Π-types), with ∀x:A. B x generalizing the non-dependent function type A → B. This allows proof assistants like Coq to treat logical deduction as type checking, ensuring that every proof term is well-typed and thus valid.[23][24] In formal verification, dependent types facilitate the encoding of complex mathematical theorems by supporting inductive definitions that capture infinite structures with finite descriptions. A prominent example is the Four Color Theorem, formally verified in Coq, where dependent types define planar graphs via hypermaps and inductive predicates ensure planarity and coloring properties, with the proof relying on case analysis over a finite set of reducible configurations.[25] Type checkers in such systems guarantee totality, preventing non-terminating computations and ensuring all proofs are constructive. Dependent sum types (Σ-types) support existential quantifiers in proofs, encoding statements like "there exists x in A such that P x" as pairs (x, proof of P x).[26] Inductive types model data structures like lists in proof assistants; for example, in Coq, the list type is defined inductively as Inductive list (A : Type) : Type := nil : list A | cons : A → list A → list A, with dependent variants allowing lengths or indices to refine types, such as vectors ensuring exact lengths. The type checker enforces totality by verifying that recursive definitions are well-founded, eliminating partial functions and enabling reliable induction principles for proofs.[26]
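These correspondences can be sketched in a few lines of Lean (MyList is a local stand-in mirroring the Coq list definition above):

```lean
-- Propositions as types: a proof of an implication is literally a function.
theorem modusPonens (P Q : Prop) (f : P → Q) (p : P) : Q := f p

-- An inductive list type, mirroring the Coq definition in the text.
inductive MyList (α : Type) : Type where
  | nil  : MyList α
  | cons : α → MyList α → MyList α

-- An existential encoded as a dependent pair: a witness plus a proof.
theorem existsEven : ∃ n : Nat, n % 2 = 0 := ⟨4, by decide⟩
```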
Dependent types extend logic programming techniques through unification algorithms adapted to handle dependencies, supporting automated proof search by matching terms against goals while respecting type indices. In this setting, unification resolves placeholders in proof terms, enabling tactics to instantiate variables dependently. For instance, Lean's tactic system leverages dependent types for elaboration, where holes in partial proofs are filled via higher-order unification and type inference, allowing users to write incomplete terms that the system completes while preserving type correctness.[27][28][29]
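For example, in Lean the elaborator solves holes by unifying them against the expected dependent type:

```lean
-- The underscore is a hole (metavariable). Unifying the type of `rfl`
-- with `?n = 2 + 1` forces the elaborator to solve ?n := 2 + 1.
example : ∃ n : Nat, n = 2 + 1 := ⟨_, rfl⟩
```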
In Programming Languages and Software Development
Dependent types enable programmers to express precise behavioral properties directly in type signatures, enhancing code safety and reducing runtime errors. In languages like Idris, the Vect type family exemplifies this expressiveness by indexing lists with their exact length, ensuring that operations on vectors respect this invariant at compile time. For instance, the Vect n a type represents a list of n elements of type a, constructed via Nil : Vect Z a for the empty case and (::) : a -> Vect k a -> Vect (S k) a for cons, where S is the successor function on natural numbers. This dependency prevents common off-by-one errors, such as indexing beyond bounds, by requiring proofs of valid indices via the Fin n type, which encodes bounded natural numbers from 0 to n-1. The index : Fin n -> Vect n a -> a function thus guarantees safe access, rejecting invalid indices during type checking.[30]
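Lean offers the same guarantee through List.get, whose index has type Fin xs.length (a small sketch; recent Lean versions also provide the bracket notation xs[i], which discharges the bounds proof similarly):

```lean
def xs : List Nat := [10, 20, 30]

-- The index carries a proof that it is in bounds, so out-of-bounds
-- access is a type error rather than a runtime exception.
#eval xs.get ⟨1, by decide⟩   -- 20
-- `xs.get ⟨3, by decide⟩` is rejected: `decide` cannot prove 3 < xs.length.
```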
These expressive types provide compile-time guarantees for critical properties, including inversion principles that ensure functions behave as specified. In safe string parsing, dependent types can enforce that a parser consumes the entire input, inverting the string to a value accompanied by a proof of full consumption. This is achieved through total parser combinators, where the type of a parser depends on the input length and guarantees termination without partial results or left recursion issues. For example, in Agda, parsers are defined such that successful parsing yields a result with an equality proof that the remaining input is empty, preventing subtle bugs like unhandled suffixes in string processing. Such guarantees shift error detection from runtime to compile time, improving software reliability in development workflows.[31]
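The "full consumption" idea can be sketched as a dependent record in Lean 4; FullParse and parseLength are hypothetical names for illustration, not an actual parser-combinator API:

```lean
-- A parse result that carries a proof that no input remains.
structure FullParse (input : List Char) (α : Type) where
  value   : α
  rest    : List Char
  allUsed : rest = []

-- A trivial "parser" that consumes everything; the proof obligation
-- `rest = []` is discharged by rfl.
def parseLength (input : List Char) : FullParse input Nat :=
  { value := input.length, rest := [], allUsed := rfl }
```

A parser that left input unconsumed simply could not produce the required proof, so the bug is caught at compile time.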
Dependent types also integrate seamlessly with effectful programming, allowing types to track state or I/O dependencies explicitly. In Agda, state-dependent IO monads extend the standard IO monad by making the return type depend on the initial state, ensuring that operations respect state transitions through proofs. This approach, formalized in type theory, verifies monad laws and prevents invalid state manipulations at compile time, as seen in implementations where IO actions carry dependent types reflecting post-state properties. Similar mechanisms in Idris support resource-aware effects, where types encode protocols for handling mutable state or external interactions safely.[32]
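A minimal sketch of state-indexed types in Lean 4 (a toy file-handle protocol; all names here are illustrative):

```lean
inductive HState where
  | opened
  | closed

-- A handle indexed by its protocol state.
structure Handle (s : HState) where
  name : String

-- `close` accepts only an open handle, and its result type records the
-- transition; closing a handle twice is a type error.
def close (h : Handle .opened) : Handle .closed := ⟨h.name⟩
```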
Despite these benefits, practical use of dependent types in software development faces challenges, particularly around decidability of type checking, which often requires manual proofs. Full-spectrum dependent types can lead to undecidable unification problems, where the type checker cannot automatically confirm equalities involving computed indices, necessitating explicit propositional equality proofs (e.g., using reflexivity or congruence lemmas). This manual intervention slows development and complicates refactoring, as changes to implementations may break dependent equalities that were previously automatic. In systems like Lean and Idris, developers must supply proofs for such cases, balancing expressiveness against usability.[33]
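A two-line Lean example shows the boundary between automatic (definitional) and manual (propositional) equality that this paragraph describes:

```lean
-- `n + 0 = n` holds by definitional equality, since Nat.add recurses on
-- its second argument, so `rfl` suffices.
example (n : Nat) : n + 0 = n := rfl

-- `0 + n = n` is not definitional for an open `n`; the type checker needs
-- an explicit propositional proof, here the library lemma Nat.zero_add.
example (n : Nat) : 0 + n = n := Nat.zero_add n
```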
A key feature enabling practical programming with dependent types is dependent pattern matching, which allows exhaustive, type-safe case analysis informed by values in types. In Idris, pattern matching on dependent data like Vect refines the context for subsequent branches, ensuring coverage and inferring proofs automatically where possible. For example, matching on Vect n a splits cases into Nil (when n = Z) and x :: xs (when n = S k), with the type checker adjusting goals based on these constructors to guarantee totality and safety. This mechanism supports concise, verified implementations of algorithms on indexed structures, reducing boilerplate while maintaining strong guarantees.[30]
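In Lean 4, the same index refinement occurs when matching on a length-indexed vector (Vec is defined locally so the sketch is self-contained):

```lean
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- Matching on a `Vec α (n + 1)` refines the index: the `nil` case would
-- force n + 1 = 0, so the single `cons` arm is exhaustive and `head` is
-- total without an "empty vector" error case.
def Vec.head : Vec α (n + 1) → α
  | .cons x _ => x
```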
Comparisons and Implementations
Feature Comparisons Across Systems
Dependent type systems vary significantly in their foundational calculi, impacting expressiveness, decidability of type checking, and supported programming paradigms. Coq is based on the Calculus of Inductive Constructions (CIC), which is higher-order and supports inductive types for defining data structures and proofs. Agda implements Martin-Löf intuitionistic type theory (MLTT) with inductive families, allowing value indices to refine types and equality judgments.[34] Idris extends a dependently typed lambda calculus with a focus on totality, incorporating effect systems for practical programming. Lean employs a dependent type theory based on CIC, extended with definitional proof irrelevance and quotient types, with an emphasis on performance and mathematical formalization. The following table summarizes core feature differences across these systems:

| Feature | Coq (CIC) | Agda (MLTT) | Idris (dependently typed λ-calculus) | Lean (CIC variant) |
|---|---|---|---|---|
| Foundational Calculus | Higher-order, inductive types | Inductive families | Totality-focused, effect handlers | Proof-irrelevant, universe polymorphism |
| Universe Management | Cumulative hierarchy (Type(i)) | Explicit levels (Set_i) | Implicit cumulative hierarchy | Hierarchical levels with max operation |
| Pattern Matching | Refined match with indices | Dependent pattern matching, copatterns | Advanced dependent matching with views | Eliminators with dependent cases |
| Termination Checking | Guarded fixpoints and structural checks | Structural recursion enforcement | Totality checker for total functions | Built-in, with well-founded recursion support |
| Paradigm (Pure/Impure) | Pure functional | Pure functional | Supports impure via effects/IO | Pure functional |
| Predicativity | Impredicative in Prop, predicative in Type | Predicative | Predicative | Impredicative in Prop, predicative in Type |