
Intuitionistic type theory

Intuitionistic type theory (ITT) is a formal system developed by Swedish logician Per Martin-Löf as a foundation for constructive mathematics, first presented in its predicative form during the 1973 Logic Colloquium in Bristol and published in 1975. The theory integrates type theory with intuitionistic logic, embodying the propositions-as-types principle where propositions correspond to types and their proofs to terms inhabiting those types, enabling explicit constructions without reliance on classical principles like the law of excluded middle. At its core, ITT is predicative, meaning it avoids impredicative definitions that quantify over the totality of types, thus maintaining constructivity and avoiding paradoxes associated with unrestricted set formation. The system revolves around judgments, the basic statements of the theory, such as "A is a type," "a belongs to type A," "A equals B," and "a equals b in A," which capture multiple interpretations including logical assertions, mathematical sets, and computational programs. These judgments are governed by inference rules divided into formation rules (specifying when a type exists), introduction rules (constructing canonical terms), elimination rules (using terms to derive new ones), and equality rules (ensuring computational consistency). Key type formers in ITT include dependent function types (Π-types) for universal quantification and implication, dependent pair types (Σ-types) for existential quantification and conjunction, and inductive types such as the natural numbers (N), which support recursion and induction in a constructive manner. Universes (U) provide a hierarchical structure for types of types, allowing the formation of transfinite hierarchies while remaining predicative, as there is no universal set encompassing all types. This setup aligns with the Brouwer-Heyting-Kolmogorov interpretation of intuitionistic logic, where the truth of a proposition is equated with its provability, and supports principles like the axiom of choice through explicit constructions rather than axioms. Historically, ITT evolved from earlier type theories, such as Russell's ramified type theory, by incorporating constructive operations like Cartesian products and disjoint unions to enable the formalization of substantial portions of analysis and other mathematics without non-constructive assumptions. Martin-Löf's lectures in Padua (1980) further refined the theory, emphasizing its philosophical underpinnings in meaning explanations for judgments and its duality between logic and computation. In applications, ITT has profoundly influenced computer science, serving as the logical foundation for dependent type systems in proof assistants such as Agda, which implements a variant of intensional ITT to verify mathematical proofs and software correctness through type checking. Its emphasis on explicit proofs and computational content has also inspired extensions like the Calculus of Constructions, underpinning tools like Coq for formal verification.

Introduction and Historical Context

Overview

Intuitionistic type theory (ITT) is a dependent type theory grounded in the principles of intuitionism, a philosophy of mathematics that prioritizes constructive proofs as evidence of mathematical truth, rejecting non-constructive existence proofs that rely on the law of excluded middle. In ITT, mathematical objects and proofs are formalized through typed calculi where constructions must be explicitly computable, ensuring that every proof corresponds to an effective method for building the asserted object. This approach stems from the work of Per Martin-Löf, who introduced ITT in the 1970s as a framework for constructive mathematics. The core goals of ITT include providing a unified foundation for mathematics, logic, and computer science, where types serve as propositions and proofs as programs via the Curry-Howard isomorphism. This correspondence allows logical statements to be interpreted computationally, enabling the extraction of algorithms from proofs and supporting formal verification in computational settings. ITT plays a pivotal role in modern proof assistants such as Agda, Coq, and NuPRL, which implement variants of the theory to mechanize mathematical reasoning and ensure proof correctness. It also influences type-safe programming languages by promoting dependent types that enforce stronger guarantees at compile time. Unlike classical type theories, ITT rejects the law of excluded middle, emphasizing computable constructions over arbitrary existential claims.

Development and Key Figures

The foundations of intuitionistic type theory trace back to L.E.J. Brouwer's intuitionism, a philosophical approach to mathematics emphasizing mental constructions and rejecting the law of excluded middle, which Brouwer began developing in his 1907 doctoral thesis and elaborated in subsequent works during the early 20th century. Brouwer's ideas rejected classical logic's reliance on non-constructive proofs, advocating instead for mathematics as a free creation of the human mind grounded in temporal intuition. This intuitionistic program was formalized in logical terms by Brouwer's student Arend Heyting, who in 1930 provided the first axiomatic system for intuitionistic logic, capturing the constructive principles through inference rules that require explicit constructions for existential claims and disjunctions. Intuitionistic type theory emerged as a formal system within this tradition through the work of Per Martin-Löf, who introduced its core ideas in lectures delivered in 1971 and 1972. His initial 1971 formulation was impredicative, featuring a type of all types, but was shown inconsistent by Jean-Yves Girard due to the paradox now known as Girard's paradox; Martin-Löf then revised it to a predicative version in 1972, introducing hierarchical universes to avoid such inconsistencies while maintaining constructivity. He presented this predicative theory at the 1973 Logic Colloquium and published it in 1975, establishing ITT as a foundation for constructive mathematics. Martin-Löf, a Swedish logician and philosopher, developed the theory to unify logic, types, and computation, where types serve as propositions and terms as proofs, aligning with the constructive ethos. He provided a comprehensive formulation in his 1984 monograph Intuitionistic Type Theory, based on earlier lectures, which established the judgmental structure, type formers, and rules for equality central to the system. Martin-Löf remains the primary architect of intuitionistic type theory, refining it over decades to address issues in predicativity and proof normalization. The theory builds on earlier developments in typed lambda calculi by Haskell Curry, William Alvin Howard, and Dana Scott. Curry's work in the 1930s and 1950s on combinatory logic laid groundwork for functional abstraction and type assignment, introducing correspondences between logical proofs and lambda terms. Howard extended this in 1969 with an unpublished manuscript articulating the isomorphism between natural deduction proofs and simply typed lambda calculus, now known as the Curry-Howard correspondence, which Martin-Löf incorporated to interpret intuitionistic proofs computationally. Scott's contributions in the late 1960s and 1970s on domain theory and models for lambda calculi influenced the semantic foundations, enabling categorical interpretations of dependent types in intuitionistic settings. Subsequent evolution shifted intuitionistic type theory from Martin-Löf's predominantly predicative variant, where types are defined without self-reference, to impredicative extensions, allowing quantification over all types including those being defined, to enhance expressiveness for higher-order logic. This progression influenced computer science applications, notably through Thierry Coquand and Gérard Huet's 1985 introduction of the Calculus of Constructions, an impredicative system extending Martin-Löf's framework with polymorphic universes for type dependency and abstraction. Coquand's PhD thesis formalized this calculus as a basis for proof assistants, bridging intuitionistic type theory with practical program verification.
In the 1990s, key milestones included advancements in proof theory, such as normalization proofs for extended type systems, and applications to category theory, where intuitionistic type theory was modeled in locally Cartesian closed categories to interpret dependent types semantically. Bengt Nordström, Kent Petersson, and Jan M. Smith provided an accessible exposition of Martin-Löf's theory in their 1990 textbook, emphasizing its programming implications and constructive proofs. These developments solidified intuitionistic type theory's role in formal verification and prepared the ground for homotopy type theory, with categorical models by researchers like Peter Dybjer exploring fibration categories for type universes.

Foundational Principles

Intuitionistic Logic Foundations

Intuitionistic type theory (ITT) is fundamentally grounded in intuitionistic logic, which rejects non-constructive proofs that do not provide explicit constructions for mathematical objects. In particular, intuitionism, as developed by L. E. J. Brouwer, discards the law of excluded middle (A \lor \neg A) because it assumes every mathematical statement is either provable or refutable without offering a method to decide between them, and similarly rejects double negation elimination (\neg\neg A \to A) since it would validate statements merely by showing their negation leads to contradiction, without constructing a direct proof. This stance, formalized by Arend Heyting, emphasizes that mathematical truth consists in the mental construction of proofs, ensuring all reasoning remains effective and verifiable. ITT provides a type-theoretic framework that embeds Heyting arithmetic, an intuitionistic formalization of first-order arithmetic, where logical connectives are interpreted constructively. In this system, implication (A \to B) corresponds to a function type whose elements transform any proof of A into a proof of B, conjunction (A \land B) to a product type containing proofs of both, and disjunction (A \lor B) to a sum type that specifies which disjunct is proven. This encoding aligns arithmetic operations with type-theoretic constructions, allowing ITT to embed Heyting arithmetic while preserving its intuitionistic properties, such as the absence of non-constructive axioms. Central to ITT's logical foundations is the Curry-Howard correspondence, which identifies propositions with types and proofs with terms inhabiting those types. Originating from Haskell Curry's work on combinatory logic and extended by William Howard to lambda calculus, this isomorphism interprets intuitionistic proofs as computational objects: for instance, a proof of A \to B is a lambda abstraction \lambda x^A . M^B that inhabits the function type A \to B. In ITT, as formulated by Per Martin-Löf, this correspondence fully integrates logic and computation, where the formation rules for types mirror those for propositions, and normalization of terms guarantees that every proof reduces to a canonical form. The Brouwer-Heyting-Kolmogorov (BHK) interpretation further elucidates these foundations by providing a constructive semantics for intuitionistic connectives. Under BHK, a proof of implication A \to B is a construction or method that, given any proof of A, produces a proof of B; a proof of conjunction A \land B consists of proofs of both components; and a proof of disjunction A \lor B includes a proof of one disjunct along with an indicator of which. This interpretation, rooted in Brouwer's intuitionism, Heyting's formalization, and Kolmogorov's problem-solving perspective, ensures that all proofs in ITT are explicit constructions, rejecting abstract existence claims. To maintain consistency, ITT adopts a predicative approach, defining types without self-referential quantification to prevent paradoxes such as Russell's. In predicative type theory, higher-level types are formed from lower-level ones without quantifying over collections that include themselves, using ramified hierarchies of universes where each universe U_i contains types but not itself, thus avoiding impredicative definitions that could lead to inconsistencies like the set of all sets. This predicativity aligns with intuitionistic principles by ensuring all definitions are justified by prior constructions, supporting the development of constructive mathematics without circularity.
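
The propositions-as-types reading of the connectives described above can be made concrete in a proof assistant. The following is a minimal sketch in Lean 4 (an illustrative choice of system, not part of the original text); implication, conjunction, and disjunction are handled exactly as the BHK interpretation prescribes, and all definition names are invented for the example.

```lean
-- Implication as a function type: a proof of A → B turns proofs of A into proofs of B.
def modusPonens (A B : Prop) (f : A → B) (a : A) : B := f a

-- Conjunction as a pair: a proof of A ∧ B packages a proof of each conjunct.
def andIntro (A B : Prop) (a : A) (b : B) : A ∧ B := ⟨a, b⟩

-- Disjunction as a tagged sum: using a proof of A ∨ B requires treating both cases,
-- and the tag records which disjunct was actually proven.
def orCommute (A B : Prop) (h : A ∨ B) : B ∨ A :=
  match h with
  | Or.inl a => Or.inr a
  | Or.inr b => Or.inl b
```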

Constructive Approach to Types

In intuitionistic type theory (ITT), types serve as constructive specifications for mathematical objects, where every element of a type must be explicitly constructed through a finite sequence of introduction rules that define its canonical forms. This approach ensures that inhabiting a type requires providing a method or program that yields a canonical element upon execution, thereby avoiding non-constructive existence proofs. As articulated by Martin-Löf, a set or type A is defined precisely by prescribing how its canonical elements are formed, along with rules for identifying equal canonical elements. This constructive stipulation aligns types with computational tasks, where proofs are not mere assertions but verifiable constructions. For each type former in ITT, the typing rules include introduction rules to build elements, elimination rules to use them in computations, and computation rules (often called β-reduction rules) that normalize terms to their unique canonical forms. These computation rules facilitate the reduction of composite terms, ensuring that proofs can be evaluated step-by-step to reach a normal form where no further simplifications apply. Moreover, this structure supports decidable equality for canonical elements in many cases, as equality is determined by direct comparison of their constructive origins rather than abstract axioms. Such rules enforce that all operations remain within the bounds of effective computability, distinguishing ITT from classical systems. ITT eschews axioms of infinity or non-constructive choice, requiring all structures to be built inductively from finite data through explicit type formers, thereby guaranteeing that every type and its inhabitants arise from finitary constructions. This inductive foundation prevents the introduction of uncomputable elements, as all infinities must be witnessed constructively via recursive definitions. In relation to realizability interpretations, types in ITT are inhabited by terms that act as realizers: explicit proofs or programs that constructively witness the truth of propositions, linking logical validity directly to computational realizability. The constructive approach yields key advantages, including canonical proofs that are unique up to definitional equality and strong normalization theorems, which show that every term reduces to a unique normal form in a finite number of steps, ensuring consistency and decidability of type checking. These properties make ITT particularly suitable for proof assistants and programming languages, as they provide a computational foundation where proofs behave as executable algorithms. Inductive types, such as those for natural numbers, exemplify this by being defined through finitary introduction rules that enable recursive constructions without invoking infinite axioms.

Basic Type Formers

Empty and Unit Types

In intuitionistic type theory (ITT), the empty type, often denoted as 0 or \bot, serves as the foundational type representing falsehood or the absurd, with no inhabitants. Its formation rule is straightforward: there are no premises required to form the type 0, establishing it as a basic set in the theory. Unlike other types, 0 lacks an introduction rule, meaning there are no canonical terms or constructors that can inhabit it, rendering it uninhabited. The elimination rule for 0, known as ex falso quodlibet, allows one to derive any type C from an assumption of a term c : 0; formally, if c \in 0, then for any family C(z) over z \in 0, one obtains R_0(c) \in C(c). Computationally, this eliminator has no associated reduction rules, since there are no canonical elements of 0 on which it could compute; its role is purely to propagate contradictions. Logically, 0 corresponds to the proposition \bot (falsity), enabling the inference of arbitrary propositions from falsehood in the Curry-Howard isomorphism. Complementing the empty type is the unit type, denoted as 1 or \top, which represents truth or the trivial proposition, inhabited by exactly one element. Its formation rule similarly requires no premises: 1 is a basic set. The introduction rule provides a single constant term, often written as * or 0_1, such that * : 1. Elimination proceeds via pattern matching on this sole inhabitant: given c : 1 and a term c_0 : C(*) for a family C over 1, one forms R_1(c, c_0) : C(c). The corresponding equality rule ensures computational behavior: R_1(*, c_0) = c_0 : C(*), making reductions immediate and deterministic. Thus, functions from 1 to another type A are equivalent to elements of A itself, as the only input is *, yielding the provided output directly. In logical terms, 1 embodies the tautology \top (truth), with * as its canonical proof. These types form the atomic building blocks for more complex constructions in ITT, such as products and sums, where 0 acts as a neutral element for sums and an absorber for products (e.g., 0 + A \simeq A and 0 \times A \simeq 0), while 1 acts as a neutral element for products (e.g., 1 \times A \simeq A). Their simplicity underscores the constructive nature of ITT, where types are defined by explicit rules for formation, inhabitation, and usage, without classical assumptions like the law of excluded middle.
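
As a concrete illustration, the empty and unit types correspond to Lean 4's Empty and Unit; the sketch below (definition names are illustrative, not a fixed API) shows ex falso quodlibet and the fact that a function out of the unit type is determined by a single value.

```lean
-- Ex falso quodlibet: from an (impossible) inhabitant of Empty, any type follows.
-- `nomatch c` performs the empty case analysis, playing the role of R₀.
def exFalso (C : Type) (c : Empty) : C := nomatch c

-- The unit type has exactly one canonical inhabitant, written () here.
def star : Unit := ()

-- A function out of Unit is fixed by one value of the codomain, mirroring 1 → A ≃ A.
def fromUnit (A : Type) (a : A) : Unit → A := fun _ => a
```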

Product and Sum Types

In intuitionistic type theory, the product type A \times B, also known as the Cartesian product, is formed whenever A and B are types. Elements of A \times B are introduced by pairing terms a : A and b : B to form (a, b) : A \times B. Elimination occurs through projection functions: for p : A \times B, the first projection \mathsf{proj}_1(p) : A and second projection \mathsf{proj}_2(p) : B, satisfying the computation rules \mathsf{proj}_1((a, b)) \equiv a and \mathsf{proj}_2((a, b)) \equiv b. These rules ensure that product types capture structured pairs without computational overhead in projections. Logically, the product type A \times B interprets the conjunction A \land B, where a proof of the conjunction consists of proofs of both components. In the non-dependent case, A and B do not depend on variables from each other, distinguishing these types from dependent variants like dependent pairs. For instance, the type of booleans can be represented as 1 + 1, where 1 is the unit type; this previews how sums build finite types, while natural numbers arise from more advanced inductive constructions. The sum type A + B, or disjoint union, is similarly formed when A and B are types. Introductions inject elements via \mathsf{inl}(a) : A + B for a : A or \mathsf{inr}(b) : A + B for b : B. Elimination uses case analysis: given w : A + B and terms c_1 : \Pi x : A . C(\mathsf{inl}(x)) and c_2 : \Pi y : B . C(\mathsf{inr}(y)) for some type family C over A + B, the eliminator \mathsf{case}(w, c_1, c_2) : C(w) computes as \mathsf{case}(\mathsf{inl}(a), c_1, c_2) \equiv c_1(a) and \mathsf{case}(\mathsf{inr}(b), c_1, c_2) \equiv c_2(b). This enforces exhaustive checking of both cases in proofs or programs. In logical terms, A + B corresponds to the disjunction A \lor B, where a proof of the disjunction provides evidence for exactly one disjunct, tagged by the injection. As with products, the non-dependence means A and B are fixed types, not varying with indices, though this extends to dependent sums in later developments. An example is the boolean type as the sum 1 + 1, yielding two inhabitants \mathsf{true} \equiv \mathsf{inl}(*) and \mathsf{false} \equiv \mathsf{inr}(*), where * : 1.
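
The following Lean 4 sketch (again with illustrative names) mirrors the rules above: pairing and projections for products, case analysis for sums, and the two-element type 1 + 1 standing in for the booleans.

```lean
-- Product: introduction by pairing, elimination by the two projections.
def swap (A B : Type) (p : A × B) : B × A := (p.2, p.1)

-- Sum (disjoint union): elimination by exhaustive case analysis on the injections.
def sumComm (A B : Type) : Sum A B → Sum B A
  | Sum.inl a => Sum.inr a
  | Sum.inr b => Sum.inl b

-- Booleans encoded as 1 + 1, with the injections tagging "which disjunct".
def Two : Type := Sum Unit Unit
def two_true  : Two := Sum.inl ()
def two_false : Two := Sum.inr ()
```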

Dependent Types

Dependent Function Types

In intuitionistic type theory, dependent function types, written as \Pi(x : A). B, extend the simply typed lambda calculus by permitting the codomain type B to vary with the specific value of the argument x drawn from the domain type A. This construct is fundamental to the theory's expressiveness, as it captures not only computational functions but also logical universals under the Curry-Howard correspondence. The formation rule for a dependent function type states that \Pi(x : A). B is a type provided A is a type and, for every term a : A, the substitution B[a/x] is also a type. Formally, in a context \Gamma, \frac{\Gamma \vdash A \text{ type} \quad \Gamma, x : A \vdash B \text{ type}}{\Gamma \vdash \Pi(x : A). B \text{ type}} This ensures well-formedness across the family of types indexed by elements of A. Equality of such types follows if the domains are equal and the codomains are pointwise equal under substitution. The introduction rule constructs a dependent function via lambda abstraction: a term \lambda x. b inhabits \Pi(x : A). B if, in the extended context, b : B. The typing judgment is \frac{\Gamma, x : A \vdash b : B}{\Gamma \vdash \lambda x. b : \Pi(x : A). B}, where B may depend on x. The elimination rule applies such a function to an argument a : A, yielding (\lambda x. b) \, a : B[a/x], which computationally reduces via \beta-reduction: (\lambda x. b) \, a \to b[a/x]. This reduction preserves typing and embodies the computational content of the function. Additionally, \eta-expansion equates any function f : \Pi(x : A). B with \lambda x. f \, x, ensuring a form of extensionality at the judgmental level in formulations that adopt the \eta-rule. Under the propositions-as-types interpretation, the dependent function type \Pi(x : A). B corresponds to universal quantification \forall x : A. B, where proofs are witnessed by constructive methods providing evidence for every instance. When the codomain B does not depend on x, the type simplifies to the non-dependent function type A \to B. Specializing further, if the domain A is the unit type 1, then \Pi(x : 1). B \cong B, aligning with the logical equivalence \top \to B \equiv B; more generally, implication A \to B is read as \Pi(x : A). B when A and B are propositions. A representative example is the polymorphic identity function, defined as I \equiv \lambda A. \lambda x : A. x : \Pi(A : \text{Set}). A \to A, which demonstrates dependent typing over the universe of types and provides a proof of reflexivity in the logical reading. In derivations, a term e of type \Pi(x : A). B is typically obtained by abstraction from a hypothetical judgment \Gamma, x : A \vdash b : B, and any such e can be \eta-expanded to \lambda x. e \, x.
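
In Lean 4 notation, a Π-type is written (x : A) → B x; the sketch below (definition names are illustrative) shows the polymorphic identity function, a dependent function read as a universal statement, and the degenerate non-dependent case.

```lean
-- The polymorphic identity function: a term of the Π-type (A : Type) → A → A.
def ident : (A : Type) → A → A := fun A x => x

-- Read as a proposition, a Π over a family of propositions is a universal statement:
-- here, a (trivial) proof that every natural number equals itself.
def allRefl : (n : Nat) → n = n := fun _ => rfl

-- When the codomain does not mention the argument, the Π-type is just A → B.
def constFun (A B : Type) (b : B) : A → B := fun _ => b
```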

Dependent Pair Types

In intuitionistic type theory, dependent pair types, known as Σ-types, extend the notion of pairs to cases where the type of the second component depends on the specific value of the first component. The formation rule for a Σ-type is as follows: if A is a type and B(x) is a family of types indexed over x : A, then \Sigma(x : A). B(x) is itself a type. This construction is due to Per Martin-Löf's foundational work on the theory. The introduction rule forms elements of a Σ-type as dependent pairs (a, b), where a : A and b : B(a), ensuring the second element inhabits the type determined by the first. Elimination is typically achieved through projection functions: the first projection \pi_1(p) extracts the first component of type A from a pair p : \Sigma(x : A). B(x), while the second projection \pi_2(p) extracts the second component of type B(\pi_1(p)). Computationally, these projections satisfy β-reduction rules: \pi_1(a, b) \equiv a and \pi_2(a, b) \equiv b, enabling definitional equality in type checking. An alternative elimination form uses a let-construct, such as \mathsf{let}\ (x, y) = p\ \mathsf{in}\ e, which substitutes x := \pi_1(p) and y := \pi_2(p) into an expression e of appropriate type, also governed by β-reduction. Logically, Σ-types interpret existential quantification in the Curry-Howard correspondence: a term of type \Sigma(x : A). B(x) corresponds to a proof of \exists x : A.\ B(x), where the pair provides both a witness a : A and evidence b : B(a). When B does not depend on x, the Σ-type reduces to the Cartesian product A \times B, representing conjunction A \land B. This dependence enables expressing total spaces of type families, such as the type of even natural numbers as \Sigma(n : \mathbb{N}). \mathsf{even}(n), where each inhabitant pairs a natural number with its evenness proof.
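
A Lean 4 sketch of Σ-types follows; Lean writes the Σ-type as (x : A) × B x, and the Payload family together with all definition names below are invented for the illustration. When the family is propositional, the same pattern appears as the existential quantifier.

```lean
-- A type family over Bool: the payload's type depends on the tag.
def Payload : Bool → Type
  | true  => Nat
  | false => String

-- Introduction: a dependent pair supplies a tag and a payload of the matching type.
def sampleNat    : (b : Bool) × Payload b := ⟨true, (42 : Nat)⟩
def sampleString : (b : Bool) × Payload b := ⟨false, ("hello" : String)⟩

-- Elimination: the type of the second projection depends on the first projection.
def tag     (p : (b : Bool) × Payload b) : Bool        := p.1
def payload (p : (b : Bool) × Payload b) : Payload p.1 := p.2

-- With a Prop-valued family the dependent pair is an existential:
-- a witness (k = 2) together with evidence that it works.
theorem fourIsEven : ∃ k : Nat, 4 = k + k := ⟨2, rfl⟩
```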

Identity and Equality

Identity Type Formation

In intuitionistic type theory (ITT), the identity type provides a means to express and reason about propositional equality between terms within a given type. The formation rule for the identity type, denoted \mathrm{Id}_A(a, b), states that if A is a type and a : A, b : A are terms, then \mathrm{Id}_A(a, b) is itself a type. This type is intended to contain proofs or "paths" witnessing that a equals b propositionally, distinguishing ITT's constructive approach from classical set theory, where equality is extensional. The introduction rule establishes reflexivity: for any term a : A, there exists a canonical term \mathrm{refl}_a : \mathrm{Id}_A(a, a), serving as the basic proof of equality of a term with itself. Unlike definitional equality, which is a judgment about syntactic identity built into the theory's rules, the identity type enables non-trivial propositional equalities that must be constructed explicitly, reflecting ITT's emphasis on constructive proofs. To eliminate the identity type and use such proofs, ITT employs the dependent eliminator J. Given a type family C(x, y, p) over x : A, y : A, p : \mathrm{Id}_A(x, y), and a term d : C(a, a, \mathrm{refl}_a), the rule J(C, d, a, b, p) : C(a, b, p) allows computation along any proof p : \mathrm{Id}_A(a, b). The key computation rule ensures reflexivity: J(C, d, a, a, \mathrm{refl}_a) \equiv d, where \equiv denotes judgmental equality, guaranteeing that the eliminator behaves correctly on the reflexive proof. This reflexivity axiom, introduced via \mathrm{refl}, forms the foundational property enabling inductive reasoning over equalities in ITT.
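
In Lean 4 the identity type is the built-in Eq, with rfl as the reflexivity proof and Eq.rec playing the role of the eliminator J; the sketch below writes out J in the "based" form used above (the name J and the universe variables are chosen for the example) and checks the computation rule on rfl.

```lean
universe u v

-- Reflexivity: the canonical proof that a term equals itself.
def reflexive {A : Sort u} (a : A) : a = a := rfl

-- J in based form: the motive C ranges over the right endpoint and the proof,
-- and the base case d handles the reflexive instance.
def J {A : Sort u} {a : A}
    (C : (b : A) → a = b → Sort v)
    (d : C a rfl) :
    (b : A) → (p : a = b) → C b p :=
  fun _ p => Eq.rec (motive := C) d p

-- Computation rule: applied to rfl, J returns the supplied base case (definitionally).
example {A : Sort u} {a : A} (C : (b : A) → a = b → Sort v) (d : C a rfl) :
    J C d a rfl = d := rfl
```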

Equality Proofs and Properties

In intuitionistic type theory, proofs of equality between terms a, b : A are inhabitants of the identity type \mathrm{Id}_A(a, b), and such proofs are eliminated using the J rule, which provides a dependent eliminator for the identity type. Specifically, for a dependent type P : \prod_{x,y : A} \mathrm{Id}_A(x, y) \to \mathcal{U}, where \mathcal{U} is a universe, and a base case d : P(a, a, \mathrm{refl}_a) with \mathrm{refl}_a : \mathrm{Id}_A(a, a) the reflexivity proof, the J rule yields a term J(d, b, p) : P(a, b, p) for any b : A and p : \mathrm{Id}_A(a, b). The computation rule ensures J(d, a, \mathrm{refl}_a) \equiv d, where \equiv denotes judgmental equality, guaranteeing that the elimination behaves correctly on reflexive proofs. This J rule serves as the foundation for deriving key propositional properties of equality. Symmetry, providing \mathrm{sym}(p) : \mathrm{Id}_A(b, a) from p : \mathrm{Id}_A(a, b), is constructed by applying J to the family C(x, y, q) \equiv \mathrm{Id}_A(y, x), using the base case \mathrm{sym}(\mathrm{refl}_a) \equiv \mathrm{refl}_a. Transitivity, yielding \mathrm{concat}(p, q) : \mathrm{Id}_A(a, c) from p : \mathrm{Id}_A(a, b) and q : \mathrm{Id}_A(b, c), is similarly obtained via J on the family C(x, y, r) \equiv \prod_{z : A} \mathrm{Id}_A(y, z) \to \mathrm{Id}_A(x, z), with an appropriate base case leveraging reflexivity. Other properties, such as reflexivity itself, follow directly from the introduction rule for identity types, and all such constructions preserve the intensional nature of the theory by not collapsing to judgmental equality in general. Equality in intuitionistic type theory distinguishes between propositional equality, given by terms of \mathrm{Id}_A(a, b), which represents a proof obligation that must be constructively witnessed, and judgmental (or definitional) equality a \equiv b : A, a meta-level relation enforced during type checking and reduction that identifies convertible terms syntactically. Propositional equality allows for multiple distinct proofs between the same terms, reflecting the constructive philosophy, whereas judgmental equality is stricter and decidable via normalization. This separation ensures that equality proofs are computationally meaningful without forcing all propositional equalities to be definitionally true, a feature central to the intensional variant of the theory. Congruence properties ensure that equality respects the structure of type formers; for instance, given f : A \to B and p : \mathrm{Id}_A(a, b), the action \mathrm{ap}_f(p) : \mathrm{Id}_B(f(a), f(b)) is derived by applying J to the family \lambda x \, y \, q . \mathrm{Id}_B(f(x), f(y)), with base case \mathrm{ap}_f(\mathrm{refl}_a) \equiv \mathrm{refl}_{f(a)}. Similar congruences hold for other type constructors, such as products and sums. The role of equality in substitution is facilitated by J, often specialized as the \mathrm{eq\_rect} principle: for a dependent family C : A \to \mathcal{U}, u : C(a), and p : \mathrm{Id}_A(a, b), \mathrm{eq\_rect}(C, u, p) : C(b) transports u along p, with computation \mathrm{eq\_rect}(C, u, \mathrm{refl}_a) \equiv u; this enables replacing equals by equals in arbitrary contexts, underpinning dependent elimination and pattern matching. Equality on dependent function (\Pi) and pair (\Sigma) types follows from these congruence and substitution principles.
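
These derivations can be replayed in Lean 4, where Eq.rec stands in for J; the definition names below (symm', trans', ap, transport) are chosen for the example rather than taken from any library.

```lean
-- Symmetry: induct on p with motive Id(y, a); the base case is rfl.
def symm' {A : Type} {a b : A} (p : a = b) : b = a :=
  Eq.rec (motive := fun y _ => y = a) rfl p

-- Transitivity: induct on the second proof, carrying the first along.
def trans' {A : Type} {a b c : A} (p : a = b) (q : b = c) : a = c :=
  Eq.rec (motive := fun z _ => a = z) p q

-- Congruence (ap): any function sends equal arguments to equal results.
def ap {A B : Type} (f : A → B) {a b : A} (p : a = b) : f a = f b :=
  Eq.rec (motive := fun y _ => f a = f y) rfl p

-- Transport (eq_rect): carry an inhabitant of C a along p to obtain one of C b.
def transport {A : Type} (C : A → Type) {a b : A} (p : a = b) (u : C a) : C b :=
  Eq.rec (motive := fun y _ => C y) u p

-- On rfl, each construction computes to its base case, e.g. for symmetry:
example {A : Type} (a : A) : symm' (rfl : a = a) = rfl := rfl
```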

Higher-Level Constructions

Inductive Type Definitions

Inductive types form a cornerstone of intuitionistic type theory (ITT), enabling the definition of recursively structured types through a uniform schema comprising formation, introduction (constructors), elimination (recursors), and computation rules. These types capture constructive notions of data structures, such as sequences or trees, by specifying the "generators" (constructors) that build elements and the principles for reasoning about them via recursion or induction. The formation rule for an inductive type declares it as an element of a universe of types, ensuring it is well-formed. For instance, the natural numbers type \mathbb{N} is introduced by the rule \mathbb{N} : \mathcal{U}, where \mathcal{U} denotes a universe. The introduction constructors for \mathbb{N} are zero, 0 : \mathbb{N}, and successor, \mathrm{succ} : \mathbb{N} \to \mathbb{N}, which generate all elements of \mathbb{N} by iterated application starting from zero. The elimination principle, or recursor, permits defining dependent functions from the inductive type to a type family by providing data for the base cases and step functions corresponding to each constructor. For natural numbers, the recursor is given by \mathrm{rec}_{\mathbb{N}} : \left( C : \mathbb{N} \to \mathrm{Set} \right) \to C(0) \to \left( \Pi n : \mathbb{N}.\ C(n) \to C(\mathrm{succ}(n)) \right) \to \Pi k : \mathbb{N}.\ C(k), where C is the motive (type family), the second argument covers the zero case, and the third provides the successor step. Computation is governed by \beta-reduction rules ensuring definitional equality: \mathrm{rec}_{\mathbb{N}}\ C\ d\ f\ 0 \equiv d, \mathrm{rec}_{\mathbb{N}}\ C\ d\ f\ (\mathrm{succ}(n)) \equiv f\ n\ (\mathrm{rec}_{\mathbb{N}}\ C\ d\ f\ n), together with a uniqueness (\eta-like) rule stating that any function on \mathbb{N} satisfying these equations coincides with the one given by the recursor. In the general schema, an inductive type A with constructors c_i : F_i \to A (for i in some index set, where each F_i is a type or dependent type, possibly involving occurrences of A) has formation A : \mathcal{U}, introduction rules c_i(x) : A for x : F_i, and a dependent eliminator \mathrm{rec}_A that takes a motive C : A \to \mathrm{Set} and, for each constructor c_i, a function d_i that maps any x : F_i, together with values of C at the recursive subterms of A occurring in x, to an element of C(c_i(x)); the eliminator then yields \Pi z : A . C(z). This ensures that elimination respects the inductive structure by providing computations for constructors using values from substructures. Computation rules apply \beta-reductions for each constructor, and a uniqueness rule ensures structural completeness. This schema extends to dependent eliminators for more expressive definitions. Representative examples illustrate the schema's versatility. For lists over a type A : \mathcal{U}, the formation is \mathrm{List}(A) : \mathcal{U}, with constructors \mathrm{nil} : \mathrm{List}(A) and \mathrm{cons} : A \to \mathrm{List}(A) \to \mathrm{List}(A); the recursor \mathrm{rec}_{\mathrm{List}} takes a motive C : \mathrm{List}(A) \to \mathrm{Set}, nil-case data, and a cons-step function, with \beta-rules \mathrm{rec}_{\mathrm{List}}\ C\ d\ e\ \mathrm{nil} \equiv d and \mathrm{rec}_{\mathrm{List}}\ C\ d\ e\ (\mathrm{cons}(a, l)) \equiv e\ a\ l\ (\mathrm{rec}_{\mathrm{List}}\ C\ d\ e\ l). Finite sets can be defined inductively as \mathrm{Fin}(n) for n : \mathbb{N}, with constructors building elements up to n via zero and successor, mirroring \mathbb{N} but bounded.
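
The schema can be exercised directly in Lean 4; in the sketch below (type and function names are illustrative), addition and length are defined through the generated recursors rather than pattern matching, and the β-rules hold definitionally. The noncomputable markers only tell Lean's compiler not to generate executable code for definitions built from raw recursors; the kernel still checks them.

```lean
-- Natural numbers: generated by zero and successor.
inductive N : Type where
  | zero : N
  | succ : N → N

-- Addition via the recursor: constant motive, zero case returns m,
-- successor case applies succ to the recursive result.
noncomputable def add (m : N) : N → N :=
  N.rec (motive := fun _ => N) m (fun _ ih => N.succ ih)

-- The computation (β) rules make these equations definitional:
example (m : N)   : add m N.zero = m := rfl
example (m n : N) : add m (N.succ n) = N.succ (add m n) := rfl

-- Lists over a parameter type, with nil and cons as constructors.
inductive MyList (A : Type) : Type where
  | nil  : MyList A
  | cons : A → MyList A → MyList A

-- Length via the recursor: nil case 0, cons case adds one to the recursive result.
noncomputable def length {A : Type} (l : MyList A) : Nat :=
  MyList.rec (motive := fun _ => Nat) 0 (fun _ _ ih => ih + 1) l
```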

Universe Hierarchies

In intuitionistic type theory, universes form a hierarchy of types that classify other types, enabling the construction of increasingly expressive structures while maintaining consistency. The base universe U_0 is the type containing codes for all "small" types, such as the natural numbers \mathbb{N} : U_0, which are those formed by a finite number of applications of primitive type formers like products and sums. Higher universes are defined recursively, with the formation rule ensuring U_i : U_{i+1} for each level i, allowing U_{i+1} to include codes for types in U_i as well as more complex constructions. This stratification permits the typing of type constructors that operate on whole universes: the list type former, for example, maps any small type A : U_i to \mathsf{list}(A) : U_i, but the former itself, as a function of type U_i \to U_i, is only an object of the next level U_{i+1}. Cumulativity is a key property of this hierarchy: if a type A : U_i, then A : U_j for all j > i, realized through decoding functions T_i that map codes in U_i to types and ensure compatibility across levels, such as T_{i+1}(t_i(a)) = T_i(a) where t_i embeds codes from U_i into U_{i+1}. This allows types to be flexibly assigned to the lowest possible universe while being usable in higher ones, supporting the typing of polymorphic functions and higher-kinded types without rigid level constraints. In some variants of the theory, U_0 is made closed under dependent products (\Pi-types) even for large domains, introducing limited impredicativity to enhance expressiveness while preserving predicativity overall. The hierarchy prevents paradoxes like Girard's paradox, which arises if a universe were allowed to contain itself (i.e., U : U), leading to an inconsistency via encoding of impredicative comprehension within the type system. By forbidding U_i : U_i and stratifying types, the theory avoids such self-reference, ensuring that universes themselves reside in strictly higher levels. One approach to modeling a cumulative "Type" over the hierarchy in extensions is via the indexed hierarchy with cumulative embeddings, allowing uniform treatment of type constructors across levels. Inductive types, such as those for natural numbers or lists, inhabit specific universes in this structure, with their definitions assigned to appropriate levels based on complexity.
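
Lean 4's universe hierarchy behaves in essentially this way and can serve as a quick illustration; the Code structure below is an invented example, not a library construct.

```lean
#check (Nat : Type)          -- a small type lives in the base universe
#check (Type : Type 1)       -- the base universe is an element of the next level
#check (Type 1 : Type 2)     -- and so on up the hierarchy

-- A type former keeps small types small, but its own type lives one level up:
#check (List : Type → Type)
#check (Type → Type : Type 1)

-- Packaging a small type as data forces the container one level up (Type 1),
-- since its field quantifies over Type.
structure Code : Type 1 where
  carrier : Type

universe u
-- Definitions are polymorphic over levels rather than over a "type of all types",
-- which is what rules out Girard-style U : U.
def idAtLevel (A : Type u) (a : A) : A := a
```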

Type Theory Rules

Judgements and Contexts

In intuitionistic type theory, the logical framework is built upon a system of judgements that assert the well-formedness of syntactic entities relative to a context, without a separate derivability judgement for proofs; instead, all assertions focus on structural validity and typing. Alongside the auxiliary judgement that a context Γ is valid (Γ ctx), the four primary judgement forms are: an expression A forms a type in context Γ (Γ ⊢ A type); an expression a inhabits type A in context Γ (Γ ⊢ a : A); Γ ⊢ A = B type (A equals B as types); and Γ ⊢ a = b : A (a equals b as elements of A). These judgements provide the basis for deriving more complex type-theoretic constructions, ensuring that all inferences respect contextual assumptions. Contexts in ITT represent sequences of variable declarations, starting with the empty context, which is always valid: () ctx. A context is extended by adding a fresh variable x bound to a type A, provided the prior context Γ is valid and A is a type therein: if Γ ctx and Γ ⊢ A type, then Γ, x : A ctx. This rule ensures that each variable's type is well-formed before it can be used in subsequent assumptions, maintaining the integrity of the typing environment. Structural rules like weakening and substitution underpin the manipulation of judgements across contexts. Weakening allows a valid typing to persist when irrelevant assumptions are added: if Γ ⊢ a : A and Γ, x : B, Δ ctx (with x not occurring free in a or A), then Γ, x : B, Δ ⊢ a : A. This rule formalizes the intuition that additional hypotheses do not invalidate prior conclusions. Substitution, conversely, enables the instantiation of hypothetical judgements by replacing a variable with a concrete term: if Γ, x : A, Δ ⊢ b : B and Γ ⊢ c : A, then Γ, Δ[c/x] ⊢ b[c/x] : B[c/x], where [c/x] denotes simultaneous replacement of all free occurrences of x by c. These rules are essential for modular reasoning and variable elimination in dependent settings. Dependent contexts arise when types in the sequence rely on preceding variables, enabling the expression of type families. For instance, a context such as Γ, x : A, y : B(x) ctx is valid if Γ ctx, Γ ⊢ A type, and Γ, x : A ⊢ B(x) type, allowing B to depend on the value of x. This dependency structure is crucial for ITT's expressive power, as it permits types to vary based on prior terms, forming the foundation for dependent function types and other higher-order constructions.
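
For reference, the context-formation and substitution rules just described can be displayed in the usual natural-deduction format (notation as in the surrounding text):

\[
\frac{}{()\ \mathrm{ctx}}
\qquad
\frac{\Gamma\ \mathrm{ctx} \qquad \Gamma \vdash A\ \mathrm{type}}{\Gamma, x : A\ \mathrm{ctx}}
\qquad
\frac{\Gamma, x : A, \Delta \vdash b : B \qquad \Gamma \vdash c : A}{\Gamma, \Delta[c/x] \vdash b[c/x] : B[c/x]}
\]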

Inference Rules Overview

In intuitionistic type theory (ITT), inference rules provide the syntactic foundation for deriving judgments about types and terms, ensuring constructive proofs and computations. These rules follow a structured schema for each type former, typically comprising formation (establishing when a type exists), introduction (constructing canonical terms), elimination (using terms to derive new terms), computation (relating introduction and elimination via definitional equality), and uniqueness (ensuring canonical forms or η-laws). This schema, introduced by Martin-Löf, applies uniformly across type constructors, promoting modularity and extensibility in the theory. The variable rule forms the basis for hypothetical reasoning in ITT: given a context \Gamma and a variable x of type A, it derives \Gamma, x : A \vdash x : A. This enables the extension of contexts with assumptions, essential for dependent types. For dependent function types (\Pi), the application rule allows deriving \Gamma \vdash e_1 \, e_2 : B[e_2 / x] if \Gamma \vdash e_1 : \Pi(x : A). B and \Gamma \vdash e_2 : A, where substitution [e_2 / x] replaces free occurrences of x in B. The abstraction rule correspondingly infers \Gamma \vdash \lambda x. e : \Pi(x : A). B provided \Gamma, x : A \vdash e : B, introducing lambda abstractions as canonical proofs of function types. These rules exemplify the general schema, with formation ensuring A and B(x) are types, introduction via abstraction, elimination via application, computation equating (\lambda x. e) \, e_2 definitionally to e[e_2 / x], and uniqueness via normalization to canonical forms. Elimination for dependent pair types (\Sigma) and inductive types employs a general pattern-matching eliminator. For a term c : \Sigma(x : A). B(x) and a motive C(c) with a branch d(x, y) : C((x, y)) where x : A and y : B(x), the rule derives \Gamma \vdash \mathsf{ind}_\Sigma(c, \lambda(x, y). d(x, y)) : C(c), covering cases via dependent pattern matching. Computation reduces the eliminator applied to an introduction form (e_1, e_2) to d(e_1, e_2), while uniqueness ensures eliminators normalize to canonical terms. This framework extends to inductives, where eliminators recurse over constructors. Conversion rules preserve typing under definitional equality: if \Gamma \vdash a : A' and \Gamma \vdash A' \equiv A, then \Gamma \vdash a : A. Such conversion rules, together with congruence rules for definitional equality, allow terms and types to be used interchangeably when they are computationally equal, thus supporting the theory's intensional equality. As seen briefly in the \Pi and \Sigma examples detailed in prior sections on dependent types, this schema ensures coherent derivations across the theory.
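
A compact display of the five-part schema for Π-types, collecting the rules quoted above and in the section on dependent function types:

\[
\frac{\Gamma \vdash A\ \mathrm{type} \qquad \Gamma, x : A \vdash B\ \mathrm{type}}{\Gamma \vdash \Pi(x : A).\, B\ \mathrm{type}}\ (\text{formation})
\qquad
\frac{\Gamma, x : A \vdash e : B}{\Gamma \vdash \lambda x.\, e : \Pi(x : A).\, B}\ (\text{introduction})
\]
\[
\frac{\Gamma \vdash e_1 : \Pi(x : A).\, B \qquad \Gamma \vdash e_2 : A}{\Gamma \vdash e_1\, e_2 : B[e_2/x]}\ (\text{elimination})
\qquad
\frac{\Gamma, x : A \vdash e : B \qquad \Gamma \vdash e_2 : A}{\Gamma \vdash (\lambda x.\, e)\, e_2 \equiv e[e_2/x] : B[e_2/x]}\ (\text{computation})
\]
\[
\frac{\Gamma \vdash f : \Pi(x : A).\, B}{\Gamma \vdash f \equiv \lambda x.\, f\, x : \Pi(x : A).\, B}\ (\text{uniqueness})
\]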

Semantics and Models

Categorical Interpretations

Categorical interpretations of intuitionistic type theory (ITT) provide a framework for modeling its syntax using structures from category theory, particularly locally Cartesian closed categories (LCCCs). In an LCCC \mathcal{C}, objects represent types, while morphisms between objects correspond to terms of those types. Contexts, such as a type A, are interpreted via slice categories \mathcal{C}/A, where dependent types over A become objects in this slice. This setup captures the basic structure of ITT without identity types, interpreting dependent function types \Pi_{x:A} B(x) as the right adjoint \Pi_f to the pullback functor f^* along a morphism f: X \to A (so that f^* \dashv \Pi_f), and dependent sum types \Sigma_{x:A} B(x) via composition in the slice category, which gives the left adjoint \Sigma_f \dashv f^*. To handle dependent types more fully, fibrations extend this model by displaying categories over a base category, where the total category encodes types and terms, and reindexing functors u^* along context morphisms u perform substitution. In display map categories (\mathcal{B}, \mathcal{V}), display maps represent typed projections, closed under pullback to ensure reindexing preserves structure, with comprehension categories further aligning types to these projections for a semantics of dependent products and sums. An intuitionistic perspective refines Seely's LCCC interpretation using categories with families (cwfs), where types in context \Gamma are objects in \mathcal{C}/\Gamma, and equality of types corresponds to isomorphisms in the slice, constructively validating the type rules. For identity types, categorical models require additional structure, such as categories equipped with weak factorisation systems on display maps, or fibred groupoids in which identity proofs are modeled as isomorphisms. In fibrations with suitable fibrewise finite limits, the type \mathrm{Id}_A(a, b) can be interpreted via the equalizer of the maps a, b : X \to A, with reflexivity and transport ensured by Cartesian lifts. Examples of such models include the set-theoretic model, which provides an extensional interpretation where propositional equality is read as equality of elements, validating the ITT rules through the locally Cartesian closed structure of the category of sets. Realizability toposes, constructed over partial combinatory algebras, offer intensional models that interpret types as assemblies and terms as realizers, preserving the computational character of ITT while satisfying its rules. Soundness of these interpretations holds when the category satisfies the required adjunctions and fibration properties: for instance, an LCCC validates the formation and introduction rules for \Pi- and \Sigma-types if the pullback functors have the appropriate adjoints preserving the structure, ensuring that type-theoretic derivations map to valid categorical morphisms. Similarly, fibrational models are sound for dependent types if reindexing satisfies the Beck-Chevalley condition, confirming the consistency of ITT within these categorical semantics.
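
The adjoint situation underlying the LCCC interpretation can be summarized as follows: for a morphism f : X \to A of the base category, pullback along f sits between the interpretations of Σ and Π,

\[
\Sigma_f \;\dashv\; f^{*} \;\dashv\; \Pi_f, \qquad \Sigma_f, \Pi_f : \mathcal{C}/X \to \mathcal{C}/A, \quad f^{*} : \mathcal{C}/A \to \mathcal{C}/X,
\]

where \Sigma_f is given by postcomposition with f (interpreting dependent sums) and \Pi_f is the right adjoint to pullback (interpreting dependent products).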

Extensional and Intensional Variants

Intuitionistic type theory admits two primary variants concerning the treatment of equality: extensional and intensional. In the extensional variant, propositional equality is identified with definitional equality, meaning that two terms are propositionally equal if and only if they are definitionally equal, often achieved through rules that enforce unique identity proofs for any type. This coincidence simplifies the semantics by collapsing distinctions between computational and propositional notions of sameness, for example, via eta-conversion rules for record types that ensure structural extensionality. However, this approach sacrifices computational properties: type checking becomes undecidable and terms need not normalize, complicating implementation in proof assistants. The intensional variant, as developed by Martin-Löf, treats propositional equality as a primitive type constructor, denoted Id_A(x, y), which allows for multiple distinct proofs of equality between the same terms, including proofs beyond the reflexivity constructor refl. Here, definitional equality remains separate and is governed by conversion rules limited to β-reduction and, in some formulations, η-expansion, without incorporating full propositional equality into the judgmental structure. This separation preserves decidability of type checking and ensures that terms normalize, making it suitable for computational implementations, but it requires additional axioms for extensional reasoning. A key principle bridging these variants is the Streicher axiom, also known as Axiom K, which postulates the uniqueness of identity proofs: for any type A and term a, every proof of Id_A(a, a) is propositionally equal to refl_a. This axiom, introduced by Streicher in the study of models of intensional type theory, enforces uniqueness of identity proofs without fully collapsing propositional and definitional equality, though its addition to the intensional theory yields set-level behavior where all types are h-sets. Extensional type theory offers simpler semantics and supports direct encoding of extensional concepts like function extensionality, facilitating certain mathematical constructions such as ordinals via Π-types. In contrast, the intensional variant provides a richer structure, enabling the development of homotopy type theory (HoTT), where identity types are interpreted as paths in a space, allowing higher-dimensional equalities and univalence. While extensionality eases model construction, such as in locally cartesian closed categories, intensionality better accommodates proof irrelevance and computational proof assistants like Agda and Coq.
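
In rule form, the two principles contrasted above are the equality reflection rule of the extensional theory and Streicher's Axiom K:

\[
\frac{\Gamma \vdash p : \mathrm{Id}_A(a, b)}{\Gamma \vdash a \equiv b : A}\ (\text{equality reflection})
\qquad
K : \prod_{a : A}\ \prod_{p : \mathrm{Id}_A(a, a)} \mathrm{Id}_{\mathrm{Id}_A(a, a)}(p, \mathrm{refl}_a).
\]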

Extensions and Implementations

Martin-Löf Theories

Martin-Löf theories refer to the family of dependent type theories developed by Per Martin-Löf primarily in the 1970s and 1980s as foundations for intuitionistic mathematics, emphasizing constructive proofs and computation. These theories evolved from early impredicative attempts to robust predicative systems, incorporating stratified universes to avoid inconsistencies such as Girard's paradox. The core motivation was to formalize constructive reasoning via the propositions-as-types correspondence, where types represent propositions and terms represent proofs. The original 1970s formulation, presented in 1972, established a predicative intuitionistic type theory with key type formers including dependent products \Pi x : A . B(x), dependent sums \Sigma x : A . B(x), disjoint unions A + B, and a universe V of small types closed under these operations but not containing itself. Equality was handled definitionally, without primitive identity types, and inductive types such as the natural numbers \mathbb{N} were defined via constructors together with recursion operators. This version avoided impredicativity by stratifying types, keeping its proof-theoretic strength within predicatively justifiable bounds. W-types, enabling general well-founded recursion over trees, were referenced conceptually but formalized later; they construct types (W x : A) B(x) from a base type A and family B : A \to \mathcal{U}, with introduction \mathsf{sup} : \prod_{a : A} \big( (B(a) \to (W x : A) B(x)) \to (W x : A) B(x) \big) and elimination rules for induction. The 1984 formulation refined these ideas into a comprehensive system, introducing primitive identity types \mathsf{Id}_A(a, b) with formation, introduction (\mathsf{refl}_a : \mathsf{Id}_A(a, a)), and elimination rules to express propositional equality constructively. It featured cumulative universes \mathcal{U}_i for higher types, ensuring predicativity by allowing quantification only over lower levels (\mathcal{U}_i : \mathcal{U}_{i+1}, but not vice versa). W-types were explicitly included with their rules, supporting recursion along well-founded orders, and the theory served as a foundation for constructive set theory, such as Peter Aczel's interpretation of constructive set theory (CZF) via well-founded trees modeled as W-types. This predicative approach enabled formalization of ordinals and transfinite induction without assuming the axiom of infinity impredicatively. An impredicative variant appears in Martin-Löf's early 1971 draft, where the universe V satisfied V \in V, allowing definitions to quantify over collections including themselves, but this was shown inconsistent by Girard. Later discussions consider impredicative encodings where the base universe \mathcal{U}_0 is closed under \Pi-types over arbitrary domains, akin to the impredicative sort of the Calculus of Constructions, to encode fragments constructively while maintaining overall predicativity in higher universes. Extensions to the core theory include equality reflection in extensional variants, adding a rule that propositional equality implies judgmental equality (p : \mathsf{Id}_A(a, b) \vdash a \equiv b : A), collapsing the two notions and simplifying proofs but sacrificing decidability of type checking. Inductive families extend W-types to parameterize inductives over indices, defining families like length-indexed vectors \mathsf{Vec}(A, n) where constructors respect dependencies, as formalized with general rules for formation, introduction, and elimination. Some formal systems incorporate quotient types to construct types from equivalence relations, with rules ensuring canonicity in indexed contexts.
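
A minimal Lean 4 sketch of W-types as just described (the W, Arity, and WNat names are invented for the example; Lean's own libraries use different definitions): trees whose shapes come from A and whose branching arities come from B, with the natural numbers recovered as a special case.

```lean
-- Well-founded trees: a node has a shape a : A and one subtree per element of B a.
inductive W (A : Type) (B : A → Type) : Type where
  | sup : (a : A) → (B a → W A B) → W A B

-- Encoding ℕ as a W-type: the shape is a Bool tag, zero has no subtrees
-- (Empty arity) and successor has exactly one (Unit arity).
def Arity : Bool → Type
  | false => Empty   -- zero: no children
  | true  => Unit    -- successor: one child

def WNat : Type := W Bool Arity

def wzero : WNat := W.sup false (fun (e : Empty) => nomatch e)
def wsucc (n : WNat) : WNat := W.sup true (fun _ => n)
```
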
Martin-Löf type theories (MLTT) constitute a specific, predicative family within the broader landscape of intuitionistic type theories (ITT), which encompasses various constructive systems; MLTT is distinguished by its explicit meaning explanations for judgments and its stratified universes, setting it apart from impredicative or non-stratified ITT variants.

Practical Systems and Applications

Several proof assistants implement variants of intuitionistic type theory, enabling the formal development of mathematical proofs and programs. Agda, for instance, is a dependently typed functional programming language and proof assistant based on Martin-Löf type theory (MLTT), which emphasizes total functions and constructive proofs. It supports interactive proof development and has been used to formalize advanced mathematical structures. Coq is another prominent proof assistant, built on the Calculus of Constructions (CoC), an impredicative extension of intuitionistic type theory that allows for more flexible type definitions while maintaining constructivity. Coq's tactic-based interface facilitates step-by-step proof construction and has powered large-scale formalizations. Lean, developed at Microsoft Research, incorporates dependent types inspired by intuitionistic type theory to support theorem proving in mathematics, with a focus on usability for both experts and learners. In addition to proof assistants, intuitionistic type theory influences practical programming languages that integrate verification capabilities. Idris is a purely functional language with full dependent types, enabling the specification of program behaviors at the type level and supporting practical effects such as input/output while enforcing totality checking. This makes it suitable for writing verified software where types encode invariants. F* is a proof-oriented programming language that combines dependent types with SMT solvers for automated verification, targeting high-assurance applications such as cryptographic libraries. It allows programmers to extract verified code to languages like OCaml or F# for execution. Applications of these systems span formal mathematics and software verification. In Agda, libraries like HoTT-Agda formalize homotopy type theory, providing a foundation for synthetic homotopy theory and univalent foundations. Coq has been instrumental in projects like CompCert, a formally verified C compiler whose correctness is proven with respect to a mathematical semantics, ensuring no mistranslations during optimization. Such tools enable the development of certified software, where proofs guarantee properties like safety and correctness in critical systems, including embedded software and security protocols. Despite these advances, implementing intuitionistic type theory presents challenges, particularly in termination checking to ensure all programs halt, as required for consistency. Systems like Agda and Idris employ sophisticated termination analyzers, but handling complex recursive definitions remains computationally intensive. Large-scale proofs also demand efficient infrastructure, as managing thousands of definitions and lemmas can lead to performance bottlenecks in type checking and proof search. Post-2020 developments have integrated machine learning to enhance proof development based on intuitionistic type theory. Tools like Lean Copilot use large language models to suggest proof tactics and automate routine steps, improving productivity in formal mathematics while addressing the limitations of fully manual proof construction. This AI assistance has shown promise in generating verifiable proofs, though it requires careful validation to maintain rigor.
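
As a small illustration of the kind of invariant-carrying types these systems support, the following Lean 4 sketch defines length-indexed vectors; all names (Vec, head, v2) are chosen for the example and do not refer to any particular library API.

```lean
-- Vectors indexed by their length: the index is part of the type.
inductive Vec (A : Type) : Nat → Type where
  | nil  : Vec A 0
  | cons : {n : Nat} → A → Vec A n → Vec A (n + 1)

-- head is only defined for vectors of length n + 1, so the empty case cannot
-- arise and no runtime check is needed: the type system rules it out.
def head {A : Type} {n : Nat} (v : Vec A (n + 1)) : A :=
  match v with
  | Vec.cons a _ => a

-- The type records that each cons makes the vector one element longer.
def v2 : Vec Nat 2 := Vec.cons 1 (Vec.cons 0 Vec.nil)

#eval head v2   -- 1
```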
