Coq
Coq, now known as the Rocq Prover, is an interactive theorem prover and proof assistant that enables users to formalize mathematical definitions, develop machine-checked proofs of theorems, and extract certified programs from specifications.[1] It is based on the polymorphic, cumulative variant of the Calculus of Inductive Constructions, a dependently typed functional programming language that integrates higher-order logic with constructive type theory.[1] Developed primarily at Inria in France, it supports interactive proof construction via tactics, automated decision procedures, and connections to external solvers, while allowing program extraction to languages like OCaml, Haskell, and Scheme.[1]

The origins of Coq trace back to 1984, when Thierry Coquand and Gérard Huet initiated work on the Calculus of Constructions at INRIA-Rocquencourt, with the first prototype released in 1989.[2] The system evolved through extensions, including Christine Paulin's 1991 addition of inductive types, and has since been implemented in OCaml under the GNU LGPL v2.1 license, with contributions from over 200 developers worldwide.[1] Following a community poll, a renaming to the Rocq Prover was announced in 2023 and took effect with version 9.0 in 2025; the new name honors the system's roots at INRIA-Rocquencourt and evokes the mythical bird roc, and the tool remains backward-compatible with prior Coq versions.[1]

Coq has been instrumental in landmark formal verifications, including Georges Gonthier's machine-checked proof of the Four Color Theorem in 2005, which demonstrated its capacity for complex mathematical reasoning.[3] It also facilitated the full formalization of the Feit-Thompson Odd Order Theorem in 2012, a major result in group theory requiring extensive collaboration over six years.[4] In software verification, the CompCert project produced the first industrially useful, formally verified C compiler, ensuring semantic preservation across optimizations and targeting critical systems.[5] These achievements, along with awards like the
2013 ACM Software System Award, underscore Coq's role in advancing certified programming and mathematical foundations.[1]

History
Origins and early development
The development of Coq originated in 1984 as a research project led by Thierry Coquand and Gérard Huet at INRIA's Formel team in Rocquencourt, France.[6] This initiative aimed to implement advanced type theories for formal verification and proof checking, initially using the CAML programming language to build a prototype proof assistant.[6] The project drew foundational influences from earlier systems, including the Automath project of the 1960s, which pioneered automated proof verification in type theory, and Nuprl, a constructive type theory system developed in the 1970s and 1980s that emphasized computational interpretations of proofs.[6][7] Early versions of the system, prior to the 1990s, concentrated on core implementations of type theory, particularly the Calculus of Constructions introduced by Coquand in 1985 as a framework for encoding mathematical definitions and proofs.[6] The first public release came in 1989 as the CoC system (version 4.10), providing a basic proof checker alongside a vernacular language for mathematical expressions, marking the transition from theoretical experimentation to a usable tool for researchers.[6] By the late 1980s, the project had evolved to support interactive theorem proving, with the name "Coq"—standing for Calculus of Constructions—adopted around 1990 to reflect its growing capabilities in guiding users through proof construction via tactics and automation.[6][7] A pivotal advancement occurred through the contributions of Christine Paulin-Mohring, who extended the Calculus of Constructions to the Calculus of Inductive Constructions around 1991, incorporating primitive inductive types to better handle recursive data structures and proofs by induction.[6] This extension, implemented in subsequent early versions, enhanced Coq's expressiveness for practical formalizations while maintaining a small trusted kernel for reliability.[8] Her work, building on collaborations with Coquand and others, solidified Coq's role as an 
interactive system for mechanized mathematics during its formative phase.[9]

Major releases and evolution
Coq's development began with its first public release in May 1989 as version 4.10, a mature implementation supporting the Calculus of Constructions (CoC) with features like program extraction.[2] By 1991, version 5.6 introduced full support for inductive types and an improved user interface, marking a significant evolution toward practical proof development and establishing the Calculus of Inductive Constructions (CIC) as the foundational logic.[2] Version 6.1, released in November 1996, represented a major architectural shift with a port to Objective Caml, enhancing portability and performance while solidifying CIC as the core logical foundation.[2] Subsequent updates in the 6.x series, including version 6.3 in December 1999, expanded the tactic system for interactive proofs and introduced foundational libraries for common mathematical structures, facilitating broader adoption in formal verification.[2] The 7.x series, starting with version 7.0 in March 2001, rearchitected the system for modularity and extensibility, incorporating a new proof engine and enhanced extraction mechanisms.[2] Version 8.0, released in April 2004, introduced XML output for structured proof export and improved Unicode support in the CoqIDE interface, enabling better integration with external tools and multilingual mathematical notation.[10] Coq 8.5, released in January 2016, introduced a rewritten proof engine with backtracking tactics, universe polymorphism, and native computation; the SSReflect tactic language for small-scale reflection, developed for the Mathematical Components library, was later bundled with the main distribution.[11] More recent versions, such as 8.18 in September 2023, emphasized performance optimizations in the kernel and greater stability for plugins, reducing compilation times and improving reliability for large-scale developments.[12] Coq's evolution culminated in the stable release of version 9.0.0 on March 12, 2025, which coincided with the official renaming to The Rocq Prover.[13] Subsequent minor releases include 9.0.1 in September 2025 and 9.1.0
in September 2025, focusing on bug fixes, guard checking improvements, and enhancements like sort polymorphism support and better extraction.[14] By 2025, the project boasted over 200 contributors, coordinated by Matthieu Sozeau since 2016, reflecting a vibrant open-source community driving innovations in type theory and proof automation.[15] The rename, announced on October 11, 2023, addressed longstanding naming ambiguities—such as unintended connotations in English—and honored the system's origins at Inria Rocquencourt while evoking solidity and mythological strength.[1] The transition included codebase updates to replace "Coq" references, a new website at rocq-prover.org, and commitments to backward compatibility, ensuring seamless continuity for existing projects and libraries.[1]

Theoretical foundations
Calculus of Inductive Constructions
The Calculus of Inductive Constructions (CIC) forms the theoretical core of the Rocq Prover (formerly Coq) proof assistant, providing a dependently typed lambda calculus that supports both programming and proof construction. Introduced by Christine Paulin-Mohring as an extension of the Calculus of Constructions (CoC), CIC incorporates primitive support for inductive types to facilitate the definition of recursive data structures and functions, addressing limitations in CoC's ability to handle common mathematical objects like natural numbers efficiently.[16] This extension preserves CoC's expressiveness while enabling a richer set of constructions grounded in Martin-Löf-style type theory.

CIC is structured as a pure type system augmented with a hierarchy of universes to ensure logical consistency. The sorts comprise Prop for propositions, Set for basic computational types, and an infinite ascending chain of universes Type_i (for i ∈ ℕ) with Type_i : Type_{i+1}. This stratification avoids paradoxes such as Girard's paradox, which arises in systems that admit a single self-containing universe (a Type : Type rule).[17] Prop is impredicative: a proposition may quantify over all propositions, including the one being defined, which supports compact encodings of logical connectives under the Curry-Howard correspondence. In contrast, Set and the Type_i universes are predicative, restricting quantification to previously defined levels to preserve consistency.

Inductive types in CIC are specified by a collection of introduction rules (constructors) that generate terms of the type, along with eliminators for recursion and case analysis, and computation rules defining their behavior. Constructors provide the base and recursive cases, such as zero and successor for natural numbers, while eliminators include fixpoints for recursive definitions and pattern-matching primitives for dependent elimination, embodying the principle of induction.
The system ensures well-foundedness through a strict positivity condition on recursive occurrences in constructors. Computation proceeds via a confluent reduction relation encompassing β-reduction for lambda application, (λx:A. t) a ⇝ t[a/x]; δ-reduction for unfolding defined constants; ι-reduction for reducing a match on a constructor and unrolling fixpoints; and ζ-reduction for expanding let-bindings. For instance, the natural numbers are declared as an inductive type in Set: Inductive nat : Set := O : nat | S : nat -> nat. Addition is then defined recursively via a fixpoint: Fixpoint add (n m : nat) : nat := match n with O => m | S n' => S (add n' m) end. Evaluating an application of add uses ι-reduction on the match and δ-reduction to unfold the fixpoint once its principal argument begins with a constructor.

Type theory and propositions as types
The Rocq Prover's (formerly Coq) type system is founded on dependent type theory, where types can depend on values, enabling expressive specifications that integrate computation and proof. In this framework, the type of a term may incorporate parameters from prior terms, allowing for definitions like predicates that vary with inputs. This dependency is realized through Pi-types for dependent functions, formally expressed as Πx : A. B(x) or equivalently ∀x : A. B(x), where the codomain B depends on the value x of type A. Complementing these are Sigma-types for dependent pairs, Σx : A. B(x), which pair a value x with a dependent component of type B(x). These constructs form the basis for encoding complex logical structures within types, as detailed in the foundational work on the Calculus of Constructions extended with inductives.[18]

Central to the system's logical encoding is the Curry–Howard isomorphism, which establishes a correspondence between proofs in intuitionistic logic and programs in typed lambda calculi, treating propositions as types and proofs as inhabitants of those types. Under this isomorphism, a proof of a proposition P is a term of type P, transforming logical deduction into type checking. Implications A → B map to function types, where a proof is a function accepting evidence of A and producing evidence of B. Conjunctions A ∧ B correspond to product types, with proofs as pairs of subproofs. This identification enables the Rocq Prover to unify programming and proving, where constructing a term verifies both computational correctness and logical validity, as articulated in the system's type-theoretic design.[19]

To maintain consistency and stratify the type hierarchy, the Rocq Prover employs a cumulative hierarchy of universes, whose stratification blocks the kind of inconsistency Girard identified in systems with a self-containing type of all types.
Universes are indexed levels Type_i, satisfying Type_i : Type_{i+1}, preventing self-referential paradoxes such as Girard's paradox that would arise from a direct Type : Type rule. Cumulativity introduces subtyping, where Type_i <: Type_{i+1}, allowing terms typed at lower universes to be used at higher ones via conversion rules, facilitating polymorphic definitions without excessive universe lifting. This mechanism ensures well-foundedness while supporting impredicative quantification over Prop, the universe for propositions.

Equality in the Rocq Prover adopts the Leibniz notion, defined inductively as eq : ∀ A : Type, A → A → Prop, with the sole constructor eq_refl : ∀ A (x : A), eq A x x, capturing that two terms are equal if they are convertible. This propositional equality supports reasoning via the J eliminator, a dependent elimination principle that inducts over equality proofs: given a property P depending on an element and an equality proof, and given e : eq A x y, J allows deriving P y e from P x (eq_refl A x), enabling substitution and congruence in proofs. The J rule, rooted in Martin-Löf type theory, underpins tactics for rewriting and simplification, ensuring that equality behaves as an equivalence relation reducible to reflexivity where applicable.[20]

System architecture
Kernel and trust model
The kernel of Coq serves as the trusted computing base, implementing a type checker for the Calculus of Inductive Constructions (CIC) to verify the correctness of all definitions, theorems, and proofs submitted to the system. This kernel is deliberately minimal, encompassing only the essential rules for type formation, introduction, and elimination, along with computation mechanisms such as beta-reduction and iota-reduction for inductive types.[21] Type checking relies on conversion rules that ensure two terms are equal if they reduce to the same normal form, typically using weak head normal form (WHNF) for efficiency during elaboration, with full normalization available for precise equivalence verification.[22] The kernel performs no proof search or automation; instead, it incrementally checks proof terms constructed by higher-level components, ensuring that each term inhabits its declared type without relying on external assumptions beyond CIC. To prevent issues like variable name capture, the kernel employs de Bruijn indices for representing variable bindings, where variables are referenced by their depth in the context rather than by explicit names. This anonymous indexing facilitates sound substitution and reduction operations central to CIC's higher-order logic. Constants in the kernel are treated as opaque, meaning their internal definitions are not inspected during checking unless explicitly unfolded, which helps maintain modularity and trust boundaries.[21] Axioms are minimized to those inherent in CIC, such as the impredicative Prop sort and the rules for inductive definitions; users may introduce additional axioms via declarations, but these must be explicitly managed to avoid undermining consistency, with tools like Print Assumptions revealing dependencies.
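As a concrete sketch of axiom tracking (the axiom and lemma names here are illustrative, not part of any standard library), one can declare a classical axiom, use it in a proof, and then ask the system which assumptions the proof depends on:

```coq
(* A user-declared axiom: excluded middle, not provable in plain CIC *)
Axiom classic : forall P : Prop, P \/ ~ P.

(* A lemma whose proof uses the axiom *)
Lemma double_neg : forall P : Prop, ~ ~ P -> P.
Proof.
  intros P H.
  destruct (classic P) as [p | np].
  - exact p.
  - exfalso. apply H. exact np.
Qed.

(* Lists every axiom the proof term depends on; here it reports [classic] *)
Print Assumptions double_neg.
```

Print Assumptions thus makes the trust boundary explicit: a development is only as trustworthy as the axioms this command reveals.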
Coq's trust model adheres to the de Bruijn criterion, confining trust to the small kernel codebase, while treating tactics, plugins, and the elaboration layer as untrusted. Users must trust the kernel's implementation for soundness, as it alone certifies proofs; errors in tactics might produce incorrect proof terms, but the kernel rejects ill-typed ones. Extraction of Coq developments to languages like OCaml or Haskell is supported for executable code generation but remains untrusted, as the extraction mechanism itself is not formally verified. The plugin system, which allows OCaml extensions for custom tactics, can extend the trusted base if plugins interact directly with the kernel, necessitating careful validation for formal developments; using verified plugins, or revalidating compiled proof files with the standalone checker coqchk, mitigates this risk. This architecture enables reliable proof checking even with unverified libraries, as long as their compiled .vo files pass kernel scrutiny.[21]
Modules and vernacular commands
Coq's module system extends the core language with hierarchical namespaces, enabling the organization of large-scale developments into modular, reusable components while maintaining type safety through the kernel's verification. Modules are declared using the Module vernacular command, which delimits a scope for definitions, lemmas, and other elements; for instance, the syntax Module M. (* declarations *) End M. encapsulates content under the identifier M, allowing qualified access like M.x for an inner definition x. Module types, defined via Module Type MT. (* specifications *) End MT., specify interfaces that modules must satisfy, facilitating abstraction and signature-based programming. The Include command incorporates an entire module or module type into the current one, promoting compositionality without explicit rebinding; this is essential for building layered structures, such as extending a base module with additional functionality.
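A minimal sketch of an interface and an implementation (the names MONOID and NatAdd are chosen for this example, not taken from the standard library):

```coq
(* A module type specifying an interface *)
Module Type MONOID.
  Parameter t : Type.
  Parameter unit : t.
  Parameter op : t -> t -> t.
  Axiom unit_l : forall x, op unit x = x.
End MONOID.

(* A module implementing it with natural-number addition;
   <: checks conformance to the interface *)
Module NatAdd <: MONOID.
  Definition t := nat.
  Definition unit := 0.
  Definition op := Nat.add.
  Lemma unit_l : forall x, op unit x = x.
  Proof. intros x. reflexivity. Qed.
End NatAdd.

(* Qualified access to the inner definitions *)
Check NatAdd.op NatAdd.unit 3.
```

Note that a Lemma in the implementation can discharge an Axiom declared in the interface: the module type only fixes the statement, not how it is established.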
Functors provide a mechanism for parametric modules, analogous to functors in ML-style module systems, allowing modules to depend on other modules as parameters. The syntax Module F (P : MT). (* body using P *) End F. defines F as a functor taking a module P conforming to module type MT, enabling generic implementations like functorial data structures (e.g., a list module parameterized over an equality type). Applying a functor to an argument produces a concrete module: Module L := List_eq (EqNat)., where List_eq is a functor and EqNat implements its parameter. This system supports modularity in proofs and specifications, with the kernel ensuring that functor applications preserve type correctness.
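The following sketch makes the functor pattern concrete; the identifiers EQ_TYPE, MakeMem, EqNat, and NatMem are hypothetical names invented for the example:

```coq
(* A parameter interface: a type with a boolean equality test *)
Module Type EQ_TYPE.
  Parameter t : Type.
  Parameter eqb : t -> t -> bool.
End EQ_TYPE.

(* A functor: membership testing for lists, generic over EQ_TYPE *)
Module MakeMem (E : EQ_TYPE).
  Fixpoint mem (x : E.t) (l : list E.t) : bool :=
    match l with
    | nil => false
    | cons y l' => orb (E.eqb x y) (mem x l')
    end.
End MakeMem.

(* An implementation of the parameter interface *)
Module EqNat <: EQ_TYPE.
  Definition t := nat.
  Definition eqb := Nat.eqb.
End EqNat.

(* Functor application yields a concrete module *)
Module NatMem := MakeMem EqNat.
Eval compute in NatMem.mem 2 (cons 1 (cons 2 nil)).
```

Each application is type-checked by the kernel, so NatMem only exists if EqNat genuinely satisfies EQ_TYPE.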
Vernacular commands constitute Coq's top-level language for managing developments, including declarations and imports. Key commands include Definition id : type := term. for introducing constants or functions, and Theorem id : statement. Proof. (* tactics *) Qed. (or Defined. to keep the proof term transparent) for theorems, with variants like Lemma and Fact differing only in naming conventions. The Require qualid. command loads a precompiled module (.vo file) from the loadpath into the environment, while Import qualid. brings its names into the current namespace for unqualified use; Require Export qualid. extends this by making the imported module's exports available transitively to dependents. Sections create local scopes via Section S. (* local declarations *) End S., where variables (e.g., Variable v : T.) are discharged at the end, generalizing inner definitions over them—useful for parameterized lemmas without global pollution. Additionally, Opaque constant. and Transparent constant. control reduction behavior: opaque constants are not unfolded during conversion, which can speed up proof search, whereas transparent ones allow full reduction for evaluation.
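Section discharge can be illustrated with a small sketch (the section and definition names are illustrative):

```coq
Section Composition.
  (* Local parameters, visible only inside the section *)
  Variables A B C : Type.
  Variable f : A -> B.
  Variable g : B -> C.

  (* Inside the section, [compose] mentions the variables directly *)
  Definition compose (x : A) : C := g (f x).
End Composition.

(* After [End], the definition is generalized over the variables it used;
   Check should report roughly:
   compose : forall A B C : Type, (A -> B) -> (B -> C) -> A -> C *)
Check compose.
```

Only the variables actually used by a definition are abstracted at discharge, which keeps generalized statements minimal.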
Coq developments are structured as compilation units in .v files, each representing a module that can be compiled independently to .vo object files (with lighter .vos/.vok variants supporting split checking workflows) using the coqc tool. The coq_makefile utility generates Makefiles from a list of .v files or a _CoqProject descriptor, automating dependency resolution and parallel builds for efficient project management. Command-line flags like -Q dir path map a physical directory dir to a logical module path path, requiring fully qualified imports such as From path Require submodule.; this is crucial for maintaining qualified names in multi-file projects, contrasting with -R, which additionally allows partially qualified requires within the root path. These features ensure scalable builds while integrating seamlessly with the module system for namespace control.
Specification language
Gallina syntax and semantics
Gallina is the specification language of the Rocq Prover (previously known as Coq), providing a means to define mathematical objects, programs, and logical statements within the framework of the Calculus of Inductive Constructions (CIC). It supports the expression of both constructive proofs and specifications through a pure functional language that emphasizes type safety and logical consistency.[1] The syntax of Gallina terms is hierarchical and supports dependent types, allowing types to depend on values. Basic terms include sorts, which form the foundational universes: SProp, the sort of strictly proof-irrelevant propositions; Prop, the impredicative sort of logical propositions whose inhabitants are proofs; Set, the sort of computational types whose inhabitants can be executed; and Type, a predicative, cumulative hierarchy of universes sitting above the other sorts. Cumulativity yields the inclusions Prop ⊆ Type and Set ⊆ Type, a stratified universe structure that ensures well-founded typing.[23] Product types in Gallina capture both function spaces and lambda abstractions with dependencies. The dependent product, denoted forall (x : A), B, is the type of functions that, for every x of type A, return a result of type B (where B may depend on x); this corresponds to the Π-type in dependent type theory. The corresponding abstraction is fun (x : A) => t, forming a lambda term that binds x in t and has type forall (x : A), B. Multiple binders can be grouped, as in forall (x1 : A1) (x2 : A2), B, for nested dependencies.[8]
Applications combine terms via juxtaposition: t u applies the function t to argument u, with left-associativity and higher precedence than products. Type casts explicitly ascribe a type to a term: t : T checks that t has type T up to convertibility, often used for disambiguation or to trigger coercions. Pattern matching is handled by the match expression, which destructs a term t of inductive type I: match t as x in I return P with | C1 p1 => t1 | ... | Cn pn => tn end, where the as clause binds the scrutinee as x for use in the return type, the in clause binds the indices of the inductive family I, P is the dependent return type, the patterns pi match the constructors Ci of I, and the ti are the branches. This syntax abstracts over primitive eliminators.
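A small sketch contrasting a plain match with a dependent one (the definitions is_zero and nat_case are invented for illustration):

```coq
(* Simple match: the result type is constant, so the return clause
   can be omitted *)
Definition is_zero (n : nat) : bool :=
  match n with
  | O => true
  | S _ => false
  end.

(* Dependent match: the [as]/[return] clauses let the result type
   vary with the scrutinee, so each branch is typed differently *)
Definition nat_case (n : nat) :
    match n with O => unit | S _ => nat end :=
  match n as m return match m with O => unit | S _ => nat end with
  | O => tt        (* this branch must produce a [unit] *)
  | S k => k       (* this branch must produce a [nat] *)
  end.
```

In the dependent version, the O branch has type unit while the S branch has type nat, which is only expressible because the return type mentions the matched value.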
Concrete syntax in Gallina is customizable through notations, enhancing readability for domain-specific constructs. For instance, Infix "+" := Nat.add declares + as infix notation for addition on natural numbers, allowing expressions such as 3 + 4 instead of Nat.add 3 4. Notations support levels for precedence, associativity, and binders, parsed via a grammar that extends the core syntax without altering semantics.
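A sketch of a notation declaration (the token +++ is an arbitrary choice for the example):

```coq
(* Declaring a custom infix notation with precedence and associativity *)
Notation "x +++ y" := (Nat.add x y) (at level 50, left associativity).

(* Left associativity makes this parse as (1 +++ 2) +++ 3 *)
Check (1 +++ 2 +++ 3).
```

Because notations affect only parsing and printing, the term elaborated here is exactly Nat.add (Nat.add 1 2) 3.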
Semantically, Gallina realizes a dependently typed lambda-calculus as part of CIC, where every term must be typable and reduction preserves types. The core is a pure lambda-calculus extended with dependent types, inductive definitions, and universes. Definitional equality between terms relies on convertibility: two terms are equal if they normalize to the same form under beta-reduction (lambda application), delta-reduction (constant unfolding), iota-reduction (constructor matching), and zeta-reduction (let-binding expansion). Strong normalization guarantees that reduction terminates, which in turn makes type checking decidable. This semantics underpins the Rocq Prover's kernel, enforcing logical consistency.[8]
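The reduction relations can be observed directly with the Eval command; a sketch (the constant three is invented for the example):

```coq
Definition three := 3.

(* beta: reduces the function application *)
Eval cbv beta in ((fun x : nat => x + 1) 2).

(* delta: unfolds the defined constant [three] *)
Eval cbv delta in three.

(* full reduction combining all rules *)
Eval compute in (2 + 2).

(* Convertibility in action: both sides normalize to the same term,
   so the proof is just reflexivity *)
Goal 2 + 2 = 4.
Proof. reflexivity. Qed.
```

The final Goal illustrates why convertibility matters: reflexivity succeeds precisely because 2 + 2 and 4 are definitionally equal.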
Definitions in Gallina facilitate modular term construction. Non-recursive bindings use let-in: let x := t in u, which binds x to t within u and reduces by substitution, enabling local definitions without global impact. Recursive functions employ Fixpoint f (x1 : A1) ... (xn : An) {struct xi} : A := body, where the {struct xi} annotation specifies the structurally decreasing argument; the Rocq Prover enforces termination through a guardedness check requiring every recursive call to be made on a structurally smaller subterm of that inductive argument. Postulates are declared via Axiom x : A, introducing uninterpreted constants; their use is discouraged because they bypass proof obligations and can compromise consistency if the postulated statements are not actually admissible.
Pattern matching in Gallina can be understood through the generated type-specific eliminators (e.g., the _rec schemes for elimination into Set, the _rect schemes for dependent elimination into Type, and the _ind schemes for Prop). The match construct generates a proof term by case analysis on constructors, with the as, return, and in clauses ensuring dependency preservation: as x binds the scrutinee for use in P, return P types the result dependently on x, and in I specifies the inductive family and its indices. This provides an intuitive, high-level interface to CIC's elimination rules, where the eliminator's type encodes the induction principle.
Inductive and coinductive definitions
In the Rocq Prover's Gallina language, inductive definitions allow users to specify recursive data types that represent finite structures built from a finite number of constructors. The syntax uses the Inductive keyword followed by the type name, its sort (typically Set for types or Prop for propositions), optional parameters or indices, and a list of constructors, each with their arguments. For instance, the natural numbers are defined as Inductive nat : Set := O | S (n : nat).[24][25] This definition introduces two constructors: O for zero and S for the successor function, enabling the construction of all finite natural numbers through nested applications, such as S (S O) representing 2.[24]
Upon declaration, the Rocq Prover automatically generates recursion and induction principles for the defined type to support functional definitions and proofs by structural induction. For the nat type, this includes the recursor nat_rect, which has the type forall P : nat -> Type, P 0 -> (forall n : nat, P n -> P (S n)) -> forall n : nat, P n, allowing recursive computations over natural numbers analogous to those written with Fixpoint add (n m : nat) {struct n} : nat := ....[24] Similarly, the induction principle nat_ind facilitates proofs by induction on the structure of nat. A classic example of a parameterized inductive type is the polymorphic list: Inductive list (A : Set) : Set := nil | cons (x : A) (l : list A).[24][25] Here, A is a parameter fixed for the entire type, while constructors build lists like cons 1 (cons 2 nil) of type list nat.[24]
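The generated principle nat_ind is what the induction tactic applies behind the scenes; a standard sketch:

```coq
(* A proof by structural induction on [n], driven by [nat_ind] *)
Lemma add_n_O : forall n : nat, n + 0 = n.
Proof.
  induction n as [| n' IH].
  - reflexivity.                      (* base case: 0 + 0 = 0 *)
  - simpl. rewrite IH. reflexivity.   (* step: S n' + 0 = S n' *)
Qed.
```

The as clause names the pieces produced by each constructor: nothing for O, and the predecessor n' together with the induction hypothesis IH for S.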
Inductive types distinguish between parameters, which are fixed across the whole definition and shared by all constructors, and indices, which may vary from constructor to constructor and carve out a family of types. For example, vectors (length-indexed lists) are defined as Inductive vector (A : Set) : nat -> Set := Vnil : vector A 0 | Vcons (x : A) (n : nat) (v : vector A n) : vector A (S n).[25] In this case, A is a parameter, while the length of type nat serves as an index, ensuring that Vcons nat 1 0 (Vnil nat) has type vector nat 1. To maintain well-foundedness and prevent non-terminating definitions, the Rocq Prover enforces a strict positivity condition: recursive occurrences of the inductive type in constructor arguments must appear in strictly positive positions, and in particular must not occur to the left of an arrow, as in I -> False. Violations, such as attempting Inductive I : Prop := c : (I -> False) -> I, are rejected by the kernel.[24]
Mutual inductive definitions enable the simultaneous declaration of multiple interdependent types using the with keyword, which is useful for structures like trees and forests. An example is Inductive tree (A : Set) : Set := node : A -> forest A -> tree A with forest (A : Set) : Set := leaf : A -> forest A | nodef : tree A -> forest A -> forest A.[24][25] This defines trees (tree) whose nodes carry forests (forest), which in turn can contain subtrees, with mutual recursion principles generated automatically. Nested inductive definitions allow constructors to mention the inductive type nested inside another inductive type, as in the rose trees Inductive rtree (A : Set) : Set := rnode : A -> list (rtree A) -> rtree A, where the positivity condition is checked through the nesting to ensure termination.[24]
Coinductive definitions, in contrast, support the construction of infinite or potentially infinite structures by relaxing the well-foundedness requirement. The syntax mirrors inductive definitions but uses CoInductive; for example, CoInductive stream (A : Set) : Set := Cons { hd : A; tl : stream A } defines infinite streams with a head element and a tail stream, using record notation for clarity.[26] Unlike inductives, such coinductive types are characterized by destructors rather than eliminators: the projections hd : forall A, stream A -> A and tl : forall A, stream A -> stream A observe one piece of an infinite object at a time, and no induction principle is generated. Since version 8.9, this projection-based ("negative") presentation has been the preferred style for coinductive types.[26] Co-recursive values are defined with CoFixpoint, for example CoFixpoint ones : stream nat := {| hd := 1; tl := ones |}, and must satisfy a guardedness condition ensuring productivity: every corecursive call must occur directly under a constructor or as the body of a projection.[26][25]
Proof development
Interactive theorem proving process
In Coq, the interactive theorem proving process begins when a user declares a theorem, lemma, or goal using commands such as Theorem, Lemma, or Goal, which transitions the system into proof mode.[27] In this mode, Coq displays the current proof state, consisting of one or more subgoals, each showing the local context (hypotheses and variables above a dashed line) and the conclusion to prove below it.[27] Tactics are then applied sequentially to transform, split, or solve these subgoals, gradually building a proof tree until all goals are discharged. Upon successful completion, the Qed command exits proof mode, verifies the proof, and registers an opaque proof term in the environment; alternatively, Save can be used to override the name, or Defined to keep the term transparent for further use.[27]
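A minimal session following this cycle (the theorem name is illustrative):

```coq
Theorem modus_ponens : forall P Q : Prop, (P -> Q) -> P -> Q.
Proof.
  intros P Q H HP.   (* moves the quantified variables and premises
                        into the local context *)
  apply H.           (* reduces the goal Q to proving P *)
  exact HP.          (* closes the goal with the hypothesis *)
Qed.                 (* the kernel checks the proof term and
                        registers it opaquely *)
```

After each tactic, the displayed proof state shows the shrinking set of subgoals; Qed succeeds only if the assembled term actually type-checks against the stated theorem.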
Navigation and revision within the proof are facilitated by dedicated commands to manage the proof state without restarting the entire session. The Back command undoes the effect of the last one or more sentences (defaulting to one), which can include tactics or vernacular commands within proof mode, though it treats completed proofs (post-Qed) as atomic units.[28] The Restart command restores the current proof to its initial goal, discarding all intermediate steps while preserving the overall session state.[27] For broader resets, Reset removes all objects introduced since a specified identifier or returns to the session's initial state with Reset Initial.[29] Structured proofs benefit from bullet notation, where symbols like -, +, and * organize subgoals hierarchically within the script (e.g., - intros. for the first subgoal, followed by nested bullets for branches), improving readability and enabling focused editing in interactive environments like CoqIDE or Proof General.
The process balances manual control with varying degrees of automation to suit different proof complexities. Manual steps involve explicit tactic applications to guide the proof precisely, ensuring transparency and verifiability, while semi-automated tactics like auto attempt to solve goals through a bounded search over hint databases, chaining basic steps such as intros, apply, and reflexivity without user intervention for simple cases.[30] After verification with Qed, the resulting proof term—a lambda term of the Calculus of Inductive Constructions—can be inspected via Show Proof or reused in definitions and libraries, and whole developments can be extracted to functional code in languages like OCaml or Haskell, though the primary focus remains on the certified term itself.[27]
A typical workflow starts with stating the theorem in Gallina, entering proof mode, and using foundational tactics to decompose the goal: intros introduces universal quantifiers or implications as hypotheses from the context; refine partially applies a term to instantiate existentials or create subgoals for arguments; apply unifies the goal with the conclusion of a lemma or hypothesis, potentially generating subgoals for premises. For case analysis on inductive types, inversion decomposes a hypothesis by generating cases and injecting relevant facts, while discriminate refutes goals by exploiting injectivity or distinct constructors. This sequence refines the proof tree until subgoals are solved by trivial tactics like assumption or reflexivity, at which point auto may be invoked to handle residual simplifications. Custom automation can be achieved via the Ltac language for more advanced search strategies.
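Two short sketches of these foundational tactics in action (lemma names are illustrative):

```coq
(* apply unifies the goal Q with the conclusion of HPQ, leaving P as a subgoal. *)
Lemma modus_ponens_ex : forall P Q : Prop, (P -> Q) -> P -> Q.
Proof. intros P Q HPQ HP. apply HPQ. exact HP. Qed.

(* discriminate refutes an equation between the distinct constructors S and O. *)
Lemma succ_neq_zero_ex : forall n : nat, S n <> 0.
Proof. intros n H. discriminate H. Qed.
```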
Ltac tactic language
Ltac is a domain-specific programming language embedded in Coq for defining custom tactics, enabling users to automate and script complex proof search strategies beyond simple interactive commands.[31] It provides a declarative yet expressive syntax that combines pattern matching on proof states, backtracking search, and composition of primitive tactics, making it suitable for creating reusable proof automation procedures.[31] Unlike Coq's specification language Gallina, which focuses on defining terms and proofs, Ltac operates at the level of proof goals and contexts, allowing dynamic manipulation during interactive theorem proving.[31] The core syntax of Ltac revolves around defining named tactics with the form Ltac ident := tactic_expr., where tactic_expr can be a primitive tactic, a combinator, or a more elaborate expression.[31] A key construct is the match goal expression, which pattern-matches against the current proof goal (including hypotheses and conclusion) using clauses of the form | pattern => tactic, terminated by end, enabling conditional execution based on goal structure; patterns can include wildcards (_) for irrelevant parts and constraints on term shapes.[31] Backtracking is integral to Ltac's search-oriented semantics: the try combinator attempts a tactic and succeeds even if it fails; the left-biased choice operator ||, as in t1 || t2, runs t1 and falls back to t2 if t1 fails or makes no progress; and the backtracking choice t1 + t2 can return to t1's remaining alternatives if a subsequent tactic fails.[31] More generally, alternatives are expressed with first [t1 | t2 | ... ], which tries the tactics in sequence until one succeeds without error, supporting depth-first search with backtracking over proof branches.[31]
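A small sketch of these constructs (the tactic name is illustrative):

```coq
(* Solve the goal from a syntactically identical hypothesis,
   or fall back to a trivial equality check. *)
Ltac find_hyp :=
  match goal with
  | [ H : ?G |- ?G ] => exact H      (* some hypothesis proves the goal *)
  | [ |- _ = _ ]     => reflexivity  (* otherwise try a trivial equality *)
  end.

Goal 1 + 1 = 2.
Proof. find_hyp. Qed.
```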
Built-in tactics form the foundation of Ltac expressions, providing atomic operations on proof states.[31] For instance, intros automatically introduces quantified variables or hypotheses from the goal's premises into the local context, optionally binding names.[31] The apply tactic unifies a given term (such as a lemma or hypothesis) with the goal's conclusion, generating subgoals for preconditions; it supports both forward and backward reasoning styles.[31] reflexivity solves goals of the form x = x by appealing to the reflexivity of equality (the eq_refl constructor), often closing trivial equalities.[31] Similarly, assumption searches the context for a hypothesis that exactly matches the goal, discharging it immediately if found.[31]
Ltac includes powerful combinators for sequencing and controlling tactic execution.[31] The repeat tactic applies the inner tactic zero or more times until it fails, useful for iterative simplifications like normalization.[31] do n tactic precisely executes the tactic n times, succeeding only if all iterations complete without failure, which aids in bounded automation.[31] Building on backtracking primitives, try tactic wraps any tactic to make failure non-fatal, allowing continuation in search branches, while first (as noted earlier) enables ordered disjunction for strategy selection.[31]
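These combinators can be composed as in the following sketch (tactic names are illustrative):

```coq
(* One "step" tries each basic tactic in order; repeat iterates
   until no step makes progress. *)
Ltac step := first [ reflexivity | assumption | intro ].
Ltac solve_simple := repeat step.

Goal forall P : Prop, P -> P.
Proof. solve_simple. Qed.
```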
Context management in Ltac allows fine-grained manipulation of hypotheses and goals during proof development.[31] The pose tactic introduces a local definition ident := term into the context without generating subgoals, useful for naming intermediate computations.[31] assert creates a new subgoal of a specified type (optionally naming the resulting hypothesis), splitting the proof into proving the assertion and then using it.[31] For cleanup, clear ident removes a hypothesis from the context if it is no longer needed, and unfold ident replaces occurrences of a constant (like a definition) with its body in the goal or selected hypotheses, facilitating simplification.[31]
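A brief sketch combining these context-management tactics:

```coq
Goal 2 + 2 = 4.
Proof.
  pose (k := 2 + 2).   (* local definition; no new subgoal *)
  assert (H : k = 4).  (* opens a subgoal for the asserted fact *)
  { reflexivity. }     (* k unfolds to 2 + 2, which computes to 4 *)
  clear H.             (* drop the hypothesis, no longer needed *)
  reflexivity.
Qed.
```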
Error handling in Ltac ensures robust automation by controlling failure propagation in backtracking contexts.[31] The fail tactic explicitly causes a branch to fail, optionally with a custom message (e.g., fail 0 "explanation" to distinguish failure levels), which is crucial for pruning unviable search paths without crashing the overall tactic.[31] This integrates with Ltac's exception-like semantics, where uncaught failures trigger backtracking to alternatives.[31]
A representative example of Ltac for basic automation is the definition:

Ltac auto_solved := repeat (reflexivity || discriminate || intros).

This tactic iteratively attempts reflexivity (for trivial equalities), discriminate (to refute equations between distinct constructors), or intros (to introduce premises) until no further progress is possible, providing a simple solver for certain decidable goals.[31] Rocq also provides Ltac2 (distributed with Coq since version 8.11), a typed, functional successor to Ltac designed for more reliable and performant tactic development. Ltac2 offers better error messages, static typing, and an OCaml-like syntax, and is recommended for new custom tactics; it coexists with Ltac, and existing Ltac code can still be invoked from Ltac2 scripts.[32]
Standard library
Core mathematical components
The Arith module in Coq's standard library provides foundational arithmetic operations based on Peano's axioms, implemented using the inductive type nat for natural numbers, defined as nat : Set := O | S (n : nat). Addition (plus) and multiplication (mult) are defined recursively on this type, with plus n m computing the sum by recursing on its first argument n, and mult n m by repeated addition. Key lemmas establish their algebraic properties, including commutativity of addition (plus_comm : ∀ n m : nat, n + m = m + n) and associativity of multiplication (mult_assoc : ∀ n m p : nat, n * (m * p) = (n * m) * p), all proved within the intuitionistic logic of Coq. The module also defines orders on natural numbers, with le as the less-than-or-equal relation (n ≤ m ↔ ∃ k, n + k = m) and lt as strict less-than (n < m ↔ n + 1 ≤ m), accompanied by trichotomy via the compare function, where compare n m returns Eq if n = m, Lt if n < m, or Gt if n > m, with the lemma compare_spec proving these cases are exhaustive and mutually exclusive.
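For instance (a sketch using the modern Nat-prefixed lemma name, which aliases the classic plus_comm):

```coq
Require Import Arith.

(* 0 + n reduces definitionally because plus recurses on its first
   argument, whereas n + 0 = n requires induction; the library lemma
   packages the general commutativity proof. *)
Lemma comm_ex : forall n m : nat, n + m = m + n.
Proof. intros n m. apply Nat.add_comm. Qed.
```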
Coq's standard library adopts intuitionistic logic by default, defining core connectives as inductive types in the Logic module, such as conjunction (and A B) with constructors conj : A → B → A ∧ B and disjunction (or A B) with or_introl : A → A ∨ B and or_intror : B → A ∨ B. Classical axioms, like the law of excluded middle (classic : ∀ P : [Prop](/page/Prop), P ∨ ¬P), are available but not assumed, requiring explicit import from the Classical module for their use. Relations form a central part, with equality (eq A) as an inductive family (eq_refl : ∀ x : A, x = x) supporting symmetry (eq_sym : ∀ A (x y : A), x = y → y = x) and transitivity (eq_trans : ∀ A (x y z : A), x = y → y = z → x = z), and logical equivalence (iff A B) defined as (A → B) ∧ (B → A) with reflexivity (iff_refl : ∀ A, A ↔ A), symmetry, and transitivity. Congruence properties follow from equality, via lemmas like f_equal : ∀ (A B : Type) (f : A → B) (x y : A), x = y → f x = f y, enabling substitution in functions.[33]
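These inductive definitions are used directly in proofs, as in this sketch:

```coq
(* Case analysis on the two constructors of or, rebuilding with the other one. *)
Lemma or_swap : forall A B : Prop, A \/ B -> B \/ A.
Proof.
  intros A B [HA | HB].  (* or_introl case binds HA; or_intror binds HB *)
  - right; exact HA.
  - left; exact HB.
Qed.
```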
Finite sets in the standard library are handled through the Sets module, where sets are abstractly represented as predicates over a universe type (Ensemble U := U → Prop), and finiteness is captured inductively via the Finite predicate, with constructors Empty_is_finite : Finite (Empty_set U) for the empty set and Union_is_finite : ∀ A : Ensemble U, Finite A → ∀ x : U, ~ In U A x → Finite (Add U A x) for adding an element outside the set. In practice, finite sets are often concretized using lists (from the Lists module) as multisets or deduplicated collections, or via custom inductive types for specific domains, leveraging decidable equality for membership tests. Basic functions include eq_dec for types with decidable equality, such as natural numbers (eq_dec : ∀ n m : nat, {n = m} + {n ≠ m}), which enables efficient set operations like inclusion and equality by case analysis on equality decisions.
The Sorting submodule supplies mathematical primitives for verifying ordering algorithms, defining Sorted (equivalently, LocallySorted) for lists in which each element is related to its immediate successor (informally, ∀ i, i < length l - 1 → R (nth i l d) (nth (S i) l d)), and StronglySorted for the global order relating every earlier element to every later one (∀ i j, i < j < length l → R (nth i l d) (nth j l d)); for transitive relations the two notions coincide (Sorted_StronglySorted : Transitive R → Sorted R l → StronglySorted R l). These support correctness proofs for insertion sort, a recursive algorithm that builds a sorted list by inserting each element into a sorted prefix while preserving permutation and order: the insertion function insert satisfies Permutation (insert x l) (x :: l) and, for sorted l, Sorted R (insert x l), leading to a full correctness statement of the form Sorted R (insertion_sort l) ∧ Permutation (insertion_sort l) l. Binary search on sorted lists relies on these orders and the lt relation from Arith, with correctness proved via invariants ensuring the search interval halves while maintaining ∀ y in low..mid, y ≤ target and ∀ y in mid+1..high, target < y, terminating by well-founded induction on nat and yielding exact membership via trichotomy.[34]
Data structures and utilities
The Coq standard library provides the List module, which defines polymorphic lists and essential operations for functional programming tasks. The list type is inductively defined in Init.Datatypes as Inductive list (A : Type) : Type := nil | cons (x : A) (l : list A), supporting operations such as map for applying a function to each element, filter for selecting elements satisfying a predicate, and fold_left and fold_right for iterative accumulation. Additional utilities include lemmas for permutations via the Permutation relation, which captures list reorderings preserving elements, and sorting functions in the Sorting submodule specified with properties like stability and correctness. The FinFun module complements lists by handling functions over finite domains, with key results establishing that for functions f : A -> A on a finite type A, injectivity, bijectivity, and surjectivity are equivalent.
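A few illustrative computations with these operations (a sketch; Compute evaluates terms in place):

```coq
Require Import List.
Import ListNotations.

Compute map S [1; 2; 3].                (* successor of each element *)
Compute filter Nat.even [1; 2; 3; 4].   (* keep only the even elements *)
Compute fold_left Nat.add [1; 2; 3] 0.  (* sum via accumulation *)
```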
Dependent vectors in Coq address length-indexed data structures, defined in the Vectors module as Inductive t (A : Type) : nat -> Type := nil : t A 0 | cons : A -> forall n : nat, t A n -> t A (S n). This encoding ensures type-level length enforcement, enabling operations like append that concatenate two vectors of lengths m and n into one of length m + n, accompanied by proofs of correctness such as associativity. The nth function retrieves the element at an index of type Fin.t n, a type with exactly n inhabitants, so out-of-bounds access is ruled out statically, facilitating verified array-like computations.
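A short sketch of the dependently typed interface (assuming the standard library's Vector and Fin modules):

```coq
Require Import Vector.
Import VectorNotations.

(* The type records the length 3. *)
Definition v3 : Vector.t nat 3 := [1; 2; 3].

(* Indices have type Fin.t 3, so only positions 0, 1, 2 can be named. *)
Compute Vector.nth v3 Fin.F1.  (* first element *)
```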
String handling in the standard library relies on ASCII characters via the Ascii module, where ascii is an inductive type packing eight booleans into a Set, allowing literal notation like 'A' for the uppercase A character. Strings are implemented as lists of ascii in the String module, with utilities for concatenation (++), length computation, substring extraction, and comparison, though full Unicode support requires external libraries.
For relational utilities, the Relations submodule offers operators like clos_trans for the transitive closure of a relation R, generating the smallest transitive relation containing R, and reflexive_closure for adding reflexivity. Reflexive-transitive closure (clos_refl_trans) combines both, supporting automation via lemmas such as equivalence proofs between closure definitions and compatibility with union or composition, which aid in defining graph reachability or inductive invariants.
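As a sketch, two-step reachability under a successor relation can be derived from the closure constructors (the constructor names t_step and t_trans follow the standard library's Relation_Operators):

```coq
Require Import Relations.

(* A one-step relation on nat; its transitive closure is the usual <. *)
Definition succ_rel (n m : nat) : Prop := m = S n.

Example reach_0_2 : clos_trans nat succ_rel 0 2.
Proof.
  apply t_trans with (y := 1); apply t_step; reflexivity.
Qed.
```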
Applications
Formal software verification
Coq has been extensively applied in formal software verification to establish the correctness of programs with respect to their specifications, leveraging its ability to prove properties such as functional correctness, memory safety, and termination. This involves defining program semantics within Coq's logical framework and constructing machine-checked proofs that the implementations satisfy these semantics, often using higher-order logic to model imperative behaviors. Key techniques include program logics tailored for verification, such as separation logic, and tools that bridge Coq with deductive verification platforms. These efforts have enabled the certification of real-world software components, from compilers to cryptographic libraries, ensuring that verified properties hold in the extracted executable code.[35] A cornerstone of Coq's role in software verification is its program extraction mechanism, which translates Coq definitions into executable code in functional languages while preserving the computational content and discarding proof terms. Extraction targets include OCaml, Haskell, and Scheme, allowing verified algorithms to be compiled into efficient binaries; for instance, the process erases logical annotations to produce standalone programs that behave equivalently to their Coq counterparts for computational parts. This feature has been foundational since Coq's early development, enabling the deployment of certified extractors that maintain semantic preservation, as formalized in the system's reference manual. For imperative targets like C, extraction often proceeds indirectly via intermediate tools that refine the output, ensuring compatibility with low-level systems programming. Plugins extending Coq with advanced program logics, such as the CFML (Characteristic Formulae for ML) framework, facilitate verification of imperative OCaml programs using separation logic.
CFML embeds separation logic into Coq's type theory, allowing proofs of local reasoning about heap-manipulating code through characteristic formulae that describe program behaviors modularly. This approach has been applied to verify algorithms involving mutable state, such as list manipulations and tree traversals, by proving postconditions that ensure absence of errors like null pointer dereferences. A seminal application is the CompCert project, initiated in 2005 and still actively maintained, which uses Coq to formally verify an optimizing C compiler from the Clight subset of C to assembly for architectures like PowerPC, ARM, and x86; the proofs establish semantic preservation across all optimization passes, guaranteeing that compiled code behaves as specified by the C semantics.[36][5] Interactive program proofs in Coq are supported through integrations like Why3, a platform for deductive verification that generates verification conditions (VCs) from annotated programs and discharges them using Coq's interactive proving capabilities. Why3's WhyML language allows specifying pre- and postconditions for imperative code, which are then transformed into Coq lemmas for proof; this enables step-by-step verification of algorithms, combining automated solvers with manual tactics for complex cases.
Notable examples include the certification of a generic parser interpreter using Coq's extraction and proof infrastructure, ensuring correct parsing of structured inputs like markup languages without runtime errors.[37] Similarly, the KreMLin tool extends extraction pipelines to produce verified C code from Coq (and related systems), applied in projects like high-assurance cryptographic implementations where proofs guarantee constant-time execution and absence of side-channel vulnerabilities.[38] In recent advancements, Coq has been used to verify bounds on the Busy Beaver function, an uncomputable function tied to the halting problem, demonstrating its capacity for deciding non-halting behaviors in computational models. In 2024, researchers employed Coq to formally prove the halting status of all 5-state, 2-symbol Turing machines, establishing that the fifth Busy Beaver value is 47,176,870 steps; this involved machine-checked deciders and exhaustive enumeration, providing the first complete certification of this long-standing open problem and highlighting Coq's utility in bounding undecidable problems through finite verification.[39]
Mathematical formalizations
Coq has enabled the formal verification of several landmark theorems in pure mathematics, highlighting its role in constructing machine-checked proofs of complex non-computational statements. These formalizations often involve developing extensive supporting libraries for underlying mathematical structures, such as graph theory or group theory, to ensure every step is rigorously justified within the system's logical framework. By 2025, such efforts have contributed to formal libraries exceeding 150,000 lines of code, as seen in the Mathematical Components project, which underpins many of these proofs.[40] One of the earliest and most influential formalizations is the Four Color Theorem, proved by Georges Gonthier in Coq between 2004 and 2005. The theorem asserts that any planar map can be colored using at most four colors such that no two adjacent regions share the same color, and the Coq proof formalizes the entire argument, including a detailed development of graph theory concepts like planar graphs and Kempe's method for reducibility. This work relied on the SSReflect tactic language extension to handle the proof's combinatorial complexity efficiently. The formalization, spanning thousands of lines, was fully checked by Coq version 7.3.1 and remains available in the rocq-community/fourcolor library.[41][42] In 2012, a collaborative team completed a machine-checked proof of the Feit-Thompson Theorem, also known as the Odd Order Theorem, which states that every finite group of odd order is solvable. This formalization, part of the Odd Order project, required an extensive library of finite group theory, including formal definitions of solvability, Sylow theorems, and character theory, totaling over 100,000 lines of Coq code at the time. The proof followed a modernized version of the original 1963 argument by Walter Feit and John G. Thompson, adapting it to Coq's dependent type theory while addressing gaps in the informal exposition. 
The result, verified using the Mathematical Components libraries and SSReflect, marked a significant milestone in formalizing advanced abstract algebra.[4][43] Beyond these seminal theorems, Coq supports formalizations of foundational mathematical structures essential for broader analysis. The Coquelicot library provides a user-friendly extension of Coq's standard real numbers, formalizing classical real analysis concepts such as limits, continuity, derivatives, and integrals in a way that aligns with standard mathematical practice while remaining conservative over the axiomatic reals.[44] Similarly, libraries have formalized key aspects of topology, including topological spaces and continuity, often integrated into analysis developments like those in Mathematical Components. Category theory has seen substantial progress, with formalizations of categories, functors, natural transformations, and limits in projects such as UniMath, enabling proofs in abstract algebraic settings.[45] Recent efforts from 2023 to 2025 have advanced formalizations in algebraic geometry through the UniMath library, which leverages univalent foundations based on homotopy type theory. Notable contributions include the algebraic small object argument, a categorical tool for constructing colimits and factorizations used in homological algebra and derived categories, fully formalized in Coq to support synthetic proofs of geometric structures. These developments build on UniMath's univalent approach to equality, facilitating equivalences between algebraic varieties and sheaves without classical axioms.[46]
Extensions and ecosystem
Key plugins and extensions
Coq's ecosystem includes several key plugins that extend its core capabilities, enabling more efficient proof development, specialized mathematical reasoning, and advanced programming constructs. These plugins integrate directly into Coq's proof engine, modifying tactics, definitions, and type theory interpretations to support diverse formalization needs. SSReflect, developed in 2006, provides a suite of small-scale reflective tactics designed for concise and modular mathematical proofs. It emphasizes a proof style that leverages computational reflection to discharge proof obligations automatically where possible, reducing verbosity in tactic scripts. SSReflect integrates seamlessly with the Mathematical Components library, facilitating formalizations in algebra, analysis, and combinatorics by offering tactics like rewrite, apply, and case that handle views and canonical structures for typeclass-like resolution.[47][48]
The Coq-HoTT library serves as a plugin for formalizing Homotopy Type Theory (HoTT), extending Coq's type theory with the univalence axiom and higher inductive types to model synthetic homotopy and higher-dimensional structures. It enables proofs of theorems like the fundamental group of the circle or Voevodsky's univalence principle directly within Coq, supporting research in foundations of mathematics and computer science. Users activate it via flags such as -noinit and -indices-matter to override standard inductive definitions with higher ones.[49][50]
The Equations plugin enhances Coq's support for recursive function definitions by implementing dependent pattern matching that goes beyond the limitations of standard fixpoints and primitive recursion. It allows users to define functions via equations that handle mutual recursion, nested induction, and well-founded recursion, automatically generating proof obligations for correctness and exhaustiveness. The plugin compiles these definitions to eliminators and provides tactics like funelim for reasoning about them, making it essential for programming and proving in dependent types.[51][52]
Micromega offers decision procedures for arithmetic over ordered rings, including integers (Z), rationals (Q), and reals (R), automating the solution of goals involving linear (and some nonlinear) inequalities and equalities. Tactics such as lia (for linear integer arithmetic) and nia (for nonlinear integer arithmetic) discharge goals by producing certificates that Coq's kernel then checks, and they complement Coq's ring and field tactics. It is particularly useful for verifying properties in program analysis and optimization.
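A small sketch of lia in action (assuming the standard library's ZArith and Lia modules):

```coq
Require Import ZArith Lia.
Open Scope Z_scope.

(* The certificate-based procedure closes this linear goal automatically. *)
Goal forall x y : Z, 2 * x + y >= x + y -> x >= 0.
Proof. intros x y H. lia. Qed.
```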
Coq-Elpi, introduced in 2018 as a major integration, embeds the ELPI logic programming language into Coq, allowing users to define custom commands and tactics using higher-order abstract syntax for Coq terms. This enables declarative extensions like advanced unification or meta-programming, bridging Coq's functional style with logic programming paradigms for more flexible proof automation. It supports λProlog-style rules to manipulate Coq's abstract syntax trees directly.[53][54]
Integration with other tools
Coq provides several interfaces and tools that enable seamless integration with external development environments, package managers, and other verification systems, facilitating collaborative and hybrid proof development workflows. The built-in CoqIDE offers a graphical user interface for interactive proof editing, featuring real-time display of proof goals, error highlighting, and support for scripting commands to navigate and execute proofs step-by-step.[55] For users preferring text-based editors, Proof General integrates Coq deeply with Emacs, providing syntax highlighting, goal buffers, and automated processing of proof scripts within the Emacs environment.[56] The OPAM package manager supports the Coq ecosystem by handling dependencies, versions, and installations of Coq libraries and extensions, allowing developers to manage complex projects with multiple Coq versions and third-party packages efficiently.[57] Complementing this, coq-serapi exposes a machine-readable interface for programmatic interaction with Coq, serializing abstract syntax trees and proof states into S-expressions for integration with external tools like IDEs or analysis pipelines.[58] Why3 serves as a verification platform that generates proof obligations from WhyML specifications and dispatches them to multiple backends, including Coq for interactive proofs alongside automated provers such as Alt-Ergo and SMT solvers like Z3, enabling hybrid verification strategies.[35] For web-based access, jsCoq compiles Coq to JavaScript (via the js_of_ocaml compiler), allowing full proof development and execution directly in modern browsers without server dependencies.[59] In recent developments, bridges for cross-formalization between Coq and Lean have emerged, such as extraction pipelines that translate Lean terms into Coq's verified extraction mechanism, supporting interoperability for shared mathematical libraries as of 2025.[60] Additionally, export mechanisms like those using the OpenDreamKit universal
format enable translation of Coq libraries to Isabelle, promoting reuse and comparison across proof assistants.[61]
Community and development
Contributors and governance
Coq's development originated at INRIA in France in 1984. It has evolved into an international effort, with core coordination handled by key figures such as Yves Bertot and Assia Mahboubi, a lead developer of major libraries like Mathematical Components.[1] As of 2025, the project has attracted over 280 contributors through its GitHub repository, reflecting a broad, global community of developers and users.[62] Governance of the Rocq Prover is managed by a steering committee, which oversees major decisions on the project's direction, releases, and technical policies. Day-to-day development occurs through collaborative processes, including discussions on issue trackers, pull requests, and regular developer meetings facilitated via platforms like Zulip and Discourse.[62] Funding for the Rocq Prover's advancement has been supported by European Union initiatives, notably the ERC-funded Fresco project (2021–2026), led by Assia Mahboubi, which focuses on reliable symbolic computation and enhances the Rocq Prover's capabilities. In March 2025, the project released Rocq 9.0, marking further evolution after the rename.[63][64] Several notable figures have shaped the Rocq Prover's evolution.
Georges Gonthier developed the SSReflect proof language, a small-scale reflection extension that has become integral for large-scale formalizations, such as the Four Color Theorem proof.[65] Jean-Christophe Filliâtre created Why3, a verification platform that interfaces with the Rocq Prover to enable deductive program verification using multiple provers.[35] More recently, Amin Timany has contributed to advancements in homotopy type theory (HoTT) formalizations within the Rocq Prover, including work on cumulative inductive types and category theory libraries.[66] The Rocq Prover community fosters collaboration through dedicated events, including the annual Rocqshop (formerly the Coq Workshop), which is affiliated with the International Conference on Interactive Theorem Proving (ITP) and brings together developers to discuss implementation challenges and future enhancements.[67] These gatherings, along with ongoing developer sessions at ITP conferences, promote knowledge sharing and coordination among contributors.[68]
Licensing and availability
The Rocq Prover is released under the GNU Lesser General Public License version 2.1 (LGPL v2.1), a licensing arrangement adopted since 2004 that facilitates the development of proprietary extensions by permitting the dynamic linking of the Rocq Prover's libraries with closed-source software without mandating disclosure of the proprietary code.[1][69] The LGPL's "lesser" copyleft provisions ensure that modifications to the Rocq Prover itself must be shared under the same terms, while allowing broader integration flexibility compared to stricter licenses like the GPL.[69] The source code repository, maintained openly for contributions and transparency, is hosted on GitHub at github.com/rocq-prover/rocq, where users and developers can access, fork, and build from the latest versions.[62] Installation of the Rocq Prover is streamlined through multiple channels to accommodate diverse user environments. The primary method utilizes OPAM, the OCaml package manager, enabling users to install specific versions and dependencies with commands like opam install rocq-prover for seamless integration within OCaml ecosystems (with coq-core providing compatibility for legacy commands).[70] Official binary distributions are provided via the Rocq Platform (also known as Coq Platform) for major operating systems, including Linux, macOS, and Windows, offering pre-compiled executables that include essential tools and libraries without requiring manual compilation.[71] For container-based workflows, Docker images are readily available on Docker Hub, allowing quick deployment in isolated environments suitable for development, testing, or continuous integration.
The Rocq Prover operates on the OCaml runtime, leveraging its bytecode interpreter for portability and native compilation for performance, ensuring compatibility across Unix-like systems, Windows, and macOS. A web-accessible variant, jsCoq, compiles the Rocq Prover kernel to JavaScript (or WebAssembly), enabling interactive proof development directly in modern browsers like Chrome or Firefox without local installation.[72] The project's version management adheres to a policy of semi-annual major releases, typically in spring and fall, with long-term support (LTS) provided through stable branches and compatibility packages that maintain backward compatibility for libraries across at least the prior three major versions, as facilitated by the Rocq Platform's multi-version support.[73][74]
Comprehensive documentation supports the Rocq Prover's accessibility for both novices and experts. The official reference manual, detailing the language, tactics, and extensions, is hosted at rocq-prover.org/refman, while tutorials and introductory resources, including the Software Foundations series, are available at rocq-prover.org/docs to guide users through proof construction.[75] Version-specific changes and evolution are archived in the changes.html file, providing detailed diffs and release notes to track updates and migrate developments between releases.[76]