Linear logic is a substructural logic introduced by the French mathematician Jean-Yves Girard in 1987 as a refinement of classical and intuitionistic logic. It treats logical hypotheses as consumable resources that must be used exactly once, disallowing their free duplication (contraction) or deletion (weakening).[1][2] This resource-aware approach decomposes traditional logical connectives into more granular ones: multiplicative operators such as the tensor product (⊗) for parallel composition and linear implication (⊸) for sequential resource consumption; additive connectives such as "with" (&) for conjunction as choice and "plus" (⊕) for disjunction; and the exponential modalities ! ("of course") and ? ("why not"), which permit controlled reuse or erasure of resources under specific conditions.[2] Motivated by limitations of classical proof theory, in particular the unrestricted handling of assumptions that fails to capture causality, state transitions, or physical resource constraints, linear logic builds on Gentzen's sequent calculus to provide a framework in which proofs model dynamic processes such as chemical reactions or economic exchanges.[1][2] Since its inception, linear logic has profoundly influenced computer science and proof theory, enabling resource-sensitive models of concurrent and functional programming, type systems via the Curry–Howard correspondence, and categorical semantics that use monoidal categories to interpret proofs as computations.[1] Its involutive linear negation ((·)^⊥) and its cut-elimination theorem yield strong normalization properties, making it a foundational tool for studying complexity, ludics, and non-commutative logics in fields ranging from linguistics to quantum computing.[1][2]
Overview and History
Core Concepts and Motivation
Linear logic is a substructural logic developed by Jean-Yves Girard in his seminal 1987 paper, designed to refine classical and intuitionistic logics by explicitly accounting for resource consumption in proofs.[3] Unlike traditional logics, it treats hypotheses as consumable resources that must be used exactly once, unless explicitly marked for reuse via modalities, thereby modeling scenarios where assumptions represent limited assets rather than abstract truths.[4] This approach emerged from Girard's efforts to address foundational challenges in proof theory, such as the non-deterministic nature of cut-elimination in classical sequent calculus, and in category theory, where symmetric structures hindered meaningful denotational semantics.[4]

The primary motivation for linear logic stems from the inadequacies of classical logic in handling resource scarcity. In classical systems, the structural rules of weakening (discarding unused assumptions) and contraction (duplicating assumptions for reuse) allow premises to be treated as infinitely available, which fails to capture real-world dynamics, such as economic transactions or computational state changes, in which resources are depleted upon use.[5] By decoupling these structural rules from the logical connectives, suspending weakening and contraction by default, linear logic enforces precise resource management, enabling a more faithful representation of causality, concurrency, and bounded computation.[4] This innovation not only resolves theoretical difficulties but also bridges logic with computer science, providing a framework for programming languages that track resource usage.[3]

An illustrative example highlights this departure: in classical logic, the entailment A \vdash A \land A holds, since the single assumption A can be contracted to derive two instances via the conjunction.[5] In linear logic, however, the analogous judgment is invalid without additional modalities, requiring two distinct resources to produce the conjunctive
conclusion, thus emphasizing the "linear" consumption of each hypothesis exactly once.[5] Girard's framework, originating in his 1987 work, was thus driven by the need for a logic that discriminates between stable truths and transient resources, laying the groundwork for applications in semantics and verification.[3]
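The contrast above can be made concrete with a small executable sketch. The following Python snippet is illustrative only: the multiset encoding and the `!`-prefix convention for reusable resources are assumptions of this example, not standard notation. It shows that a single hypothesis A cannot be consumed twice, whereas a hypothesis marked with the exponential can:

```python
from collections import Counter

def consume(resources, needed):
    """Try to consume each formula in `needed` exactly once from the
    multiset `resources`. Returns the remaining multiset, or None if a
    linear resource is missing. Resources prefixed '!' model the
    exponential and are not depleted."""
    remaining = Counter(resources)
    for atom in needed:
        if remaining['!' + atom] > 0:   # exponential: reuse is allowed
            continue
        if remaining[atom] == 0:        # linear resource already spent
            return None
        remaining[atom] -= 1
    return remaining

# Classical-style contraction fails: one A cannot yield A twice.
assert consume(['A'], ['A', 'A']) is None
# With the exponential !A, duplication is permitted.
assert consume(['!A'], ['A', 'A']) is not None
```

The failing first call is exactly the linear rejection of A ⊢ A ⊗ A, while the second call reflects how !A restores controlled contraction.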
Historical Development
Linear logic was introduced by Jean-Yves Girard in his seminal 1987 paper, where he proposed it as a refinement of classical and intuitionistic logic, motivated by issues in bounded arithmetic and the desire to model resource consumption more precisely.[6] This work emerged from Girard's earlier research on System F, the polymorphic lambda calculus, whose semantic analysis highlighted limitations of traditional logics regarding the structural rules of contraction and weakening.[7] Girard's introduction also drew on connections to category theory, particularly monoidal categories, to provide a foundational framework for the logic's multiplicative and additive connectives.[6]

In the 1990s, developments focused on proof representation and applications. Girard introduced proof nets in his 1987 paper as a graphical alternative to sequent calculus derivations; Vincent Danos and Laurent Regnier refined them in 1989 for multiplicative linear logic, enabling more efficient parallel proof normalization.[8] Concurrently, linear logic influenced concurrency models: Samson Abramsky and Radha Jagadeesan developed game semantics in 1994, interpreting proofs as winning strategies in games that captured the logic's resource-sensitive nature and bridging logic with process models such as Petri nets. These advances, involving figures such as François Lamarche, who extended the approach to intuitionistic variants in 1996, solidified linear logic's role in theoretical computer science.

The 2000s saw expansions into programming languages via the Curry–Howard isomorphism, where linear types, which ensure that resources are used exactly once, were integrated into systems for memory safety and concurrency.
This influence appeared in languages such as Clean and Id, and later in Rust's borrow checker in the 2010s, which enforces affine types inspired by linear logic to prevent data races without garbage collection.[9] Girard's later work on ludics in 2001 further extended these ideas, modeling interactive proofs as games to address non-termination in computation.

Post-2010 advances integrated linear logic with emerging fields. In homotopy type theory, influenced by Vladimir Voevodsky's univalent foundations, linear variants emerged to handle higher-dimensional structures; for instance, Elies Harington's 2024 work modeled linear logic using polynomials in HoTT for synthetic reasoning about resources. Applications in quantum computing grew, with linear logic modeling quantum resources and error correction; a 2024 paper by Ehrhard, Tortora de Falco, and Dal Lago used multiplicative proof nets for quantum error-correcting codes, linking logical reductions to quantum operations.[10] In AI verification, recent proceedings highlighted linear logic's utility; for example, a 2023 paper by Horne and Padovani provided a logical account of subtyping for session types, aiding formal verification of AI protocols with resource constraints.[11] In 2025, further applications appeared, such as Giusti and Pagani's encoding of automatic differentiation in JAX via a linear lambda calculus, advancing resource-aware computational models.[12] These developments underscore linear logic's continuing impact.
Syntax and Connectives
Basic Syntax and Formulas
Linear logic employs a formal language to express resource-sensitive reasoning, with the propositional fragment as its foundational syntax. Atomic propositions, denoted by variables such as p, q, r, serve as the basic building blocks, along with their linear negations p^\perp, q^\perp, r^\perp. Formulas are constructed inductively by applying the connectives of linear logic to these atoms and to constants such as the multiplicative units \mathbf{1} and \bot. This construction reflects the absence of the structural rules of contraction and weakening, ensuring that assumptions and conclusions are treated as consumable resources.[13]

Sequent notation in linear logic takes the form \Gamma \vdash \Delta, where \Gamma (the antecedent) and \Delta (the succedent) are finite multisets of formulas rather than sets. The use of multisets is crucial, as it permits multiple instances of the same formula, thereby tracking the exact count of resources without allowing implicit duplication or deletion. For example, the multiset \{ A, A, B \} distinguishes two occurrences of A from a single one, reflecting the logic's sensitivity to multiplicity.[13]

The first-order extension of linear logic incorporates the quantifiers \forall x . A and \exists x . A, where x is a first-order variable and A is a formula possibly containing free occurrences of x. These quantifiers obey De Morgan duality under linear negation: (\forall x . A)^\perp = \exists x . A^\perp and (\exists x . A)^\perp = \forall x . A^\perp. Derivations observe the usual eigenvariable restriction: the variable introduced by a quantifier rule must not appear free in the surrounding context.[13]

Illustrative examples highlight the distinct flavor of linear formulas. Consider the par connective A \parr B, a multiplicative disjunction that, unlike the classical A \lor B, does not permit duplicating the surrounding context across the disjuncts.
Similarly, linear implication A \multimap B demands the full consumption of A to produce B, underscoring the resource-aware structure.[13]

Higher-order linear logic extends this syntax to support functional abstraction and application, incorporating lambda terms such as \lambda x . M for linear functions, where M is a term, and applications M \, N. These terms are typed over linear formulas, enabling the encoding of higher-order resource management in type systems. Linear negation provides a duality between positive and negative connectives that is foundational for these extensions.[13]
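The inductive formula syntax and the De Morgan behavior of linear negation can be sketched in a few lines of Python. The tuple encoding of formulas below is a hypothetical convention chosen only for this illustration:

```python
# Formulas as nested tuples: ('atom', 'p'), ('neg', A), ('tensor', A, B),
# ('par', A, B), ('with', A, B), ('plus', A, B), ('ofcourse', A),
# ('whynot', A), and units ('unit', '1' | 'bot' | 'top' | '0').

DUAL_UNIT = {'1': 'bot', 'bot': '1', 'top': '0', '0': 'top'}
DUAL_BIN = {'tensor': 'par', 'par': 'tensor', 'with': 'plus', 'plus': 'with'}
DUAL_MOD = {'ofcourse': 'whynot', 'whynot': 'ofcourse'}

def dual(f):
    """Linear negation, pushed to the atoms (De Morgan normal form)."""
    tag = f[0]
    if tag == 'atom':
        return ('neg', f)
    if tag == 'neg':                 # involution: (A^perp)^perp = A
        return f[1]
    if tag == 'unit':
        return ('unit', DUAL_UNIT[f[1]])
    if tag in DUAL_BIN:
        return (DUAL_BIN[tag], dual(f[1]), dual(f[2]))
    if tag in DUAL_MOD:
        return (DUAL_MOD[tag], dual(f[1]))
    raise ValueError(tag)

a, b = ('atom', 'a'), ('atom', 'b')
# (a tensor b)^perp = a^perp par b^perp
assert dual(('tensor', a, b)) == ('par', ('neg', a), ('neg', b))
# Involution: dualizing twice restores the original formula.
assert dual(dual(('with', a, ('ofcourse', b)))) == ('with', a, ('ofcourse', b))
```

The two assertions check, respectively, one De Morgan law and the involutivity of linear negation on a formula mixing additive and exponential connectives.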
Connectives, Duality, and Polarity
Linear logic employs a refined set of connectives that distinguish between different modes of resource combination and consumption, categorized into multiplicative, additive, and exponential types. These connectives enable precise control over the use of assumptions and conclusions in proofs, avoiding the unrestricted duplication or discarding typical of classical logic.[2]

The multiplicative connectives are the tensor \otimes (multiplicative conjunction) and the par \parr (multiplicative disjunction). The tensor \otimes combines resources in parallel, requiring both components to be used exactly once without sharing: if \Gamma \vdash A and \Gamma' \vdash B, then \Gamma, \Gamma' \vdash A \otimes B, with the premises drawn from separate contexts. The par \parr is its dual, splitting independent resource contexts on the left: from \Gamma, A \vdash \Delta and \Gamma', B \vdash \Delta' one derives \Gamma, \Gamma', A \parr B \vdash \Delta, \Delta'. These connectives emphasize focused resource usage, where each assumption is consumed precisely once.[14][15]

The additive connectives are the plus \oplus (additive disjunction) and the with \& (additive conjunction). The plus \oplus represents a choice between alternatives without resource duplication: the same context must support either premise, as in \Gamma, A \vdash \Delta and \Gamma, B \vdash \Delta yielding \Gamma, A \oplus B \vdash \Delta. In contrast, the with \& makes both options available within a shared context: from \Gamma \vdash A, \Delta and \Gamma \vdash B, \Delta one derives \Gamma \vdash A \& B, \Delta.
These additives facilitate branching without multiplicative parallelism, preserving resource integrity by avoiding replication across choices.[14][15]

The exponential modalities handle reusable or discardable resources: the of course !A licenses contraction and weakening, so that A may be used any number of times (or not at all), as if it were a classical assumption, while the why not ?A is its dual, allowing A to be produced optionally or repeatedly on the right. These exponentials restore classical behavior selectively, balancing linearity with flexibility.[14][2]

Duality in linear logic follows De Morgan-style relations under linear negation (\cdot)^\perp, which is involutive ((A^\perp)^\perp = A) and swaps positive and negative polarities. Each connective has a dual counterpart, ensuring that proofs of dual formulas correspond symmetrically. The dual pairs are:

(A \otimes B)^\perp = A^\perp \parr B^\perp, \quad (A \parr B)^\perp = A^\perp \otimes B^\perp, \quad (A \oplus B)^\perp = A^\perp \& B^\perp, \quad (A \& B)^\perp = A^\perp \oplus B^\perp, \quad (!A)^\perp = ?(A^\perp), \quad (?A)^\perp = !(A^\perp).
These relations preserve the logical structure, with the units transforming accordingly (\bot^\perp = \mathbf{1}, \mathbf{0}^\perp = \top).[15][14]

Polarity classifies formulas as positive or negative to structure proofs, distinguishing irreversible (positive) from reversible (negative) rules and enabling focused proof search. Positive formulas include positive atoms, tensors P \otimes P', sums P \oplus P', existentials \exists x . P, the units \mathbf{1} and \mathbf{0}, and !N (where N is negative); their rules are non-invertible, and focalization bundles consecutive positive steps into a single phase. Negative formulas encompass negated atoms, pars N \parr N', withs N \& N', linear implications A \multimap N, universals \forall x . N, the units \bot and \top, and ?P (where P is positive); their rules are invertible and can be applied eagerly. Duality interchanges polarities: the dual of a positive formula is negative and vice versa, as when (A \otimes B)^\perp = A^\perp \parr B^\perp shifts from positive to negative. This distinction organizes proof search into asynchronous phases (invertible, negative rules) and synchronous phases (non-invertible, positive rules), enhancing efficiency. Affine logic is a related variant that admits weakening but not contraction, so resources may be discarded but never duplicated.[16][14]
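The polarity assignment described above can be written out executably. The sketch below uses a hypothetical tuple encoding of formulas (invented for this illustration) and classifies each top-level connective:

```python
# Formulas as nested tuples, e.g. ('tensor', ('atom', 'p'), ('atom', 'q')).
POSITIVE = {'tensor', 'plus', 'ofcourse'}   # non-invertible (synchronous) rules
NEGATIVE = {'par', 'with', 'whynot'}        # invertible (asynchronous) rules

def polarity(f):
    """Return '+' for positive formulas and '-' for negative ones."""
    tag = f[0]
    if tag == 'atom':
        return '+'                           # positive atoms
    if tag == 'neg':
        return '-'                           # negated atoms are negative
    if tag == 'unit':
        return '+' if f[1] in ('1', '0') else '-'   # bot and top are negative
    if tag in POSITIVE:
        return '+'
    if tag in NEGATIVE:
        return '-'
    raise ValueError(f'unknown connective: {tag}')

p = ('atom', 'p')
assert polarity(('tensor', p, p)) == '+'     # tensor is positive
assert polarity(('with', p, p)) == '-'       # with is negative
assert polarity(('ofcourse', p)) == '+'      # ! is positive, ? is negative
assert polarity(('whynot', p)) == '-'
```

The assertions spot-check that each multiplicative, additive, and exponential connective lands on the side of the polarity table given in the text.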
Proof Theory
Sequent Calculus Rules
Linear logic is formalized in sequent calculus using two-sided sequents of the form \Gamma \vdash \Delta, where \Gamma and \Delta are finite multisets of formulas representing assumptions and conclusions, respectively; this framework treats formulas as resources without implicit duplication or discarding, in contrast to classical logic.[7] The system includes an identity axiom and a cut rule for proof composition, alongside introduction rules for each connective, grouped into multiplicative, additive, and exponential categories; negation provides duality between left and right rules.[7]

The identity axiom establishes the base case for provability:

\frac{}{A \vdash A} \quad \text{(Id)}

The cut rule eliminates a formula by composing two sequents:

\frac{\Gamma_1 \vdash A, \Delta_1 \quad \Gamma_2, A \vdash \Delta_2}{\Gamma_1, \Gamma_2 \vdash \Delta_1, \Delta_2} \quad \text{(Cut)}

where commas denote multiset union.[7] Negation rules move formulas across the turnstile using the perpendicular A^\perp:

\frac{\Gamma \vdash A, \Delta}{\Gamma, A^\perp \vdash \Delta} \quad \text{(L$\perp$)} \qquad \frac{\Gamma, A \vdash \Delta}{\Gamma \vdash A^\perp, \Delta} \quad \text{(R$\perp$)}

Multiplicative connectives model resource consumption and production in parallel.
The tensor \otimes (multiplicative conjunction) right rule combines independent proofs:

\frac{\Gamma_1 \vdash A, \Delta_1 \quad \Gamma_2 \vdash B, \Delta_2}{\Gamma_1, \Gamma_2 \vdash A \otimes B, \Delta_1, \Delta_2} \quad \text{(R$\otimes$)}

Its dual, the par \parr (multiplicative disjunction), mirrors this on the left:

\frac{\Gamma_1, A \vdash \Delta_1 \quad \Gamma_2, B \vdash \Delta_2}{\Gamma_1, \Gamma_2, A \parr B \vdash \Delta_1, \Delta_2} \quad \text{(L$\parr$)}

The left \otimes and right \parr rules simply merge the two components within the multiset:

\frac{\Gamma, A, B \vdash \Delta}{\Gamma, A \otimes B \vdash \Delta} \quad \text{(L$\otimes$)} \qquad \frac{\Gamma \vdash A, B, \Delta}{\Gamma \vdash A \parr B, \Delta} \quad \text{(R$\parr$)}

The unit of \otimes is \mathbf{1} (right rule: \vdash \mathbf{1}; left rule: from \Gamma \vdash \Delta infer \Gamma, \mathbf{1} \vdash \Delta), and the unit of \parr is \bot (left rule: \bot \vdash; right rule: from \Gamma \vdash \Delta infer \Gamma \vdash \bot, \Delta). Unlike classical logic, no global weakening or contraction applies outside the exponentials.[7]

Additive connectives handle choices over a shared context. The plus \oplus (additive disjunction) right rule selects one branch:

\frac{\Gamma \vdash A_i, \Delta}{\Gamma \vdash A_1 \oplus A_2, \Delta} \quad \text{(R$\oplus$)} \ (i = 1 \text{ or } 2)

Dually, the with \& (additive conjunction) left rule selects one branch on the left:

\frac{\Gamma, A_i \vdash \Delta}{\Gamma, A_1 \& A_2 \vdash \Delta} \quad \text{(L\&)} \ (i = 1 \text{ or } 2)

The left \oplus and right \& rules require both premises over the same context:

\frac{\Gamma, A \vdash \Delta \quad \Gamma, B \vdash \Delta}{\Gamma, A \oplus B \vdash \Delta} \quad \text{(L$\oplus$)} \qquad \frac{\Gamma \vdash A, \Delta \quad \Gamma \vdash B, \Delta}{\Gamma \vdash A \& B, \Delta} \quad \text{(R\&)}

The unit of \& is \top (right rule: \Gamma \vdash \top, \Delta, with no premises) and the unit of \oplus is \mathbf{0} (left rule: \Gamma, \mathbf{0} \vdash \Delta). These rules enforce that the additives operate over identical contexts, modeling choice without resource duplication.[7]

Exponential connectives !
(of course) and ? (why not) recover structural rules locally. For !A, the rules are:
Promotion: \frac{! \Gamma \vdash A, ? \Delta}{! \Gamma \vdash !A, ? \Delta} \quad \text{(R!)} (where ! \Gamma means every formula in \Gamma is a !-formula)
Dereliction: \frac{\Gamma, A \vdash \Delta}{\Gamma, !A \vdash \Delta} \quad \text{(L!)}
Weakening: \frac{\Gamma \vdash \Delta}{\Gamma, !A \vdash \Delta} \quad \text{(W!)}
Contraction: \frac{\Gamma, !A, !A \vdash \Delta}{\Gamma, !A \vdash \Delta} \quad \text{(C!)}
The dual ?A rules are symmetric, with promotion \frac{? \Gamma, A \vdash ? \Delta}{? \Gamma, ?A \vdash ? \Delta} \quad \text{(L?)}. These confine contraction and weakening to modal contexts, enabling reusable resources while preserving linearity elsewhere.[7]

For first-order linear logic, quantifiers follow the standard sequent rules with eigenvariable conditions: the variable y must not occur free in the lower sequent's context. The universal quantifier \forall x . A has the rules

\frac{\Gamma, A[t/x] \vdash \Delta}{\Gamma, \forall x . A \vdash \Delta} \quad \text{(L$\forall$)} \qquad \frac{\Gamma \vdash A[y/x], \Delta}{\Gamma \vdash \forall x . A, \Delta} \quad \text{(R$\forall$)} \ (y \notin \text{FV}(\Gamma, \Delta))

and the existential \exists x . A has the dual rules

\frac{\Gamma, A[y/x] \vdash \Delta}{\Gamma, \exists x . A \vdash \Delta} \quad \text{(L$\exists$)} \ (y \notin \text{FV}(\Gamma, \Delta)) \qquad \frac{\Gamma \vdash A[t/x], \Delta}{\Gamma \vdash \exists x . A, \Delta} \quad \text{(R$\exists$)}

where t is a term and FV denotes the set of free variables; the eigenvariable conditions ensure proper substitution.[7][17]

Cut-elimination holds in linear logic: any proof with cuts can be transformed into an equivalent cut-free proof via normalization, preserving the derived sequent; this theorem, proved by induction on proof complexity, implies strong normalization and has implications for computational boundedness, since cut reduction simulates resource-aware evaluation.[18]

Light linear logic refines the sequent calculus to bound resources for polynomial-time computability, introducing double sequents with multiplicative (M-) and additive (A-) structural rules, alongside a neutral modality \S and restrictions on promotion to unary contexts.[19] Key changes include excluding direct dereliction and requiring additive blocks for \oplus-like behavior; promotion for !A is restricted to unary multiplicative contexts to ensure polynomial-time cut-elimination.
Cut-elimination still holds in this system, with cost polynomial in the size of the proof. Recent developments, such as generalized bounded linear logic, extend these systems by annotating derelictions with resource polynomials (e.g., !_{<p} A) to model finer resource bounds while maintaining soundness for polynomial time.[19][20]
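The contrast between context-splitting (multiplicative) and context-sharing (additive) rules can be checked executably. The Python sketch below (with helper names invented for this illustration) represents sequents as pairs of multisets and implements the R⊗ and R& rules from the calculus above:

```python
from collections import Counter

# A two-sided sequent Gamma |- Delta as a pair of formula multisets.
def seq(left, right):
    return (Counter(left), Counter(right))

def identity(a):
    """Axiom (Id): A |- A."""
    return seq([a], [a])

def r_tensor(s1, s2, a, b):
    """(R-tensor): from G1 |- A, D1 and G2 |- B, D2 conclude
    G1, G2 |- A (x) B, D1, D2. The contexts are split, not shared."""
    (g1, d1), (g2, d2) = s1, s2
    assert d1[a] >= 1 and d2[b] >= 1, "premises must produce A and B"
    return (g1 + g2,
            (d1 - Counter([a])) + (d2 - Counter([b]))
            + Counter([('tensor', a, b)]))

def r_with(s1, s2, a, b):
    """(R-with): both premises must share the SAME context."""
    (g1, d1), (g2, d2) = s1, s2
    assert d1[a] >= 1 and d2[b] >= 1, "premises must produce A and B"
    assert g1 == g2 and d1 - Counter([a]) == d2 - Counter([b]), \
        "additive rule: contexts must coincide"
    return (g1, (d1 - Counter([a])) + Counter([('with', a, b)]))

# a, b |- a (x) b : tensor splits the two resources between premises.
assert r_tensor(identity('a'), identity('b'), 'a', 'b') \
    == seq(['a', 'b'], [('tensor', 'a', 'b')])
# a |- a & a : with reuses one shared context across both branches.
assert r_with(identity('a'), identity('a'), 'a', 'a') \
    == seq(['a'], [('with', 'a', 'a')])
```

The two final assertions show the resource accounting directly: the tensor conclusion needs one copy of each hypothesis per premise, while the with conclusion is proved twice over a single shared copy.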
Proof Nets and Graphical Proofs
Proof nets provide a graphical, parallel representation of proofs in linear logic, serving as an alternative to the sequential structure of sequent calculus derivations. Introduced by Jean-Yves Girard, they are graphs whose nodes represent formula occurrences, connected by three kinds of links: axiom links pairing dual atomic formulas, logical links corresponding to the connectives (tensor, par, and so on), and cut links connecting complementary formulas. Unlike sequent proofs, proof nets impose no linear order on inference steps, giving a more intrinsic depiction of proof structure that eliminates redundant reorderings.[21]

A proof net is obtained by translating a sequent proof into a graph, where axiom links form the base and logical links aggregate subproofs according to the connectives. For validity, the resulting proof structure must satisfy the Danos-Régnier correctness criterion: for every switching (a choice, at each par link, of exactly one of its two premise edges), the underlying undirected graph must be acyclic and connected; a structure passing this test corresponds to a sequent proof, unique up to rule reordering. The criterion guarantees that the net represents a genuine proof without pathological cycles or disconnected components.[21]

One key advantage of proof nets is their support for parallel cut-elimination, where cuts are resolved simultaneously across the graph without sequential dependencies, achieving efficient normalization in the multiplicative fragment and avoiding the overhead that rule permutations impose on sequent calculus.
Additionally, since linear logic is contraction-free in its multiplicative fragment, proof nets naturally avoid the duplication issues plaguing classical proofs, enabling efficient verification and confluent normalization.[22]

The treatment of exponentials in proof nets addresses resource duplication and erasure via the promotion rule, which encloses a subproof in a "box" representing the ! modality, allowing controlled contraction and weakening at the box's border while maintaining linearity outside. For !A, contraction nodes permit multiple uses of the formula, modeled by auxiliary links that replicate connections without violating the correctness criterion. This mechanism ensures that the non-linear behavior of exponentials is localized and graphically explicit.[21]

As an illustrative example, consider the proof net for the identity sequent A \otimes B \vdash A \otimes B, written one-sidedly as \vdash A^\perp \parr B^\perp, A \otimes B. The graph features an axiom link connecting A^\perp and A, another axiom link connecting B^\perp and B, a par link merging the A^\perp and B^\perp premises, and a tensor link joining A and B into the conclusion A \otimes B. This structure satisfies the Danos-Régnier criterion: under either switching of the par link, the graph is acyclic and connected, visualizing the parallel resource flow without sequential steps.[22]

Subsequent developments refined the correctness criteria for broader fragments; in the 1990s, alternative formulations such as Melliès' topological, ribbon-based approach extended validity checks to proof nets including cuts.
More recently, in the 2020s, extensions to multiplicative-additive linear logic (MALL) proof nets have advanced complexity analysis, with new canonical representations enabling efficient local verification and progress on the complexity of cut-elimination.[23]

Proof nets also encode implicit computational content relevant to lambda calculi: linear lambda terms correspond directly to net structures, facilitating translations that preserve normalization and reveal concurrency in type systems; recent work on linear lambda nets (2024) explores these encodings for resource-aware programming models.[24]
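The Danos-Régnier criterion lends itself to a direct implementation. The sketch below uses a simplified, invented encoding of MLL proof structures (node names and link tuples are assumptions of this example): it enumerates every switching of the par links and checks each resulting undirected graph for acyclicity and connectedness with a union-find.

```python
from itertools import product

def dr_correct(nodes, tensor_links, par_links, axiom_links):
    """Danos-Regnier check: for EVERY switching (keeping exactly one of the
    two premise edges of each par link), the graph must be a tree, i.e.
    acyclic and connected. Links are (premise1, premise2, conclusion)."""
    def edges_for(switching):
        es = list(axiom_links)
        for (p1, p2, c) in tensor_links:          # tensor keeps both edges
            es += [(p1, c), (p2, c)]
        for (p1, p2, c), keep in zip(par_links, switching):
            es.append(((p1, c) if keep == 0 else (p2, c)))
        return es
    def acyclic_connected(es):
        parent = {n: n for n in nodes}
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]     # path halving
                x = parent[x]
            return x
        comps = len(nodes)
        for u, v in es:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False                      # adding edge makes a cycle
            parent[ru] = rv
            comps -= 1
        return comps == 1                         # one connected component
    return all(acyclic_connected(edges_for(sw))
               for sw in product((0, 1), repeat=len(par_links)))

# Net for |- a^perp par b^perp, a (x) b (identity on a (x) b): correct.
assert dr_correct(['a', 'a*', 'b', 'b*', 'par', 'ten'],
                  [('a', 'b', 'ten')], [('a*', 'b*', 'par')],
                  [('a*', 'a'), ('b*', 'b')])
# Replacing the par link by a second tensor creates a cycle: rejected.
assert not dr_correct(['a', 'a*', 'b', 'b*', 't1', 't2'],
                      [('a', 'b', 't1'), ('a*', 'b*', 't2')], [],
                      [('a*', 'a'), ('b*', 'b')])
```

The accepted structure is the identity net discussed in the text; the rejected one corresponds to the unprovable \vdash A^\perp \otimes B^\perp, A \otimes B, whose every switching contains a cycle.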
Semantics
Phase and Coherence Semantics
Phase semantics, introduced by Jean-Yves Girard in 1987, provides a foundational model for linear logic by interpreting formulas as subsets of a commutative monoid equipped with a distinguished subset called the pole.[2] A phase space is a pair (M, \perp), where M is a commutative monoid and \perp \subseteq M is the pole; the orthogonal of a subset X \subseteq M is X^\perp = \{ m \in M \mid \forall n \in X, mn \in \perp \}, and facts, the subsets equal to their biorthogonal X^{\perp\perp}, serve as the interpretations of formulas.[2] The multiplicative connectives are interpreted via products: for facts X and Y, the tensor X \otimes Y is \{ mn \mid m \in X, n \in Y \}^{\perp\perp}, while the par X \parr Y is dually (X^\perp \otimes Y^\perp)^\perp.[2] Additive connectives follow set-theoretic operations: X \oplus Y = (X \cup Y)^{\perp\perp} and X \& Y = X \cap Y, reflecting choice and conjunction without resource duplication.[2]

The exponential modalities are handled through idempotent elements: letting I be the set of idempotents \{ e \in M \mid e^2 = e \}, one sets !X = (X \cap I)^{\perp\perp} to model reusable resources and ?X = (!(X^\perp))^\perp for its dual.[2] The De Morgan dualities (X \otimes Y)^\perp = X^\perp \parr Y^\perp and (!X)^\perp = ?(X^\perp) hold in every phase space.[2] A sequent \vdash A_1, \dots, A_n is valid when the neutral element 1 of the monoid belongs to the fact interpreting A_1 \parr \dots \parr A_n, and the soundness and completeness theorems establish that provability corresponds exactly to validity in every phase space.[2] These theorems guarantee adequacy (every proof yields a valid interpretation) and completeness (every valid interpretation arises from a
proof).[2]

Coherence spaces offer a concrete denotational model, interpreting linear logic via graph-theoretic structures that simplify domain theory for proofs.[2] A coherence space is a pair (W, \sim), where W is a set of tokens (observations) and \sim \subseteq W \times W is a reflexive, symmetric coherence relation; cliques are subsets in which every pair of tokens is coherent, and they represent the possible denotations of proofs.[2] A formula is interpreted by the cliques of a coherence space: for the tensor X \otimes Y, the web is the Cartesian product of the webs of X and Y, with pairs coherent when both components are; dually, the par X \parr Y is obtained from the tensor of the negations via duality.[2] The additives are direct: X \& Y uses the disjoint union of webs, its cliques combining a clique of X with a clique of Y, while X \oplus Y uses the disjoint union with each clique drawn wholly from one side.[2] The exponential !X takes as tokens the finite cliques of X, modeling contraction and weakening via reusable finite supports.[2]

In coherence spaces, a proof of a sequent \vdash \Gamma, A denotes a clique in the space interpreting \parr \Gamma \parr A.[2] Adequacy ensures that every proof term yields a clique, and the interpretation is invariant under cut-elimination; duality is modeled by negation, where X^\perp keeps the same web and inverts the coherence relation.[2]

Recent developments in the 2020s have extended these models toward quantum linear logic, adapting coherence spaces to probabilistic and normed settings for handling superposition and measurement.
Probabilistic coherence spaces, refined in normed cones, interpret linear logic formulas as convex sets of cliques with probabilistic weights, providing a relational framework for quantum resources while preserving adequacy for multiplicative-additive fragments.[25] These models support *-autonomous categories underlying quantum interpretations, bridging classical linear logic to quantum control protocols.
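The closure properties of phase semantics can be verified directly on a toy example. The sketch below uses the additive monoid Z_4 with pole {0, 2}; both choices are arbitrary assumptions made only for illustration. It checks that the triple orthogonal collapses to the single orthogonal, which is why biorthogonal closure is idempotent and facts are well defined:

```python
from itertools import product

# A tiny phase space: the additive monoid Z_4 with pole {0, 2}
# (an arbitrary choice for illustration).
M = range(4)
POLE = {0, 2}

def op(m, n):
    """The (commutative) monoid operation: addition mod 4."""
    return (m + n) % 4

def orth(X):
    """X^perp = { m | for every n in X, the product m*n lands in the pole }."""
    return frozenset(m for m in M if all(op(m, n) in POLE for n in X))

def tensor(X, Y):
    """X (x) Y = biorthogonal closure of the pairwise products."""
    return orth(orth({op(m, n) for m in X for n in Y}))

# Triple orthogonal collapses: X^perp-perp-perp = X^perp for EVERY subset X.
for bits in product((0, 1), repeat=4):
    X = {m for m, b in zip(M, bits) if b}
    assert orth(orth(orth(X))) == orth(X)

# Hence facts (biorthogonally closed sets) are fixed by further closure.
fact = orth(orth({1}))
assert orth(orth(fact)) == fact
```

Because the collapse holds for every subset, \( X \mapsto X^{\perp\perp} \) is a genuine closure operator on this monoid, and the tensor defined above always yields a fact.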
Algebraic and Denotational Semantics
Algebraic semantics for linear logic primarily rely on category-theoretic structures that capture the resource-sensitive nature of its connectives. Monoidal categories provide the foundational framework for the multiplicative fragment: the tensor connective A \otimes B is interpreted as the monoidal product, ensuring that resources are combined without duplication or discard unless explicitly allowed, and in a symmetric monoidal closed category the linear implication A \multimap B is the internal hom. Compact closed categories, in which every object has a dual, yield degenerate models where the par A \parr B coincides with the tensor; the non-degenerate duality of classical linear logic is instead captured by *-autonomous categories.[26]

To incorporate the exponential connectives, which handle resource promotion and contraction, Seely categories augment symmetric monoidal closed categories with a comonad ! for the "of course" modality, subject to coherence conditions ensuring that the exponential structure aligns with proof normalization. This construction models intuitionistic linear logic, where !A denotes reusable resources, and extends to classical variants via dual exponentials. Phase spaces serve as concrete precursors to these abstract categorical models, providing set-theoretic interpretations that inspired the shift to more general categorical semantics.[26][27]

*-Autonomous categories offer a unified algebraic semantics for the multiplicative fragment (with finite products and coproducts supplying the additives), featuring a closed symmetric monoidal structure with a dualizing object \bot that induces a contravariant duality between tensor and par. In these categories every object has a dual, and linear negation A^\perp = A \multimap \bot is an involution, capturing the involutive duality of classical linear proofs.
Bierman's work in the 1990s established coherence conditions for such categories, ensuring that proofs correspond to unique morphisms up to isomorphism in the intuitionistic setting, thus bridging proof theory and categorical models.[28][27]

Denotational semantics for linear logic employ domain-theoretic models, interpreting formulas as domains and proofs as continuous linear functions between them. Linear domains, such as Scott domains equipped with linear maps (functions that use their argument without implicit copying), provide a computational interpretation in which resources are tracked through strictness conditions. In this setup, the tensor A \otimes B denotes a product-like domain construction, linear implication corresponds to the space of linear functions, and proofs involving exponentials are interpreted in the Kleisli category of the exponential comonad. For the multiplicative fragment (MLL), full completeness holds in certain models, such as those based on linear information systems: every morphism of the model is the denotation of a proof, and proofs identified by cut-elimination receive the same denotation.[2]

Affine variants of linear logic, which permit weakening but not contraction, receive denotational models by relaxing strictness conditions on the domain morphisms, allowing functions that may discard inputs without explicit resource management. This is modeled in categories whose hom-sets consist of affine maps, linear functions with optional weakening, extending standard linear domains to capture non-strict resource usage. For instance, in affine linear logic the implication A \multimap B denotes maps from A to B that need not use all of A, interpreted as partial linear continuous functions on Scott domains.[2]

Recent applications leverage enriched category theory to model probabilistic extensions of linear logic, where hom-objects are enriched over metric spaces or probability monads to handle uncertainty in resource consumption.
In such frameworks, the multiplicative structure is preserved via enriched monoidal products, while exponentials incorporate probabilistic promotion, enabling semantics for quantum or stochastic processes as in reversible circuit models.[29]
Interpretations and Encodings
Resource-Sensitive Interpretation
Linear logic offers a resource-sensitive framework for reasoning, treating logical assumptions not as eternal truths but as finite, consumable resources that must be explicitly managed during inference. Introduced by Jean-Yves Girard, this perspective rejects the unrestricted structural rules of classical logic—specifically, weakening (discarding unused assumptions) and contraction (freely duplicating assumptions)—in its core multiplicative and additive connectives, enforcing that each resource is used exactly once to avoid waste or overuse. This "no free lunch" principle distinguishes linear logic from classical systems, where resources can be manipulated without cost, and instead emphasizes accountability in their transformation.[7]

At the heart of this interpretation, the multiplicative conjunction \otimes models the parallel consumption of resources: to achieve A \otimes B, both A and B must be fully expended together, as in combining materials in production. Similarly, linear implication A \multimap B captures the directed consumption of A to yield B, akin to a process where one resource is sacrificed to generate another, such as converting fuel into motion. Additive connectives provide alternatives without duplication: A \& B offers a choice between consuming A or B (internal choice in resources), while A \oplus B imposes an external choice in production, ensuring no branching that would multiply resources illicitly.
The exponentials modulate this sensitivity—!A denotes reusable resources like executable code that can be duplicated or discarded (via controlled contraction and weakening), contrasting with ?A for producible resources that can be introduced as needed.

Philosophically, linear logic reframes deduction as a dynamic process of resource evolution rather than mere truth preservation, influencing computational paradigms by prioritizing flow and scarcity over static validity.[7] This has profoundly shaped type theory, where linear types enforce exact-once usage of variables to prevent aliasing and enable precise memory management, as in systems that track ownership to avoid unintended sharing.[5] In concurrency, linear logic interprets proofs as interacting processes, with encodings into calculi like the π-calculus mapping logical cuts to communication channels, thus providing a Curry-Howard-style correspondence for distributed computation. Economically, it models resource negotiations and game-theoretic allocations, representing multisets of goods and agent preferences as linear formulas to enforce equitable consumption without gratuitous duplication.[30] A key application arises in Petri nets, where resources correspond to tokens in places, and linear proofs translate to valid firing sequences, capturing concurrent resource dynamics in systems like manufacturing workflows.
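The Petri-net reading above can be sketched operationally. In the following Python sketch (illustrative only; the `fire` helper and resource names are not from any standard library), the context is a multiset of resources and applying a linear implication consumes its antecedents exactly once and adds its consequents, exactly the token game of a transition firing:

```python
from collections import Counter

def fire(state: Counter, consumes: Counter, produces: Counter) -> Counter:
    """Apply a linear implication (consumes -o produces): every antecedent
    token is removed exactly once, never duplicated or silently discarded."""
    if any(state[r] < n for r, n in consumes.items()):
        raise ValueError("insufficient resources: linear use requires exact availability")
    new_state = state - consumes  # consumption: resources are spent
    new_state.update(produces)    # production: new resources appear
    return new_state

# A ⊗ A -o B: two tokens of A are consumed together to produce one B,
# mirroring a Petri-net transition with two input arcs from place A.
state = Counter({"A": 2})
state = fire(state, Counter({"A": 2}), Counter({"B": 1}))
print(dict(state))  # {'B': 1}
```

Because `Counter` subtraction discards non-positive counts, the spent tokens vanish from the state, so a second firing of the same transition fails, which is precisely the "used exactly once" discipline.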
Encoding Classical and Intuitionistic Logic
Linear logic provides a faithful embedding of classical logic by leveraging its exponential connectives ! (of course) and ? (why not) to recover the unrestricted contraction and weakening rules absent in the linear setting. This encoding, originally developed by Girard, translates classical propositional connectives into linear ones prefixed with exponentials to simulate resource reuse. Specifically, classical conjunction A \land B maps to !A \otimes !B, disjunction A \lor B to ?A \parr ?B, and implication A \to B to !A \multimap B.[3]

The full translation schema for classical propositional logic into linear logic is given by the double negation embedding, where a classical formula A is interpreted as !!A in linear logic. This double exponential allows classical proofs to be reconstructed linearly by permitting multiple uses of assumptions through the contraction rule for !. Negation \neg A translates to ?A^\perp, the linear negation of the why-not modality, ensuring that classical excluded middle A \lor \neg A corresponds to provable linear formulas.[31]

The following table summarizes the standard translation for classical propositional connectives:
| Classical Connective | Linear Translation |
| --- | --- |
| A \land B | !A \otimes !B |
| A \lor B | ?A \parr ?B |
| A \to B | !A \multimap B |
| \neg A | ?A^\perp |
| \top | !1 |
| \bot | ?\bot |
This embedding is faithful: a formula A is provable in classical sequent calculus if and only if !!A is provable in linear logic, with the translation preserving cut-elimination and yielding cut-free proofs. For first-order extensions, the universal quantifier \forall x . A translates to ! \forall x . !A, enabling the reuse of universally quantified assumptions in a manner analogous to contraction, while the existential \exists x . A maps to ? \exists x . ?A.[4]

Intuitionistic logic embeds into linear logic in a polarized fashion, distinguishing between linear and reusable resources without requiring full double negation for classicality. The Girard translation uses the recursive mapping ^\circ, where assumptions are prefixed with !, and connectives are: intuitionistic conjunction A \land B to A^\circ \& B^\circ (additive with, allowing shared reusable context), and disjunction A \lor B to !A^\circ \oplus !B^\circ (additive plus with ! on branches, capturing proof choice), reflecting the constructive nature of intuitionistic disjunction. The key distinction lies in implication: linear implication A \multimap B preserves resource sensitivity for intuitionistic contexts, while intuitionistic implication A \to B translates to !A^\circ \multimap B^\circ, allowing contraction on the antecedent.[32][4]

This intuitionistic encoding preserves the Brouwer-Heyting-Kolmogorov (BHK) interpretation, where proofs correspond to constructive methods: a proof of A \to B yields a function transforming proofs of A (reusable via !) into proofs of B, maintaining the resource-aware computation inherent to linear logic.[4]

The intuitionistic embedding is sound and complete with respect to provability: an intuitionistic formula A is provable if and only if its linear translation !A^\circ is provable in the intuitionistic fragment of linear logic, and the translation admits a cut-free characterization.
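As a sketch, the connective table above can be written as a recursive function over a toy formula representation. This Python fragment is illustrative only: the tuple tags ("and", "tensor", "lollipop", and so on) are ad-hoc names, not a standard encoding.

```python
def translate(f):
    """Map a classical formula to its linear-logic image, per the table:
    A∧B ↦ !A ⊗ !B,  A∨B ↦ ?A ⅋ ?B,  A→B ↦ !A ⊸ B,  ¬A ↦ ?(A^⊥)."""
    if isinstance(f, str):          # atomic proposition
        return f
    op, *args = f
    if op == "and":
        a, b = map(translate, args)
        return ("tensor", ("!", a), ("!", b))
    if op == "or":
        a, b = map(translate, args)
        return ("par", ("?", a), ("?", b))
    if op == "implies":
        a, b = map(translate, args)
        return ("lollipop", ("!", a), b)
    if op == "not":
        return ("?", ("dual", translate(args[0])))
    raise ValueError(f"unknown connective: {op}")

# A → (A ∧ A): the classical contraction that fails linearly becomes
# expressible because ! reintroduces controlled duplication.
print(translate(("implies", "A", ("and", "A", "A"))))
# ('lollipop', ('!', 'A'), ('tensor', ('!', 'A'), ('!', 'A')))
```

The translated formula makes the resource bookkeeping explicit: every classical reuse of an assumption surfaces as an occurrence of the ! modality.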
Advanced Topics
Remarkable Formulas and Examples
One notable example in linear logic is Peirce's law, expressed as ((A \multimap B) \multimap A) \multimap A, which is provable in classical logic but not in the multiplicative fragment of linear logic without the use of exponentials.[2] To derive it in full linear logic, the exponential modalities ! and ? are required to simulate the necessary resource duplication and discard, highlighting the resource-sensitive nature of the logic.[33]

The linear distribution (weak distributivity) theorem provides another key illustration of linear logic's structure, stating that A \otimes (B \parr C) \vdash (A \otimes B) \parr C, while the converse does not hold.[2] This asymmetry underscores the distinction between the multiplicative connectives tensor (\otimes) and par (\parr): tensor distributes over par only in this weakened, one-directional form, reflecting controlled resource parallelism without unrestricted sharing.[34]

A fundamental example involving exponentials is the dereliction rule, which establishes !A \multimap A, permitting the use of a reusable resource as a linear one in a controlled manner.[2] Conversely, contraction is not provable without exponentials: the sequent A \vdash A \otimes A is not derivable, since producing two copies from a single resource violates linearity unless A is prefixed by !, preventing arbitrary resource duplication and emphasizing linear logic's substructural constraints.[7]

The exponentials further simulate S4 modal logic, where ! corresponds to necessity (\Box) and ?
to possibility (\Diamond), with promotion and dereliction rules mirroring the reflexive and transitive properties of S4 accessibility.[35] This embedding allows linear proofs to model persistent necessities and storable possibilities, as in !A \multimap \Box A via promotion.[36]

In quantum protocols, linear logic has been applied to model resource consumption, such as in quantum teleportation, where the cut rule represents entanglement-based state transfer as a linear interaction.[37] Recent work extends this to intuitionistic multiplicative-additive linear logic (IMALL) with mixed-state modalities, encoding teleportation as a proof of (A \otimes B) \multimap C, preserving quantum no-cloning via linearity.[38]

Ludics, Girard's game semantics for linear logic introduced in the 2000s, provides interactive examples of proofs as strategies in arenas, updated in the 2020s to handle focalization.[39] For instance, the axiom A \multimap A becomes a basic strategy in an arena, while exponentials model infinite plays, illustrating non-deterministic resource games without classical completeness assumptions.[40]
Decidability and Computational Complexity
The provability problem for full propositional linear logic, which includes the exponential modalities, is undecidable. This undecidability arises because the exponentials enable the encoding of classical propositional logic and, more powerfully, the simulation of Turing machines or counter machines, allowing the reduction of the halting problem to linear logic provability.[41][42]

In contrast, the multiplicative-additive fragment of linear logic (MALL), which excludes exponentials but includes both multiplicative and additive connectives, is decidable, with the provability problem being PSPACE-complete. This complexity result stems from reductions to alternating Turing machines and space-bounded computations, establishing both hardness and membership in PSPACE via tableau-like methods.[7][41]

The elementary linear logic fragment (ELL), a restricted variant introduced by Girard, bounds the computational power to the elementary time complexity class through level annotations on formulas that limit the depth of exponential contractions.
A related system, light linear logic (LLL), further refines these bounds to characterize exactly the polynomial-time computable functions, with proofs normalizing in polynomial time due to controlled resource usage in the sequent calculus.[43][19]

For bounded fragments of linear logic, such as those with a fixed proof depth or restricted occurrences of connectives, decidability is achieved through translations to automata-based models, like pushdown or alternating automata, which enable effective model-checking procedures.[44][45]

The complexity of cut-elimination in linear logic varies significantly between proof representations: in the sequent calculus, it can require exponential time and space due to repeated duplications from contractions, whereas in proof nets, the process is polynomial-time efficient, often linear for the multiplicative fragment, by avoiding redundant recomputations through parallel reductions.[7][22]

Decision procedures for decidable fragments like MALL often employ constraint graphs to model the flow of resources and polarities, enabling satisfiability checks through graph algorithms that detect cycles or violations in linear time relative to the formula size.[41][46]
Variants and Extensions
Substructural Variants
Substructural variants of linear logic modify the strict resource management rules of the core system, which requires resources to be used exactly once without weakening or contraction. These variants relax aspects of structural rules to allow varying degrees of resource reusability or discardability while preserving the foundational multiplicative and additive connectives. Such adaptations enable finer control over logical resources in applications demanding partial relevance or bounded reuse.

Relevant logic, a prominent substructural variant, prohibits weakening but permits contraction, ensuring that hypotheses are used at least once without allowing discard. This contrasts with core linear logic by encoding resource reuse through contraction rather than full exponentials, often using a fusion connective (denoted ◦ or ;) for multiplicative conjunction that enforces relevance by avoiding irrelevant premises. The fusion rule allows importation and exportation relative to implication, distinguishing it from the tensor ⊗ in linear logic, where resources must be exactly consumed. Seminal developments trace to J. Michael Dunn's work in the 1970s, building on earlier efforts by Alan Ross Anderson and Nuel D. Belnap to formalize relevance in entailment, with integration into linear frameworks occurring in the 1990s through relational semantics that extend the implication-fusion fragment.[47][48][49]

Affine logic, in contrast, allows weakening but forbids contraction, permitting resources to be discarded but not duplicated or reused. This makes it suitable for scenarios where resources can be ignored but not multiplied, differing from core linear logic's prohibition on both rules; for instance, the implication A -o B in affine logic lacks the ! modality for reuse, relying instead on weakening to drop unused assumptions in sequents.
Syntactically, affine systems extend multiplicative affine logic (without exponentials) by adding discard rules, and they can be mapped to linear logic via translations that treat affine functions as linear with optional consumption.[50][51]

Bounded linear logic introduces resource bounds on exponentials to control computational complexity, restricting the ! operator to a finite number of uses via an indexed modality denoting that A may be used up to k times. This variant modifies core linear logic's unbounded exponentials by indexing modalities with natural numbers, ensuring polynomial-time decidability for proofs within fixed bounds; for example, contraction and weakening are limited to the bounded instances, preventing unbounded resource duplication. Developed by Jean-Yves Girard with Andre Scedrov and Philip Scott in the early 1990s, it provides a modular framework for complexity classes like PTime through level-restricted proofs.[52][53]

Ordered linear logic imposes a partial order on proof contexts to track dependencies among resources, refining intuitionistic linear logic with ordered hypotheses that must be accessed sequentially. Unlike core linear logic's unordered multisets, contexts here are sequences or posets, where rules respect the order (e.g., no commuting unrelated assumptions), enabling modeling of ordered data structures or constrained access. This variant emerged in the late 1990s, with key formalizations supporting applications in logic programming by treating ordered contexts as logical queues.[54][55]

Fusion logics represent post-2010 hybrids blending relevant and linear features, incorporating fusion connectives from relevant logic into linear frameworks to balance at-least-once usage with exact consumption in substructural settings. These systems extend the basic implication-fusion fragment of relevant logics with linear additives and multiplicatives, allowing proof-theoretic characterizations that include fission (for disjunctive fusion) while maintaining cut-elimination properties.
Recent work has explored their relational semantics to unify substructural behaviors.[56][57]
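The variants above differ chiefly in how many uses of a hypothesis they license. A toy dynamic check makes the contrast concrete (Python, illustrative only; real substructural systems enforce these disciplines statically in the type system): affine resources may go unused, bounded resources allow up to k uses, and any use beyond the bound is rejected.

```python
class Resource:
    """Use-tracked value allowing at most `limit` uses (affine when
    limit=1, bounded-! style when limit=k). A fully linear discipline
    would additionally require that the resource is actually used
    before going out of scope; this dynamic sketch omits that check."""
    def __init__(self, value, limit=1):
        self.value = value
        self.remaining = limit

    def use(self):
        if self.remaining == 0:
            # No contraction: use beyond the bound is rejected.
            raise RuntimeError("resource exhausted")
        self.remaining -= 1
        return self.value

# Affine: at most one use; discarding (never calling use) is permitted.
key = Resource("key", limit=1)
assert key.use() == "key"

# Bounded modality: up to k uses, as in bounded linear logic's indexed !.
code = Resource("code", limit=3)
assert [code.use() for _ in range(3)] == ["code", "code", "code"]
```

A relevant-logic discipline would invert the affine one: it would allow `use` to be called repeatedly but flag a resource that was never used at all.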
Applications in Computer Science
Linear logic has significantly influenced type systems in programming languages by providing a foundation for resource-sensitive typing, where values must be used exactly once to prevent unintended side effects and aliasing. In functional languages like Clean, uniqueness types enforce that unique references to data are not aliased, ensuring destructive updates without copying and enabling efficient compilation; this approach draws directly from linear logic's restriction on resource duplication and deletion. Similarly, Mercury employs uniqueness typing to achieve the same goals, supporting strict evaluation and garbage collection optimizations while maintaining type safety. Rust's ownership model, introduced in the 2010s, extends these ideas with borrowing rules to manage memory at compile time, preventing data races and null pointer dereferences; its design is explicitly inspired by linear logic, treating ownership transfers as substructural uses of resources (affine in practice, since values may be dropped).

In concurrency, linear logic enables precise modeling of communication protocols through encodings of process calculi. Kohei Honda's 1993 work demonstrated how the π-calculus can be encoded in linear logic, using linear types to track channel usage and ensure deadlock-free interactions by consuming channels after single use. This foundation evolved into session types, where linear logic propositions correspond to communication protocols via the Curry-Howard isomorphism; Simon Gay and colleagues in the 2000s developed asynchronous session types grounded in linear type theory, allowing subtyping for protocol refinement while guaranteeing session fidelity and progress in distributed systems.

For verification, linear logic supports formal analysis of security protocols by modeling resources like nonces and keys as linear hypotheses that cannot be duplicated.
Marco Bozzano's 2002 framework automates protocol verification in linear logic using constraint logic programming, checking secrecy and authenticity properties for protocols like Needham-Schroeder by searching for cut-free proofs. Extensions to quantum protocols leverage linear logic's resource management for no-cloning constraints; Samson Abramsky's 2020s categorical approaches to quantum computation interpret protocols like teleportation in monoidal categories akin to linear logic models, ensuring reversible resource flows in quantum game semantics.

Game semantics, pioneered by Martin Hyland and Luke Ong in 1994, interprets linear logic proofs as winning strategies in two-player games, providing full completeness for multiplicative linear logic and bridging logic with computational processes. In AI planning, linear logic models resource consumption in domain-independent planners.

Tools implementing linear logic include Forum, a multiple-conclusion logic programming system extending λProlog to full linear logic, enabling declarative specification and automated proof search for concurrent systems. Ludics, introduced by Jean-Yves Girard in 2001, offers an interactive framework for implicit computational content in proofs, representing linear logic designs as strategies in arenas for analyzing focalization and long proofs without explicit syntax.

Recent applications extend to blockchain, where the Move language for platforms like Sui and Aptos (2024 updates) uses linear types to track digital assets in smart contracts, preventing double-spending by enforcing single-use resources.
In machine learning, linear types manage tensor flows for memory efficiency; a 2025 analysis of JAX autodiff frames automatic differentiation as linear logic reductions, ensuring no intermediate tensor duplication during gradient computation.[12]

Via the Curry-Howard correspondence, linear logic proofs correspond to type-safe programs, as in session-typed calculi where protocol adherence is proven by type derivation, guaranteeing deadlock-freedom and resource safety without runtime checks.
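The session-typed discipline can be sketched dynamically. In this toy Python model (the `Chan` class and its protocol encoding are illustrative, not a real session-types library), each `send` consumes the current endpoint and returns a continuation, so reusing a stale endpoint fails at run time; a static session-type system rules out the same misuse at compile time.

```python
class Chan:
    """One endpoint of a channel following a fixed protocol, given as a
    list of remaining actions. Each operation consumes this endpoint
    and returns the continuation channel for the rest of the protocol."""
    def __init__(self, proto, log):
        self.proto, self.log, self.used = proto, log, False

    def send(self, msg):
        if self.used or not self.proto or self.proto[0] != "send":
            raise RuntimeError("protocol violation: endpoint reused or wrong action")
        self.used = True                       # this endpoint is consumed
        self.log.append(msg)
        return Chan(self.proto[1:], self.log)  # continuation channel

def run_protocol():
    log = []
    c = Chan(["send", "send"], log)
    c2 = c.send("hello")   # c is now consumed; only c2 may be used
    c2.send("world")
    return log

print(run_protocol())  # ['hello', 'world']
```

Threading the continuation through each call is the dynamic analogue of the linear typing judgment: the old channel variable is dead after use, mirroring how session-typed calculi consume a channel's type with every communication step.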