
Linear logic

Linear logic is a substructural logic introduced by French mathematician Jean-Yves Girard in 1987 as a refinement of classical and intuitionistic logic, emphasizing the treatment of logical hypotheses as consumable resources that must be used exactly once, without allowing their free duplication (contraction) or deletion (weakening). This resource-aware approach decomposes traditional logical connectives into more granular ones, including multiplicative operators such as the tensor (⊗) for parallel composition and linear implication (⊸) for sequential resource consumption, additive connectives like "with" (&) for additive conjunction and "plus" (⊕) for additive disjunction, and exponential modalities ! ("of course") and ? ("why not") that permit controlled reuse or erasure of resources under specific conditions. Motivated by limitations in classical logic—particularly the unrestricted handling of assumptions, which fails to capture resource consumption, state transitions, or physical constraints—linear logic builds on Gentzen's sequent calculus to provide a framework where proofs model dynamic processes like chemical reactions or economic exchanges. Since its inception, linear logic has profoundly influenced proof theory and computer science, enabling resource-sensitive models in concurrent and functional programming, type systems via the Curry-Howard correspondence, and categorical semantics using monoidal categories to interpret proofs as computations. Its duality via linear negation (⊥) and focus on cut-elimination ensure strong normalization properties, making it a foundational tool for studying complexity, ludics, and non-commutative logics in fields ranging from theoretical computer science to linguistics.

Overview and History

Core Concepts and Motivation

Linear logic is a substructural logic developed by Jean-Yves Girard in his seminal 1987 paper, designed to refine classical and intuitionistic logics by explicitly accounting for resource consumption in proofs. Unlike traditional logics, it treats hypotheses as consumable resources that must be used exactly once, unless explicitly marked for reuse via modalities, thereby modeling scenarios where assumptions represent limited assets rather than abstract truths. This approach emerged from Girard's efforts to address foundational challenges in proof theory, such as the non-deterministic nature of cut-elimination in classical sequent calculus, and in denotational semantics, where overly symmetric structures obscured computational content. The primary motivation for linear logic stems from the inadequacies of classical logic in handling resource scarcity. In classical systems, structural rules like weakening (discarding unused assumptions) and contraction (duplicating assumptions for reuse) allow premises to be treated as infinitely available, which fails to capture real-world dynamics such as economic transactions or computational state changes where resources are depleted upon use. By decoupling these structural rules from the logical connectives—effectively suspending weakening and contraction by default—linear logic enforces precise resource accounting, enabling a more faithful representation of state change, concurrency, and bounded computation. This innovation not only resolves theoretical difficulties but also bridges logic with computation, providing a foundation for programming languages that track resource usage. An illustrative example highlights this departure: in classical logic, the sequent A \vdash A \land A is derivable, as the single assumption A can be contracted to derive two instances via the conjunction introduction rule. In linear logic, however, the analogous sequent is invalid without additional modalities, requiring two distinct resources to produce the conjunctive conclusion, thus emphasizing the "linear" consumption of each hypothesis exactly once.
Girard's framework, originating in his 1987 work, was thus driven by the need for a logic that discriminates between stable truths and transient resources, laying the groundwork for applications in semantics and verification.
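The contraction example above can be made concrete with a tiny decision procedure. The sketch below (an illustrative encoding of our own, not from Girard's paper) decides sequents in the minimal {atom, ⊗} fragment of intuitionistic linear logic: the axiom demands that the context be exactly the goal atom (no weakening), and the right-tensor rule must split the context between the two components (no contraction). With two copies of A the analogue of A ⊢ A ∧ A succeeds; with one copy it fails.

```python
from collections import Counter
from itertools import combinations

# Formulas: atoms are strings; tensor is ("⊗", A, B).
def provable(ctx, goal):
    """Decide ctx ⊢ goal in the tiny {atom, ⊗} fragment of
    intuitionistic linear logic; ctx is a Counter multiset."""
    if isinstance(goal, str):
        # axiom A ⊢ A: context must be exactly {A} — no weakening
        return ctx == Counter([goal])
    _, a, b = goal
    # right-tensor: split the context into two disjoint parts,
    # one proving each component — no contraction, no sharing
    items = list(ctx.elements())
    for r in range(len(items) + 1):
        for idx in combinations(range(len(items)), r):
            left = Counter(items[i] for i in idx)
            if provable(left, a) and provable(ctx - left, b):
                return True
    return False

print(provable(Counter(["A", "A"]), ("⊗", "A", "A")))  # True: two copies suffice
print(provable(Counter(["A"]), ("⊗", "A", "A")))       # False: no contraction
```

The search is exponential in the context size, which is harmless at this scale; it exists only to make the "used exactly once" discipline executable.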

Historical Development

Linear logic was introduced by Jean-Yves Girard in his seminal 1987 paper, where he proposed it as a refinement of classical and intuitionistic logic, motivated by issues in bounded arithmetic and the desire to model resource consumption more precisely. This work emerged from Girard's earlier research on System F, the polymorphic lambda calculus, whose semantical analysis highlighted limitations in traditional logics regarding structural rules like contraction and weakening. Girard's introduction also drew on connections to category theory, particularly monoidal categories, to provide a foundational framework for the logic's multiplicative and additive connectives. In the years that followed, developments focused on proof representation and applications. Girard introduced proof nets in his 1987 paper as a graphical alternative to sequent derivations, and they were refined by Vincent Danos and Laurent Regnier in 1989 for multiplicative linear logic, enabling more efficient parallel proof normalization. Concurrently, linear logic influenced concurrency models; Samson Abramsky and Radha Jagadeesan developed game semantics in the early 1990s, interpreting proofs as winning strategies in games that captured the logic's resource-sensitive nature, bridging linear logic with concurrency models such as Petri nets. These advances, involving key figures like François Lamarche, who extended the logic to intuitionistic variants in 1996, solidified linear logic's role in theoretical computer science. The 2000s saw expansions into programming languages via the Curry-Howard isomorphism, where linear types—ensuring resources are used exactly once—were integrated into type systems for memory safety and concurrency. This influence appeared in several research languages and, later, in Rust's borrow checker in the 2010s, which enforces affine types inspired by linear logic to prevent data races without garbage collection. Girard's later work on ludics in 2001 further extended these ideas, modeling interactive proofs as games to address non-termination in computation. Post-2010 advances integrated linear logic with emerging fields.
In homotopy type theory, influenced by Vladimir Voevodsky's univalent foundations, linear variants emerged to handle higher-dimensional structures; for instance, Elies Harington's 2024 work modeled linear logic using polynomials in HoTT for synthetic reasoning about resources. Applications in quantum computing grew, with linear logic modeling quantum resources and error correction; a 2024 paper by Ehrhard, Tortora de Falco, and Dal Lago used multiplicative proof nets for quantum error-correcting codes, linking logical reductions to quantum operations. In verification, recent proceedings highlighted linear logic's utility; for example, a 2023 paper by Horne and Padovani provided a logical account of session types, aiding verification of AI protocols with resource constraints. In 2025, further applications appeared, such as Giusti and Pagani's encoding in JAX via a linear type discipline, advancing resource-aware computational models. These developments underscore linear logic's enduring impact through 2025.

Syntax and Connectives

Basic Syntax and Formulas

Linear logic employs a formal language to express resource-sensitive reasoning, where the propositional fragment forms the foundational syntax. Atomic propositions, denoted by variables such as p, q, r, serve as the basic building blocks, along with their linear negations p^\perp, q^\perp, r^\perp. Formulas are inductively constructed by applying the connectives of linear logic to these atoms and constants like the multiplicative units 1 and \bot. This construction emphasizes the absence of structural rules for contraction and weakening, ensuring that assumptions and conclusions are treated as consumable resources. Sequent notation in linear logic takes the form \Gamma \vdash \Delta, where \Gamma (the antecedent) and \Delta (the succedent) are finite multisets of formulas rather than sets. The use of multisets is crucial, as it permits multiple instances of the same formula, thereby tracking the exact count of resources without allowing implicit duplication or deletion. For example, the multiset \{ A, A, B \} distinguishes two occurrences of A from a single one, reflecting the logic's sensitivity to multiplicity. The first-order extension of linear logic incorporates quantifiers \forall x . A and \exists x . A, where x is a variable and A is a formula possibly containing free occurrences of x. These quantifiers adhere to De Morgan duality under linear negation, such that (\forall x . A)^\perp = \exists x . A^\perp and (\exists x . A)^\perp = \forall x . A^\perp. Quantification imposes restrictions on eigenvariables: in derivations, the variable introduced by a quantifier must not appear free in the surrounding sequent, ensuring the soundness of the quantifier rules. Illustrative examples highlight the distinct flavor of linear formulas. Consider the par connective A \parr B, which encodes a form of disjunction where the choice between A and B consumes resources without permitting classical-style duplication, in contrast to the classical A \lor B.
Similarly, linear implication A \multimap B demands the full consumption of A to produce B, underscoring the resource-aware structure. Higher-order linear logic extends this syntax to support functional abstraction and application, incorporating terms such as \lambda x . M for linear functions, where M is a term, and applications M \, N. These terms are typed over linear formulas, enabling the encoding of higher-order functions in type systems. The linear negation provides a duality between positive and negative connectives, foundational for these extensions.
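The inductive formula syntax and its De Morgan duality can be rendered as a small datatype. Below is a minimal sketch (the tuple encoding and names are our own, covering only the propositional connectives): atoms carry a polarity flag, and linear negation is pushed down to the atoms, making involutivity immediate.

```python
# Linear-logic formulas as nested tuples: atoms are
# ("atom", name, is_positive); compound formulas are (connective, ...).
DUALS = {"⊗": "⅋", "⅋": "⊗", "⊕": "&", "&": "⊕", "!": "?", "?": "!"}

def neg(f):
    """Linear negation, pushed down to atoms via De Morgan duality."""
    if f[0] == "atom":
        _, name, pos = f
        return ("atom", name, not pos)          # p ↦ p⊥ and back
    return (DUALS[f[0]],) + tuple(neg(sub) for sub in f[1:])

p = ("atom", "p", True)
q = ("atom", "q", True)
f = ("⊗", p, ("!", q))
print(neg(neg(f)) == f)   # True: negation is involutive
print(neg(f)[0])          # ⅋ : the dual of ⊗, as in (∀/∃-style) De Morgan laws
```

Quantifiers would extend the table with a ∀/∃ pair in exactly the same way.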

Connectives, Duality, and Polarity

Linear logic employs a refined set of connectives that distinguish between different modes of combination and consumption, categorized into multiplicative, additive, and exponential types. These connectives enable precise control over the use of hypotheses and conclusions in proofs, avoiding the unrestricted duplication or discarding typical of classical logic. The multiplicative connectives are the tensor \otimes, interpreted as multiplicative conjunction, and the par \parr, interpreted as multiplicative disjunction. The tensor \otimes combines resources in parallel, requiring both components to be used exactly once without sharing, as in proofs where premises from separate contexts yield a joint conclusion. For instance, if \Gamma \vdash A and \Gamma' \vdash B, then \Gamma, \Gamma' \vdash A \otimes B. The par \parr allows a conclusion to be split across independent contexts without sharing resources, as in \Gamma, A \vdash \Delta and \Gamma', B \vdash \Delta' together implying \Gamma, \Gamma', A \parr B \vdash \Delta, \Delta'. These connectives emphasize focused usage, where each hypothesis is consumed precisely once. The additive connectives are the plus \oplus, denoting additive disjunction, and the with \&, denoting additive conjunction. The plus \oplus represents a choice between alternatives without resource duplication: on the left, the same context must support both alternatives, since either may occur; for example, \Gamma, A \vdash \Delta and \Gamma, B \vdash \Delta yield \Gamma, A \oplus B \vdash \Delta. In contrast, the with \& provides simultaneous availability of options within a shared context, allowing the conclusion to draw from both sides independently, as in \Gamma \vdash A, \Delta and \Gamma \vdash B, \Delta implying \Gamma \vdash A \& B, \Delta. These additives facilitate branching without multiplicative parallelism, preserving resource integrity by avoiding replication across choices.
The exponential connectives introduce modalities for handling reusable or discardable resources: the of course !A, which promotes a formula to allow contraction and weakening, enabling multiple uses of A as if it were a classical assumption; and the why not ?A, which permits dereliction, allowing A to be ignored or used optionally. These exponentials restore some classical behavior selectively, with !A enabling reuse (contraction) and ?A allowing discard (weakening), thus balancing linearity with flexibility. Duality in linear logic follows De Morgan-like relations under linear negation \neg, which is involutive (\neg\neg A = A) and swaps positive and negative polarities. Each connective has a dual counterpart, ensuring that proofs of dual formulas correspond symmetrically. The full set of dual pairs is as follows:
Connective | Symbol | Dual | Dual Symbol | Duality Relation
Tensor | A \otimes B | Par | \neg A \parr \neg B | \neg(A \otimes B) = \neg A \parr \neg B
Par | A \parr B | Tensor | \neg A \otimes \neg B | \neg(A \parr B) = \neg A \otimes \neg B
Plus | A \oplus B | With | \neg A \& \neg B | \neg(A \oplus B) = \neg A \& \neg B
With | A \& B | Plus | \neg A \oplus \neg B | \neg(A \& B) = \neg A \oplus \neg B
Of course | !A | Why not | ?(\neg A) | \neg(!A) = ?(\neg A)
Why not | ?A | Of course | !(\neg A) | \neg(?A) = !(\neg A)
Multiplicative unit | 1 | Bottom | \bot | \neg 1 = \bot
Additive unit | \top | Zero | 0 | \neg \top = 0
These relations preserve the logical structure, with units transforming accordingly (\neg \bot = 1, \neg 0 = \top). Polarity assigns each formula a positive or negative status to structure proofs, distinguishing non-invertible (positive) from invertible (negative) rules and enabling focused proof search. Positive formulas include positive atoms, P \oplus P, P \otimes P, existentials \exists x . P, the units 1 and 0, and !N (where N is negative); they support focalization, bundling non-invertible steps into a single phase. Negative formulas encompass negative atoms, implications N \multimap N, N \parr N, N \& N, universals \forall x . N, the units \bot and \top, and ?P (where P is positive); their rules are invertible, allowing backward propagation. Duality interchanges polarities: the dual of a positive formula is negative, and vice versa, as in \neg(A \otimes B) = \neg A \parr \neg B shifting from positive to negative. This distinction organizes proofs into synchronous (positive, non-invertible) and asynchronous (negative, invertible) phases, enhancing the efficiency of proof search. In variants such as affine logic, weakening is permitted while contraction remains forbidden, so resources may be discarded but never duplicated.
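The polarity classification just described is a finite table, and can be written down directly. The sketch below (our own tuple encoding: atoms carry an explicit polarity flag, units are 0-ary tuples) assigns each propositional connective its focusing polarity.

```python
# Polarity classification for focusing (propositional fragment only).
TABLE = {"⊗": "pos", "⊕": "pos", "!": "pos", "1": "pos", "0": "pos",
         "⅋": "neg", "&": "neg", "?": "neg", "⊥": "neg", "⊤": "neg"}

def polarity(f):
    """Return 'pos' or 'neg' for a formula encoded as nested tuples:
    atoms are ("atom", name, is_positive), units are ("1",), ("0",), etc."""
    if f[0] == "atom":
        return "pos" if f[2] else "neg"
    return TABLE[f[0]]

# duality interchanges polarities: ⊗ is positive, its dual ⅋ negative
print(polarity(("⊗", ("atom", "p", True), ("atom", "q", True))))  # pos
print(polarity(("?", ("atom", "p", True))))                       # neg
print(polarity(("1",)))                                           # pos
```

Only the outermost connective matters, which is exactly why focusing can bundle same-polarity layers into one phase.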

Proof Theory

Sequent Calculus Rules

Linear logic is formalized in the sequent calculus using two-sided sequents of the form \Gamma \vdash \Delta, where \Gamma and \Delta are finite multisets of formulas representing assumptions and conclusions, respectively; this framework treats formulas as resources without implicit duplication or discarding, in contrast to classical logic. The system includes an identity axiom and a cut rule for proof composition, alongside introduction rules for each connective, grouped into multiplicative, additive, and exponential categories; negation provides duality between left and right rules. The identity axiom establishes the base case for provability: \frac{}{A \vdash A} \quad \text{(Id)} The cut rule allows eliminating a formula by composing two sequents: \frac{\Gamma_1 \vdash A, \Delta_1 \quad \Gamma_2, A \vdash \Delta_2}{\Gamma_1, \Gamma_2 \vdash \Delta_1, \Delta_2} \quad \text{(Cut)} where commas denote multiset union. Negation rules exchange formulas across the sequent turnstile using the linear negation A^\perp: \frac{\Gamma \vdash A, \Delta}{\Gamma, A^\perp \vdash \Delta} \quad \text{(L$\perp$)} \qquad \frac{\Gamma, A \vdash \Delta}{\Gamma \vdash A^\perp, \Delta} \quad \text{(R$\perp$)} Multiplicative connectives model resource consumption and production in parallel.
The tensor \otimes (multiplicative conjunction) right rule combines independent proofs: \frac{\Gamma_1 \vdash A, \Delta_1 \quad \Gamma_2 \vdash B, \Delta_2}{\Gamma_1, \Gamma_2 \vdash A \otimes B, \Delta_1, \Delta_2} \quad \text{(R$\otimes$)} Its dual, the par \parr (multiplicative disjunction), mirrors this on the left side: \frac{\Gamma_1, A \vdash \Delta_1 \quad \Gamma_2, B \vdash \Delta_2}{\Gamma_1, \Gamma_2, A \parr B \vdash \Delta_1, \Delta_2} \quad \text{(L$\parr$)} The left \otimes and right \parr rules simply regroup formulas within a multiset: \frac{\Gamma, A, B \vdash \Delta}{\Gamma, A \otimes B \vdash \Delta} \quad \text{(L$\otimes$)} \qquad \frac{\Gamma \vdash A, B, \Delta}{\Gamma \vdash A \parr B, \Delta} \quad \text{(R$\parr$)} The units are \mathbf{1} for \otimes (right rule: \vdash \mathbf{1}; left rule: from \Gamma \vdash \Delta infer \Gamma, \mathbf{1} \vdash \Delta) and \bot for \parr (left rule: \bot \vdash; right rule: from \Gamma \vdash \Delta infer \Gamma \vdash \bot, \Delta). Unlike classical sequent calculus, no global weakening or contraction applies outside the exponentials. Additive connectives handle choices within a shared context. The plus \oplus (additive disjunction) right rule selects one branch: \frac{\Gamma \vdash A_i, \Delta}{\Gamma \vdash A_1 \oplus A_2, \Delta} \quad \text{(R$\oplus$, } i \in \{1,2\}\text{)} Its dual, the with \& (additive conjunction), likewise selects one branch on the left: \frac{\Gamma, A_i \vdash \Delta}{\Gamma, A_1 \& A_2 \vdash \Delta} \quad \text{(L\&, } i \in \{1,2\}\text{)} The left \oplus and right \& rules synchronize contexts, requiring both premises over the same \Gamma and \Delta: \frac{\Gamma, A \vdash \Delta \quad \Gamma, B \vdash \Delta}{\Gamma, A \oplus B \vdash \Delta} \quad \text{(L$\oplus$)} \qquad \frac{\Gamma \vdash A, \Delta \quad \Gamma \vdash B, \Delta}{\Gamma \vdash A \& B, \Delta} \quad \text{(R\&)} The units are \top for \& (right rule: \Gamma \vdash \top, \Delta, with no premises) and 0 for \oplus (left rule: \Gamma, 0 \vdash \Delta, with no premises). These rules enforce that additives operate over identical contexts, modeling choice without resource duplication. Exponential connectives ! (of course) and ?
(why not) recover structural rules locally. For !A, the rules are:
  • Dereliction: \frac{\Gamma, A \vdash \Delta}{\Gamma, !A \vdash \Delta} \quad \text{(L! der.)}
  • Weakening: \frac{\Gamma \vdash \Delta}{\Gamma, !A \vdash \Delta} \quad \text{(L! weak.)}
  • Contraction: \frac{\Gamma, !A, !A \vdash \Delta}{\Gamma, !A \vdash \Delta} \quad \text{(L! contr.)}
  • Promotion: \frac{! \Gamma \vdash A, ? \Delta}{! \Gamma \vdash !A, ? \Delta} \quad \text{(R!)} (where ! \Gamma means all formulas in \Gamma are !-formulas)
The dual ?A rules are symmetric, with promotion \frac{! \Gamma, A \vdash ? \Delta}{! \Gamma, ?A \vdash ? \Delta} \quad \text{(L?)}. These confine contraction and weakening to modal contexts, enabling reusable resources while preserving linearity elsewhere. For first-order linear logic, quantifiers follow standard sequent rules with eigenvariable conditions: the variable y must not occur free in the lower sequent's context. The universal quantifier \forall x . A rules are: \frac{\Gamma, A[t/x] \vdash \Delta}{\Gamma, \forall x . A \vdash \Delta} \quad \text{(L$\forall$)} \qquad \frac{\Gamma \vdash A[y/x], \Delta}{\Gamma \vdash \forall x . A, \Delta} \quad \text{(R$\forall$)} \ (y \notin \text{FV}(\Gamma, \Delta)) Existential \exists x . A rules are dual: \frac{\Gamma, A[y/x] \vdash \Delta}{\Gamma, \exists x . A \vdash \Delta} \quad \text{(L$\exists$)} \ (y \notin \text{FV}(\Gamma, \Delta)) \qquad \frac{\Gamma \vdash A[t/x], \Delta}{\Gamma \vdash \exists x . A, \Delta} \quad \text{(R$\exists$)} where t is a term and FV denotes free variables; these conditions ensure proper substitution without resource leakage. Cut-elimination holds in linear logic: any proof with cuts can be transformed into an equivalent cut-free proof via a terminating rewriting procedure, preserving sequent validity; this theorem, proved by induction on proof complexity, implies strong normalization and has implications for computational boundedness, as cut reduction simulates resource-aware evaluation. Light linear logic refines the exponentials to bound resources for polynomial-time computability, stratifying proofs by depth and introducing a neutral modality \S alongside a weakened ! with restricted structural rules. Key changes include excluding direct dereliction and restricting promotion so that cut-elimination can be carried out in polynomial time; the procedure remains available, with cost polynomial in the size of the proof at each fixed depth.
Recent developments, such as generalized bounded linear logic, extend these systems by annotating exponentials with resource bounds (e.g., !_{<p} A) to model finer resource usage while maintaining soundness for polynomial time.
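The multiplicative rules above admit a compact decision procedure for the propositional MLL fragment (units, additives, exponentials, and quantifiers omitted). The sketch below, in an encoding of our own, works on one-sided sequents ⊢ Γ: it exploits that the \parr rule is invertible (so it is applied eagerly), while the \otimes rule must search over all splittings of the context, exactly as in (R⊗).

```python
from itertools import combinations

# One-sided MLL sequents ⊢ Γ: literals are ("+", p) / ("-", p);
# compound formulas are ("⊗", A, B) and ("⅋", A, B).
def is_literal(f):
    return f[0] in ("+", "-")

def provable(gamma):
    """Exhaustive proof search for unit-free propositional MLL."""
    gamma = tuple(gamma)
    if len(gamma) == 2 and all(map(is_literal, gamma)):
        (s1, a1), (s2, a2) = gamma          # axiom ⊢ p, p⊥
        return a1 == a2 and s1 != s2
    for i, f in enumerate(gamma):
        rest = gamma[:i] + gamma[i + 1:]
        if f[0] == "⅋":                     # invertible: decompose eagerly
            return provable(rest + (f[1], f[2]))
        if f[0] == "⊗":                     # split the remaining context
            n = len(rest)
            for r in range(n + 1):
                for idx in combinations(range(n), r):
                    left = tuple(rest[j] for j in idx) + (f[1],)
                    right = tuple(rest[j] for j in range(n) if j not in idx) + (f[2],)
                    if provable(left) and provable(right):
                        return True
    return False

print(provable([("⅋", ("-", "p"), ("+", "p"))]))   # True: ⊢ p⊥ ⅋ p (identity)
print(provable([("⅋", ("+", "p"), ("+", "p"))]))   # False: would need contraction
```

Note that ⊢ p⊥, q⊥, p, q is rejected: without the mix rule, two axioms cannot be juxtaposed, which is why A ⊗ B does not entail A ⅋ B.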

Proof Nets and Graphical Proofs

Proof nets provide a graphical, parallel representation of proofs in linear logic, serving as an alternative to the sequential structure of sequent calculus derivations. Introduced by Jean-Yves Girard, they are graphs whose nodes represent formula occurrences, connected by three types of links: axiom links pairing dual atomic formulas, logical links corresponding to the connectives (tensor, par, etc.), and cut links connecting complementary formulas. Unlike sequent proofs, proof nets impose no linear order on inference steps, allowing for a more intrinsic depiction of proof structure that eliminates redundant reorderings. A proof net is constructed by translating a sequent proof into a proof structure, where axiom links form the base and logical links aggregate subproofs according to the connectives. For validity, the resulting proof structure must satisfy the Danos-Régnier correctness criterion: for every choice of "switching" at the par links (selecting one of the two premise edges), the underlying undirected graph must be acyclic and connected, ensuring it corresponds to a unique sequent proof up to rule reordering. This guarantees that the net represents a genuine proof without pathological cycles or disconnected components. One key advantage of proof nets is their support for parallel cut-elimination, where cuts are resolved simultaneously across the graph without sequential dependencies, avoiding the costly rule permutations that sequent calculus normalization must perform. Additionally, since linear logic is contraction-free in its multiplicative fragment, proof nets naturally avoid the duplication issues plaguing classical proofs, enabling efficient verification and well-behaved erasure of unused subproofs. The treatment of exponentials in proof nets addresses resource duplication and erasure via the promotion rule, which encloses subproofs in a "box" to represent the !
modality, allowing controlled contractions within the box while maintaining linearity outside. For !A, contraction links permit multiple uses of the formula, modeled by auxiliary links that replicate connections without violating the correctness criterion. This mechanism ensures that the non-linear behavior of exponentials is localized and graphically explicit. As an illustrative example, consider the proof net for the sequent A \otimes B \vdash A \otimes B, written one-sidedly as \vdash A^\perp \parr B^\perp, A \otimes B, where \parr denotes the multiplicative par connective. The net features an axiom link connecting A^\perp and A, another axiom link for B^\perp and B, a par link merging the A^\perp and B^\perp premises, and a tensor link combining A and B into the conclusion. This structure satisfies the Danos-Régnier criterion because every switching of the par link yields a connected, acyclic graph, visualizing the parallel resource flow without sequential steps. Subsequent developments refined the correctness criteria for broader fragments; in the 1990s, alternative formulations such as Melliès' ribbon-based approach extended validity checks to proof nets with cuts, providing a topological account of proof equivalence. More recently, in the 2020s, extensions to multiplicative-additive linear logic (MALL) proof nets have advanced complexity analysis, with new canonical representations enabling efficient local verification and progress on the complexity of cut-elimination. Proof nets also encode computational content relevant to lambda calculi, where linear terms correspond directly to net structures, facilitating translations that preserve reduction behavior and reveal concurrency in type systems; recent work on linear lambda nets explores these encodings for resource-aware programming models.
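The switching condition can be checked mechanically for small nets. The sketch below (our own encoding, with plain node names rather than a full proof-structure datatype) enumerates every switching and verifies that each yields a tree; the examples test the identity net for ⊢ p⊥ ⅋ p and an incorrect structure that puts a tensor over a single axiom link.

```python
from itertools import product

def danos_regnier(nodes, fixed, switched):
    """Danos-Régnier criterion: for EVERY switching (one premise edge
    kept per ⅋-link), the resulting undirected graph must be a tree.
    fixed: edges always present; switched: pairs of edges, one kept."""
    nodes = list(nodes)
    for choice in product(*switched):
        edges = list(fixed) + list(choice)
        if len(edges) != len(nodes) - 1:    # a tree on n nodes has n-1 edges
            return False
        adj = {v: [] for v in nodes}
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {nodes[0]}, [nodes[0]]
        while stack:                        # connectivity check by DFS
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if len(seen) != len(nodes):
            return False
    return True

# identity net for ⊢ p⊥ ⅋ p: one axiom link, one switched par link
ok = danos_regnier(["p⊥", "p", "par"],
                   fixed=[("p⊥", "p")],
                   switched=[(("par", "p⊥"), ("par", "p"))])
# incorrect structure for ⊢ p⊥ ⊗ p: both tensor edges fixed, forming a cycle
bad = danos_regnier(["p⊥", "p", "ten"],
                    fixed=[("p⊥", "p"), ("ten", "p⊥"), ("ten", "p")],
                    switched=[])
print(ok, bad)   # True False
```

The rejected structure mirrors the fact that ⊢ p⊥ ⊗ p is not provable: a tensor must join two disjoint subproofs, and a single axiom link cannot be split.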

Semantics

Phase and Coherence Semantics

Phase semantics, introduced by Jean-Yves Girard in 1987, provides a foundational provability semantics for linear logic by interpreting formulas as subsets of a commutative monoid equipped with a distinguished subset called the pole. A phase space is defined as a pair (M, \perp), where M is a commutative monoid and \perp \subseteq M is the pole; facts—subsets equal to their biorthogonal, where X^\perp = \{ m \in M \mid \forall n \in X, mn \in \perp \}—capture provable assertions. The multiplicative connectives are interpreted via products: for facts X and Y, the tensor X \otimes Y is \{mn \mid m \in X, n \in Y\}^{\perp\perp}, while the par X \parr Y is dually (X^\perp \otimes Y^\perp)^\perp. Additive connectives follow set-theoretic operations: X \oplus Y = (X \cup Y)^{\perp\perp} and X \& Y = X \cap Y, reflecting additive disjunction and conjunction without resource duplication. The exponential modalities handle resource promotion and dereliction through idempotent elements: let I be the set of idempotents \{ e \in M \mid e^2 = e \} (restricted, in Girard's formulation, to those lying in the interpretation of \mathbf{1}); then !X = (X \cap I)^{\perp\perp} models reusable resources, while ?X = (!(X^\perp))^\perp captures their duals. Duality ensures the De Morgan identities (X \otimes Y)^\perp = X^\perp \parr Y^\perp and (!X)^\perp = ?(X^\perp). A sequent \vdash A_1, \dots, A_n is interpreted via the formula A_1 \parr \cdots \parr A_n; soundness and completeness theorems establish that the sequent is provable exactly when this interpretation contains the neutral element 1 of the monoid in every phase space. These theorems guarantee adequacy (every proof yields a valid interpretation) and completeness (every formula valid in all phase spaces is provable). Coherence spaces offer a complementary, denotational semantics, modeling linear logic via graph-theoretic structures that interpret proofs themselves rather than mere provability.
A coherence space is a pair (W, \sim), where W is a set of tokens (the web) and \sim \subseteq W \times W is a reflexive, symmetric coherence relation; cliques are subsets in which every pair of tokens is coherent, representing valid proof denotations. The interpretation of a formula is the set of its cliques: for the multiplicative tensor, cliques of X \otimes Y are products of a clique of X and a clique of Y over the product web; dually, the par X \parr Y is obtained by De Morgan duality as the negation of the tensor of the negations. Additives use disjoint unions of webs: a clique of X \& Y combines a clique of X with a clique of Y, while a clique of X \oplus Y comes from exactly one of the two components. Exponentials extend this: the web of !X consists of the finite cliques of X, modeling contraction and weakening via reusable finite supports. In coherence spaces, a proof of a sequent denotes a clique in the space interpreting that sequent. Adequacy ensures that every proof term yields a clique and that the interpretation is invariant under cut-elimination; duality is preserved through linear negation, where X^\perp keeps the same web and complements the coherence relation off the diagonal. Recent developments in the 2020s have extended these relational models to quantum linear logic, adapting coherence spaces to probabilistic and normed settings for handling superposition and entanglement. Probabilistic coherence spaces, refined in normed cones, interpret linear logic formulas as convex sets of weighted cliques, providing a relational semantics for quantum resources while preserving adequacy for multiplicative-additive fragments. These models support *-autonomous categories underlying quantum interpretations, bridging classical linear logic to quantum control protocols.
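The biorthogonal closure of phase semantics is easy to experiment with in a finite monoid. The sketch below uses (Z/6, ·) with pole ⊥ = {0} purely as an illustrative choice (any commutative monoid and pole would do) and checks two standard closure laws: X ⊆ X⊥⊥, and the collapse of the triple orthogonal, X⊥⊥⊥ = X⊥.

```python
# A toy phase space: the commutative monoid (Z/6, ·) with pole ⊥ = {0};
# the orthogonal of X is X⊥ = {m | ∀n∈X, m·n ∈ ⊥}.
n = 6
M = frozenset(range(n))
pole = frozenset({0})

def orth(X):
    """Compute X⊥ in the toy phase space."""
    return frozenset(m for m in M if all((m * x) % n in pole for x in X))

X = frozenset({2})
Xo = orth(X)            # X⊥
Xoo = orth(Xo)          # the fact generated by X
print(sorted(Xo))       # [0, 3]
print(sorted(Xoo))      # [0, 2, 4]
print(X <= Xoo)         # True: X ⊆ X⊥⊥ always holds
print(orth(Xoo) == Xo)  # True: X⊥⊥⊥ = X⊥, so (·)⊥⊥ is a closure operator
```

Here the fact generated by {2} is the even residues, illustrating how biorthogonal closure completes an arbitrary subset into a fact.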

Algebraic and Denotational Semantics

Algebraic semantics for linear logic primarily rely on category-theoretic structures that capture the resource-sensitive nature of its connectives. Monoidal categories provide the foundational framework for the multiplicative fragment, where the tensor connective A \otimes B is interpreted as the monoidal product, ensuring that resources are combined without duplication or discard unless explicitly allowed. Compact closed categories extend this by modeling the linear implication A \multimap B as the internal hom, with duals enabling the par connective A \parr B as the dual to tensor, reflecting the duality of classical linear logic (in degenerate form, since tensor and par are identified there). To incorporate the exponential connectives, which handle resource reuse and disposal, Seely categories augment symmetric monoidal closed categories with a comonad ! for the "of course" modality, satisfying Seely's conditions, which ensure that the exponential structure aligns with proof normalization. This construction models intuitionistic linear logic, where !A denotes reusable resources, and extends to classical variants via dual exponentials. Phase spaces serve as concrete precursors to these abstract categorical models, providing set-theoretic interpretations that inspired the shift to more general categorical semantics. *-Autonomous categories offer a unified algebraic semantics for the full multiplicative fragment, featuring a closed symmetric monoidal structure with a dualizing object that enforces contravariant duality between tensor and par. In these categories, every object has a dual, and the linear negation \neg A = A \multimap \bot is an involution, capturing the symmetric nature of classical linear proofs. Bierman's work in the 1990s established coherence conditions for such categories, ensuring that proofs correspond to unique morphisms up to isomorphism in the intuitionistic setting, thus bridging proof theory and categorical models. Denotational semantics for linear logic employ domain-theoretic models, interpreting formulas as domains and proofs as continuous functions between them.
Linear domains, such as Scott domains equipped with linear maps—functions that consume their argument exactly once, with reuse mediated only by the exponential—provide a computational interpretation where resources are tracked through strictness conditions. In this setup, the tensor A \otimes B denotes a product-like domain, while linear implication corresponds to the domain of linear functions, and proofs involving exponentials are interpreted as morphisms in the Kleisli category of the exponential comonad. For the multiplicative linear logic (MLL) fragment, full completeness holds in certain models, such as those based on linear information systems, where every provable sequent corresponds to a unique denotation, equating proofs that are equivalent under cut-elimination. Affine variants of linear logic, which permit weakening but not contraction, receive denotational models by relaxing strictness in the domain morphisms, allowing functions that may discard inputs without an explicit exponential. This is modeled in categories where the hom-sets include affine maps—linear functions with optional weakening—extending standard linear domains to capture non-strict usage. For instance, in affine linear logic, the implication A \multimap B denotes maps from A to B that need not use all of A, interpreted as partial linear continuous functions on Scott domains. Recent applications leverage enriched category theory to model probabilistic extensions of linear logic, where hom-objects are enriched over measurable spaces or probability monads to handle uncertainty in computation. In such frameworks, the multiplicative structure is preserved via enriched monoidal products, while exponentials incorporate probabilistic duplication, enabling semantics for quantum or stochastic processes, as in reversible models.
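One way to see how the Seely comonad mediates between the additive and multiplicative levels is through its characteristic natural isomorphisms, which turn additive (co)structure under ! into multiplicative structure:

```latex
% Seely isomorphisms relating additive and multiplicative conjunction
!(A \& B) \;\cong\; {!A} \otimes {!B},
\qquad
!\top \;\cong\; \mathbf{1}
```

These isomorphisms make !A a commutative comonoid, which is precisely what licenses contraction and weakening for !-formulas in the categorical model.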

Interpretations and Encodings

Resource-Sensitive Interpretation

Linear logic offers a resource-sensitive framework for reasoning, treating logical assumptions not as eternal truths but as finite, consumable resources that must be explicitly managed during inference. Introduced by Jean-Yves Girard, this perspective rejects the unrestricted structural rules of classical logic—specifically, weakening (discarding unused assumptions) and contraction (freely duplicating assumptions)—in its core multiplicative and additive connectives, enforcing that each resource is used exactly once to avoid waste or overuse. This "no free lunch" principle distinguishes linear logic from classical systems, where resources can be manipulated without cost, and instead emphasizes accountability in their transformation. At the heart of this interpretation, the multiplicative conjunction \otimes models the parallel combination of resources: to achieve A \otimes B, both A and B must be supplied together, as in combining materials in production. Similarly, linear implication A \multimap B captures the directed consumption of A to yield B, akin to a process where one resource is sacrificed to generate another, such as converting fuel into motion. Additive connectives provide alternatives without duplication: A \& B offers a choice between consuming A or B (internal choice among resources), while A \oplus B imposes an external choice in production, ensuring no branching that would multiply resources illicitly. The exponentials modulate this sensitivity—!A denotes reusable resources, like executable code, that can be duplicated or discarded (via controlled contraction and weakening), in contrast with ?A for resources that can be produced as needed. Philosophically, linear logic reframes deduction as a dynamics of resource transformation rather than mere truth preservation, influencing computational paradigms by prioritizing flow and scarcity over static validity.
This has profoundly shaped programming language design, where linear types enforce exactly-once usage of variables to prevent resource leaks and enable precise memory management, as in systems that track ownership to avoid unintended aliasing. In concurrency, linear logic interprets proofs as interacting processes, with encodings into calculi like the \pi-calculus mapping logical cuts to communication channels, thus providing a Curry-Howard-style correspondence for distributed computation. Economically, it models resource negotiations and game-theoretic allocations, representing multisets of goods and agent preferences as linear formulas to enforce equitable consumption without gratuitous duplication. A key application arises in Petri nets, where atomic propositions correspond to tokens in places and transitions to linear implications, so that linear proofs translate to valid firing sequences, capturing concurrent dynamics in systems like workflows.
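The Petri-net reading can be sketched in a few lines: a transition behaves like a linear implication, consuming its input tokens exactly once and producing its outputs. This is a minimal illustration with hypothetical place names, not a standard library API.

```python
from collections import Counter

def fire(marking, inputs, outputs):
    """Fire a transition: consume the input multiset, produce the output.

    Returns the new marking, or None if the transition is not enabled,
    mirroring how a linear proof step needs its hypotheses available.
    """
    need, out = Counter(inputs), Counter(outputs)
    if any(marking[p] < n for p, n in need.items()):
        return None  # not enabled: resources are insufficient
    return marking - need + out

# The transition a ⊗ a ⊸ b: two 'a' tokens become one 'b' token.
m = Counter({'a': 2})
m2 = fire(m, ['a', 'a'], ['b'])
print(dict(m2))                 # {'b': 1}
print(fire(m2, ['a'], ['c']))   # None: 'a' was consumed, no duplication
```

Each firing consumes tokens irrevocably, which is exactly the behavior that weakening and contraction would destroy.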

Encoding Classical and Intuitionistic Logic

Linear logic provides a faithful embedding of classical logic by leveraging its exponential connectives ! (of course) and ? (why not) to recover the unrestricted contraction and weakening rules absent in the linear setting. This encoding, originally developed by Girard, translates classical propositional connectives into linear ones prefixed with exponentials to simulate resource reuse. Specifically, classical conjunction A \land B maps to !A \otimes !B, disjunction A \lor B to ?A \parr ?B, and implication A \to B to !A \multimap B. Under this schema, a classical formula A is interpreted by prefixing its subformulas with exponentials, which allows classical proofs to be reconstructed linearly by permitting multiple uses of assumptions through the structural rules for !. Negation \neg A translates to ?A^\perp, the linear negation of A under the why-not modality, ensuring that the classical excluded middle A \lor \neg A corresponds to a provable linear formula. The following table summarizes the standard translation for classical propositional connectives:
| Classical Connective | Linear Translation |
| --- | --- |
| A \land B | !A \otimes !B |
| A \lor B | ?A \parr ?B |
| A \to B | !A \multimap B |
| \neg A | ?A^\perp |
| \top | !1 |
| \bot | ?\bot |
This embedding is faithful: a formula A is provable in classical logic if and only if its translation is provable in linear logic, with the translation preserving cut-elimination and yielding cut-free proofs. For first-order extensions, the universal quantifier \forall x . A translates to ! \forall x . !A, enabling the reuse of universally quantified assumptions, while the existential \exists x . A maps to ? \exists x . ?A. Intuitionistic logic embeds into linear logic in a polarized fashion, distinguishing between linear and reusable resources without requiring the full exponential machinery needed for classicality. The Girard translation uses a recursive mapping ^\circ, where assumptions are prefixed with !, and the connectives are translated as follows: intuitionistic conjunction A \land B goes to A^\circ \& B^\circ (the additive with, allowing shared reusable hypotheses), and disjunction A \lor B to !A^\circ \oplus !B^\circ (the additive plus with ! on the branches, capturing the constructive choice of a disjunct), reflecting the constructive nature of intuitionistic disjunction. The key distinction lies in implication: linear A \multimap B preserves exact resource usage, while intuitionistic A \to B translates to !A^\circ \multimap B^\circ, allowing contraction and weakening on the antecedent. This intuitionistic encoding preserves the Brouwer-Heyting-Kolmogorov (BHK) interpretation, where proofs correspond to constructive methods: a proof of A \to B yields a function transforming proofs of A (made reusable via !) into proofs of B, maintaining the resource-aware computation inherent to linear logic. The intuitionistic embedding is sound and complete with respect to provability: an intuitionistic formula A is provable if and only if its translation A^\circ is provable in the intuitionistic fragment of linear logic, and the translation admits a cut-free characterization.
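The table above is directly executable as a recursive function. The following is a minimal sketch using a hypothetical nested-tuple encoding of classical formulas; the representation and names are illustrative, not a standard API.

```python
# Girard-style translation of classical propositional formulas into
# linear-logic notation, following the translation table above.

def translate(f):
    """Map a classical formula (nested tuples) to a linear-logic string."""
    op = f[0]
    if op == 'atom':
        return f[1]
    if op == 'and':                       # A ∧ B  ↦  !A ⊗ !B
        return f"(!{translate(f[1])} ⊗ !{translate(f[2])})"
    if op == 'or':                        # A ∨ B  ↦  ?A ⅋ ?B
        return f"(?{translate(f[1])} ⅋ ?{translate(f[2])})"
    if op == 'implies':                   # A → B  ↦  !A ⊸ B
        return f"(!{translate(f[1])} ⊸ {translate(f[2])})"
    if op == 'not':                       # ¬A  ↦  ?A⊥
        return f"?{translate(f[1])}⊥"
    raise ValueError(f"unknown connective: {op}")

# Excluded middle A ∨ ¬A becomes ?A ⅋ ??A⊥ under the translation.
em = ('or', ('atom', 'A'), ('not', ('atom', 'A')))
print(translate(em))  # (?A ⅋ ??A⊥)
```

Note how the exponentials accumulate under nesting: each classical connective contributes its own ! or ? prefix, which is what licenses reuse of the translated assumptions.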

Advanced Topics

Remarkable Formulas and Examples

One notable example in linear logic is Peirce's law, expressed as ((A \multimap B) \multimap A) \multimap A, which is provable in classical logic but not in the multiplicative fragment of linear logic without the use of exponentials. To derive its classical analogue in full linear logic, the exponential modalities ! and ? are required to simulate the necessary resource duplication and discarding, highlighting the resource-sensitive nature of the logic. Linear distributivity provides another key illustration of linear logic's structure: the sequent A \otimes (B \parr C) \vdash (A \otimes B) \parr C is provable, while its converse is not. This asymmetry underscores the distinction between the multiplicative connectives tensor (\otimes) and par (\parr), where tensor distributes over par in one direction only, reflecting controlled resource parallelism without unrestricted sharing. A fundamental example involving exponentials is the dereliction rule, which establishes !A \multimap A, permitting a reusable resource to be used as a linear one in a controlled manner. Conversely, contraction is not derivable without exponentials: the sequent A \vdash A \otimes A violates linearity unless A is prefixed by !, preventing arbitrary resource duplication and emphasizing linear logic's substructural constraints. The exponentials further simulate S4 modal logic, where ! corresponds to necessity (\Box) and ? to possibility (\Diamond), with the promotion and dereliction rules mirroring the reflexive and transitive properties of S4 accessibility. This embedding allows linear proofs to model persistent necessities and storable truths, reading !A as \Box A under the translation. In quantum protocols, linear logic has been applied to model resource consumption, such as in quantum teleportation, where the cut rule represents entanglement-based state transfer as a linear interaction. Recent work extends this to intuitionistic multiplicative-additive linear logic (IMALL) with mixed-state modalities, encoding teleportation as a proof of (A \otimes B) \multimap C while preserving the quantum no-cloning property via linearity.
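The dereliction principle has a two-step cut-free derivation in the one-sided sequent calculus, a sketch using the standard identifications A \multimap B = A^{\perp} \parr B and (!A)^{\perp} = ?A^{\perp}:

```latex
\[
\dfrac{\dfrac{\;\vdash A^{\perp},\, A\;}{\vdash\, ?A^{\perp},\, A}\ \text{(dereliction)}}
      {\vdash\, ?A^{\perp} \parr A}\ (\parr)
\qquad\text{where}\quad ?A^{\perp} \parr A \;=\; {!A} \multimap A .
\]
```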
Ludics, Girard's interactive semantics for linear logic introduced in the early 2000s, provides game-like examples of proofs as strategies in arenas, built around focalization. For instance, the identity A \multimap A becomes a basic copycat strategy in an arena, while the exponentials model infinite plays, illustrating non-deterministic resource interaction without classical completeness assumptions.

Decidability and Computational Complexity

The provability problem for full propositional linear logic, which includes the exponential modalities, is undecidable. This undecidability arises because the exponentials enable the encoding of classical propositional logic and, more powerfully, the behavior of counter machines, allowing the reduction of the halting problem to linear logic provability. In contrast, the multiplicative-additive fragment of linear logic (MALL), which excludes exponentials but includes both multiplicative and additive connectives, is decidable, with the provability problem being PSPACE-complete. This complexity result stems from reductions to alternating Turing machines and space-bounded computations, establishing both hardness and membership in PSPACE via tableau-like methods. Elementary linear logic (ELL), a restricted variant introduced by Girard, bounds computational power to the elementary functions through restrictions on the exponential rules that limit the depth of exponential nesting. A related system, light linear logic (LLL), further refines these bounds to characterize exactly the polynomial-time computable functions, with proofs normalizing in polynomial time due to controlled resource usage in the exponentials. For bounded fragments of linear logic, such as those with fixed proof depth or restricted occurrences of connectives, decidability is achieved through translations to automata-based models, like pushdown or alternating automata, which enable effective model-checking procedures. The complexity of cut-elimination in linear logic varies significantly between proof representations: in the sequent calculus it can require exponential time and space due to repeated duplications from contractions, whereas in proof nets the process is markedly more efficient, often linear-time for the multiplicative fragment, because parallel reductions avoid redundant recomputation.
Decision procedures for decidable fragments like MALL, and correctness criteria for proof nets, often employ graphs that model the flow of resources and polarities, enabling checks through algorithms that detect cycles or connectivity violations in time linear in the size of the proof structure.
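To make decidability of the multiplicative fragment concrete, here is a brute-force prover for one-sided MLL sequents. It is an illustrative sketch with a hypothetical tuple encoding of formulas; real provers use proof nets or focused search rather than naive context splitting, which is exponential.

```python
# Naive cut-free proof search for one-sided MLL sequents ⊢ Γ.
# Formulas: ('+', a) / ('-', a) for literals, ('tensor', f, g), ('par', f, g).

def dual(f):
    """Linear negation, with De Morgan duality between tensor and par."""
    if f[0] == '+': return ('-', f[1])
    if f[0] == '-': return ('+', f[1])
    if f[0] == 'tensor': return ('par', dual(f[1]), dual(f[2]))
    return ('tensor', dual(f[1]), dual(f[2]))

def provable(seq):
    """Exhaustive proof search over a tuple of formulas; always terminates."""
    # Par is invertible, so decompose it eagerly.
    for i, f in enumerate(seq):
        if f[0] == 'par':
            return provable(seq[:i] + seq[i + 1:] + (f[1], f[2]))
    # Axiom: exactly one pair of dual literals.
    if len(seq) == 2 and seq[0][0] in '+-' and seq[0] == dual(seq[1]):
        return True
    # Tensor: try every tensor formula and every split of the rest.
    for i, f in enumerate(seq):
        if f[0] == 'tensor':
            rest = seq[:i] + seq[i + 1:]
            for mask in range(2 ** len(rest)):
                left = tuple(g for j, g in enumerate(rest) if mask >> j & 1)
                right = tuple(g for j, g in enumerate(rest) if not mask >> j & 1)
                if provable(left + (f[1],)) and provable(right + (f[2],)):
                    return True
    return False

a = ('+', 'a')
identity = ('par', dual(a), a)              # a ⊸ a, i.e. a⊥ ⅋ a
print(provable((identity,)))                # True
print(provable((('tensor', a, dual(a)),)))  # False: the context cannot split
```

The split over all subsets of the context is what drives the complexity; the PSPACE procedures mentioned above avoid materializing these splits explicitly.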

Variants and Extensions

Substructural Variants

Substructural variants of linear logic modify the strict discipline of the core system, which requires resources to be used exactly once, with neither weakening nor contraction. These variants relax aspects of the structural rules to allow varying degrees of resource reusability or discardability while preserving the foundational multiplicative and additive connectives. Such adaptations enable finer control over resource management in applications demanding partial or bounded reuse. Relevant logic, a prominent substructural variant, prohibits weakening but permits contraction, ensuring that hypotheses are used at least once without allowing discard. This contrasts with core linear logic by encoding resource reuse through contraction rather than full exponentials, often using a fusion connective (denoted ◦ or ;) for multiplicative conjunction that enforces relevance by avoiding irrelevant premises. The fusion rule allows importation and exportation relative to implication, distinguishing it from the tensor ⊗ in linear logic, where resources must be exactly consumed. Seminal developments trace to J. Michael Dunn's work in the 1970s, building on earlier efforts by Alan Ross Anderson and Nuel D. Belnap to formalize relevance in entailment, with integration into linear frameworks occurring in the 1990s through relational semantics that extend the implication-fusion fragment. Affine logic, in contrast, allows weakening but forbids contraction, permitting resources to be discarded but not duplicated. This makes it suitable for scenarios where resources can be ignored but not multiplied, differing from core linear logic's ban on both rules; for instance, the implication A \multimap B in affine logic needs no ! modality for discarding, relying instead on weakening to drop unused assumptions in sequents. Syntactically, affine systems extend multiplicative linear logic (without exponentials) by adding discard rules, and they can be mapped to linear logic via translations that treat affine functions as linear functions with optional consumption.
Bounded linear logic introduces resource bounds on the exponentials to control duplication, restricting the ! operator to a finite number of uses, with an indexed modality such as !_k A denoting A usable up to k times. This variant modifies core linear logic's unbounded exponentials by indexing the modalities with natural numbers, ensuring polynomial-time normalization for proofs within fixed bounds; for example, contraction and weakening are limited to the bounded instances, preventing unbounded resource duplication. Developed by Jean-Yves Girard and collaborators in the early 1990s, it provides a modular characterization of complexity classes like PTime through resource-indexed proofs. Ordered linear logic imposes an order on proof contexts to track dependencies among resources, refining intuitionistic linear logic with ordered hypotheses that must be accessed in sequence. Unlike core linear logic's unordered multisets, contexts here are sequences or posets, where rules respect the order (for example, unrelated assumptions may not be commuted), enabling the modeling of ordered data structures or constrained access. This variant emerged in the late 1990s, with key formalizations supporting applications in logical frameworks by treating ordered contexts as logical queues. Fusion logics represent post-2010 hybrids blending relevant and linear features, incorporating connectives from relevant logic into linear frameworks to balance at-least-once usage with exact consumption in substructural settings. These systems extend the basic implication-fusion fragment of relevant logics with linear additives and multiplicatives, allowing proof-theoretic characterizations while maintaining cut-elimination properties. Recent work has explored their relational semantics to unify substructural behaviors.
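The three usage disciplines can be contrasted by counting how often each bound variable occurs in a lambda term: linear means exactly once, affine at most once, relevant at least once. This sketch uses a hypothetical nested-tuple term encoding and assumes binders have distinct names.

```python
from collections import Counter

# Terms: ('var', x), ('lam', x, body), ('app', f, a).

def count_uses(term, counts):
    """Tally free-variable occurrences in a term (names assumed distinct)."""
    if term[0] == 'var':
        counts[term[1]] += 1
    elif term[0] == 'lam':
        count_uses(term[2], counts)
    else:  # 'app'
        count_uses(term[1], counts)
        count_uses(term[2], counts)

def binder_use_counts(term):
    """Yield (variable, use count) for every lambda binder in the term."""
    if term[0] == 'lam':
        counts = Counter()
        count_uses(term[2], counts)
        yield term[1], counts[term[1]]
        yield from binder_use_counts(term[2])
    elif term[0] == 'app':
        yield from binder_use_counts(term[1])
        yield from binder_use_counts(term[2])

def classify(term):
    uses = [n for _, n in binder_use_counts(term)]
    if all(n == 1 for n in uses): return 'linear'
    if all(n <= 1 for n in uses): return 'affine'
    if all(n >= 1 for n in uses): return 'relevant'
    return 'unrestricted'

print(classify(('lam', 'x', ('var', 'x'))))                         # linear
print(classify(('lam', 'x', ('lam', 'y', ('var', 'x')))))           # affine
print(classify(('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))))  # relevant
```

The second term discards y (weakening, so affine but not linear), and the third duplicates x (contraction, so relevant but not linear), matching the structural rules each variant admits.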

Applications in Computer Science

Linear logic has significantly influenced type systems in programming languages by providing a foundation for resource-sensitive typing, where values must be used exactly once to prevent unintended side effects and resource leaks. In functional languages like Clean, uniqueness types enforce that unique references to data are not aliased, ensuring destructive updates without copying and enabling efficient compilation; this approach draws directly from linear logic's restriction on resource duplication and deletion. Similarly, Mercury employs uniqueness typing to achieve the same goals, supporting strict evaluation and compiler optimizations while maintaining declarative purity. Rust's ownership model, introduced in the 2010s, extends these ideas with borrowing rules to manage memory at compile time, preventing data races and dangling dereferences; its design is influenced by linear logic, treating ownership transfers as single uses of resources. In concurrency, linear logic enables precise modeling of communication protocols through encodings of process calculi. Kohei Honda's 1993 work demonstrated how the π-calculus can be disciplined by linear typing, tracking channel usage to ensure deadlock-free interactions by consuming channels after a single use. This foundation evolved into session types, where linear logic propositions correspond to communication protocols via the Curry-Howard isomorphism; Simon Gay and colleagues in the 2000s developed asynchronous session types grounded in linear type theory, allowing protocol refinement while guaranteeing session fidelity and progress in distributed systems. For verification, linear logic supports formal analysis of security protocols by modeling resources like nonces and keys as linear hypotheses that cannot be duplicated. Bozzano's 2002 framework automates verification in linear logic using proof search, checking secrecy and authentication properties for protocols like Needham-Schroeder by searching for cut-free proofs.
Extensions to quantum protocols leverage linear logic's resource discipline for no-cloning constraints; Abramsky's categorical approaches interpret protocols like teleportation in monoidal categories akin to linear logic models, ensuring reversible resource flows in quantum computation. Game semantics, pioneered by Martin Hyland and Luke Ong in 1994, interprets linear logic proofs as winning strategies in games, providing full completeness for multiplicative linear logic and bridging logic with computational processes. In AI planning, linear logic models resource consumption in domain-independent planners. Tools implementing linear logic include Forum, a multiple-conclusion logic programming system extending λProlog to full linear logic, enabling declarative specification and automated proof search for concurrent systems. Ludics, introduced by Jean-Yves Girard in 2001, offers an interactive framework for the implicit computational content of proofs, representing linear logic designs as strategies in arenas for analyzing focalization without explicit syntax. Recent applications extend to blockchain, where the Move language, used on platforms such as Aptos (2024 updates), uses linear types to track digital assets in smart contracts, preventing duplication or accidental loss of assets by enforcing single-use resources. In machine learning, linear types manage tensor flows for memory efficiency; a 2025 analysis frames automatic differentiation as linear logic proof reduction, ensuring no intermediate tensor duplication during gradient computation. Via the Curry-Howard correspondence, linear logic proofs correspond to type-safe programs, as in session-typed calculi where protocol adherence is proven by type derivation, guaranteeing deadlock-freedom and resource safety without runtime checks.
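The asset-tracking idea behind these type systems can be illustrated dynamically. A linear type system would reject the second use below at compile time; this Python sketch (illustrative, not a real library) raises at run time instead.

```python
class LinearResource:
    """Wrapper that enforces exactly-once consumption of a value.

    Mimics what a linear type checker guarantees statically: a wrapped
    asset can be consumed once, after which any further use is an error.
    """

    def __init__(self, value):
        self._value, self._used = value, False

    def consume(self):
        if self._used:
            raise RuntimeError("linear resource consumed twice")
        self._used = True
        value, self._value = self._value, None  # drop the reference
        return value

asset = LinearResource("token-42")
print(asset.consume())        # token-42
try:
    asset.consume()           # a second use violates linearity
except RuntimeError as e:
    print(e)                  # linear resource consumed twice
```

Languages like Move and Rust move this check into the type system, so the violating program never compiles and no runtime guard is needed.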