Semantics of logic
The semantics of logic is the branch of mathematical logic that studies the meaning and truth conditions of expressions in formal logical languages, primarily through models or structures that interpret the symbols and evaluate the satisfaction of formulas.[1] It contrasts with syntax, which concerns the formal rules for constructing well-formed formulas, by focusing on how formulas relate to reality or to abstract domains via interpretations that assign denotations to non-logical symbols such as predicates and constants.[1] Originating with Alfred Tarski's 1933 definition of truth, semantics provides a foundational framework for understanding logical consequence, validity, and the soundness and completeness of deductive systems.[1]

In classical propositional logic, semantics assigns truth values (true or false) to atomic propositions and extends this assignment recursively to compound formulas using truth tables for connectives such as negation (¬), conjunction (∧), disjunction (∨), and implication (→).[2] A formula is valid if it is true in every possible interpretation, and logical consequence preserves truth: if the premises Γ are true in a model, then the conclusion θ is true in that model as well.[2] This model-theoretic approach, formalized in the mid-20th century by figures such as Tarski, Abraham Robinson, and Anatoly Mal'tsev, defines a model as a structure consisting of a non-empty domain together with an interpretation function that makes specific axioms true, such as those defining an abelian group.[1] For predicate (first-order) logic, semantics extends to the quantifiers (∀ and ∃) by interpreting them over the domain of the structure, with satisfaction defined recursively for open formulas under variable assignments.[2] Key results include the soundness theorem (every syntactically provable formula is semantically valid) and the completeness theorem (every semantically valid formula is syntactically provable), establishing the equivalence of syntactic deduction and semantic entailment.[2]

Model theory thus not only validates logical systems but also enables applications in algebra, geometry, and philosophy, such as the analysis of nonstandard models or the semantics of natural language.[1] Beyond classical logic, semantic frameworks adapt to nonclassical systems such as modal or intuitionistic logic, using possible worlds or alternative satisfaction relations to capture notions of necessity, possibility, or constructive proof.[2]

Foundations of Logical Semantics
Definition and Scope
Semantics in logic refers to the branch of logical study concerned with the meaning of formal languages, achieved by assigning truth values or interpretations to logical formulas within specified structures, often called models. This approach, known as model-theoretic semantics, provides a precise way to determine when a formula is true relative to a given domain and interpretation of its symbols. Alfred Tarski's foundational work established this framework by defining truth in formalized languages through satisfaction in models, ensuring that semantic notions align with intuitive understandings of truth and entailment.[1]

In contrast to syntax, which focuses solely on the formal structure of expressions—such as well-formed formulas and rules of inference, without reference to meaning—semantics emphasizes model-based evaluations that confer interpretive content on syntactic objects. Syntax operates independently of worldly connections, dealing only in symbols and derivations, whereas semantics bridges the gap by relating formulas to possible worlds or structures, thereby enabling assessments of truth and validity. This distinction is fundamental in logic, as it separates the mechanical manipulation of expressions from their substantive interpretation.[2]

The scope of logical semantics extends across classical logics, including propositional and predicate logic, as well as non-classical variants such as intuitionistic, modal, and fuzzy logics, each employing tailored interpretive mechanisms to capture different aspects of reasoning. In propositional logic, for instance, the semantics can be illustrated with truth tables, which enumerate all possible assignments of truth values to the atomic propositions in order to evaluate compound formulas.
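The truth-table method just described can be sketched in a few lines of Python. This is an illustrative sketch only; the function name and formula encoding are our own, not part of any standard library.

```python
from itertools import product

def truth_table(variables, formula):
    """Enumerate every truth-value assignment to the given atomic
    propositions and evaluate the formula under each one."""
    rows = []
    for values in product([True, False], repeat=len(variables)):
        valuation = dict(zip(variables, values))
        rows.append((valuation, formula(valuation)))
    return rows

# The compound formula (p ∧ q) → p, written with Python's Boolean
# operators (material implication a → b is equivalent to ¬a ∨ b).
for valuation, result in truth_table(
        ["p", "q"],
        lambda v: (not (v["p"] and v["q"])) or v["p"]):
    print(valuation, "->", result)
```

Since (p ∧ q) → p is a tautology, every row of the enumeration comes out true.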
Overall, semantics underpins the analysis of diverse logical systems by providing tools to explore how meanings vary under different interpretive constraints.[2][3]

Central to semantics is its role in defining core concepts: logical consequence, where a formula semantically follows from a set of premises if it holds true in every model satisfying those premises; validity, requiring truth in all possible models; and satisfiability, requiring the existence of at least one model in which the formula is true. These notions allow for rigorous evaluation of arguments and theories, distinguishing sound inferences from mere syntactic manipulations, and they find applications in mathematics, philosophy, and computer science.[2]

Historical Development
The semantics of logic originated in ancient philosophy with Aristotle's syllogistic logic in the 4th century BCE, as detailed in his Prior Analytics and On Interpretation. Aristotle treated categorical propositions—such as "All humans are mortal"—as assertions capable of being true or false, implicitly assuming a semantic framework in which truth arises from the correspondence between predicates and subjects in the world, enabling valid deductions through figures of syllogism such as Barbara (All A are B; All B are C; therefore All A are C).[4] This approach presupposed bivalence, the principle that every assertion is either true or false (with the possible exception of future contingents), laying the groundwork for evaluating logical validity on semantic grounds rather than mere syntactic form.[5]

In the 19th century, George Boole formalized an algebraic semantics for propositional logic in his 1847 treatise The Mathematical Analysis of Logic.[6] Boole interpreted logical connectives (such as conjunction and disjunction) as operations on sets, reducing validity to set-theoretic inclusion—for instance, validating an argument by checking whether the class of instances satisfying the premises is contained within the class satisfying the conclusion. This innovation shifted focus from qualitative syllogisms to quantitative, model-based evaluation, influencing later developments in Boolean algebra.
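Boole's reduction of validity to set inclusion can be sketched concretely. The following is an illustrative sketch with an invented toy domain; the names and helper function are our own, not Boole's notation.

```python
# A toy interpretation: each predicate is modeled as the set of
# individuals satisfying it (its "class" in Boole's sense).
greeks = {"socrates", "plato"}
humans = {"socrates", "plato", "hypatia"}
mortals = {"socrates", "plato", "hypatia", "fido"}

def all_are(a, b):
    """'All A are B' holds exactly when class A is included in class B."""
    return a <= b  # set inclusion

# Barbara: All A are B; All B are C; therefore All A are C.
# Inclusion is transitive, so the conclusion follows from the premises.
assert all_are(greeks, humans) and all_are(humans, mortals)
print(all_are(greeks, mortals))  # True
```

The transitivity of set inclusion is exactly what makes the syllogism Barbara come out valid under this algebraic reading.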
Gottlob Frege advanced logical semantics in his 1879 Begriffsschrift by inventing the first-order predicate calculus, which explicitly connected syntax to semantics through functions mapping arguments to truth values (the True or the False).[7] Frege linked semantics to set theory in Grundgesetze der Arithmetik (1893–1903), defining concepts by the extensions of predicates—the sets of objects satisfying them—and numbers as equivalence classes of such extensions, though this system was shown inconsistent by Russell's paradox.[8]

Bertrand Russell contributed significantly by introducing model-theoretic notions in The Principles of Mathematics (1903), where he analyzed denotation and truth in terms of set-theoretic structures, and he later developed the ramified theory of types in Principia Mathematica (1910–1913, with Alfred North Whitehead) to resolve the paradoxes while formalizing logical semantics within a hierarchical set framework.[9]

Alfred Tarski provided the cornerstone of modern logical semantics in his 1933 Polish paper "Pojęcie prawdy w językach nauk dedukcyjnych" (translated as "The Concept of Truth in Formalized Languages" in 1956), defining truth for formalized languages via satisfaction in models. Tarski's theory employed a metalanguage to specify recursively when sequences of objects satisfy open formulas, yielding T-schema instances such as "'Snow is white' is true if and only if snow is white"; the distinction between object language and metalanguage ensures adequacy, entailing all such instances while avoiding the semantic paradoxes.[10]

Following Tarski's work, model theory emerged as a distinct field in the 1950s, with Abraham Robinson playing a pivotal role in its development and application.
In his 1963 book Introduction to Model Theory and to the Metamathematics of Algebra, Robinson formalized models as structures interpreting logical languages, extending Tarskian semantics to algebraic systems and proving results such as the completeness of certain theories.[11] Robinson's 1950 address to the International Congress of Mathematicians highlighted model theory's potential for geometry and analysis, developing aspects of the framework that Alfred Tarski also advanced, and thus solidified semantics as a tool for metamathematical investigation.[12]

Semantics of Propositional Logic
Truth Valuations
In propositional logic, a truth valuation is defined as a function that assigns one of two truth values—true (often denoted T) or false (F)—to each atomic proposition in a given language.[13][14] This assignment provides the foundational semantic interpretation, determining the truth status of basic statements without reference to external models or interpretations.[15]

The valuation extends recursively to compound formulas through the truth-functional connectives, where the truth value of a complex expression depends solely on the truth values of its components.[13] For negation (¬p), the value is the opposite of p's value: true if p is false, and false if p is true.[16] For conjunction (p ∧ q), the result is true if and only if both p and q are true; otherwise, it is false.[13] Similar rules apply to the other connectives, such as disjunction (p ∨ q), which is true unless both disjuncts are false, ensuring that every compound formula receives a determinate truth value under this recursive application.[14]

Consider the formula p ∨ ¬q under a valuation that assigns true to p and false to q. Here ¬q evaluates to true, and thus p ∨ ¬q is true, since a disjunction holds when at least one operand is true.[15][16]

A formula is said to be satisfied (or true) under a particular valuation if the extended assignment yields the value true for that formula; conversely, it is falsified if the value is false.[13] This notion of satisfaction under valuations underpins further semantic concepts, such as entailment, where one formula entails another if every valuation satisfying the former also satisfies the latter.[14]

Semantic Entailment and Validity
In propositional logic, semantic entailment defines the inferential relationship between formulas in terms of truth valuations. Specifically, a formula φ semantically entails another formula ψ, written φ ⊨ ψ, if every truth valuation that makes φ true also makes ψ true.[13] This relation ensures that the truth of ψ is preserved whenever φ holds, capturing the core semantic notion of logical consequence without reference to syntactic rules.[13]

Validity in propositional logic extends this idea to individual formulas. A formula φ is valid, or a tautology, denoted ⊨ φ, if it evaluates to true under every possible truth valuation of its atomic propositions.[13] For instance, the law of excluded middle, p ∨ ¬p, is valid because it holds regardless of whether p is true or false.[13] Tautologies represent universally true statements of the logic, forming the semantic foundation for reliable inference.

Truth tables provide a systematic method for verifying semantic entailment and validity by exhaustively enumerating all 2^n possible truth assignments to the n atomic propositions.[13] Each row of the table represents one valuation, and the final column indicates whether the formula (or the entailment between a pair of formulas) is true in that valuation. For example, to demonstrate the equivalence (p → q) ↔ (¬p ∨ q)—which holds if each side entails the other—the truth table below shows that the biconditional is true under all four valuations:

| p | q | p → q | ¬p | ¬p ∨ q | (p → q) ↔ (¬p ∨ q) |
|---|---|---|---|---|---|
| T | T | T | F | T | T |
| T | F | F | F | F | T |
| F | T | T | T | T | T |
| F | F | T | T | T | T |
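The recursive valuation of compound formulas and the truth-table check for entailment described above can be sketched in Python. This is an illustrative sketch; the tuple-based formula encoding and function names are our own.

```python
from itertools import product

# Formulas are nested tuples: ("atom", name), ("not", f), ("and", f, g),
# ("or", f, g), ("implies", f, g). This encoding is chosen for illustration.
def evaluate(formula, valuation):
    """Recursively compute a formula's truth value under a valuation."""
    op = formula[0]
    if op == "atom":
        return valuation[formula[1]]
    if op == "not":
        return not evaluate(formula[1], valuation)
    if op == "and":
        return evaluate(formula[1], valuation) and evaluate(formula[2], valuation)
    if op == "or":
        return evaluate(formula[1], valuation) or evaluate(formula[2], valuation)
    if op == "implies":  # p → q is equivalent to ¬p ∨ q
        return (not evaluate(formula[1], valuation)) or evaluate(formula[2], valuation)
    raise ValueError(f"unknown connective: {op}")

def entails(premise, conclusion, atoms):
    """premise ⊨ conclusion: every valuation making the premise true
    also makes the conclusion true."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if evaluate(premise, v) and not evaluate(conclusion, v):
            return False
    return True

p, q = ("atom", "p"), ("atom", "q")
impl = ("implies", p, q)       # p → q
disj = ("or", ("not", p), q)   # ¬p ∨ q
# The two sides of the table's equivalence entail each other:
print(entails(impl, disj, ["p", "q"]), entails(disj, impl, ["p", "q"]))
```

Mutual entailment checked this way is exactly the row-by-row agreement shown in the truth table: in each of the four valuations, p → q and ¬p ∨ q receive the same truth value.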