Logical truth
Logical truth is a fundamental concept in formal logic referring to a sentence or proposition that is true under every possible interpretation or model, meaning it cannot be false regardless of the assignment of meanings to its non-logical components.[1] Such truths hold necessarily in virtue of their logical structure alone, independently of empirical facts about the world.[2] In the model-theoretic framework pioneered by Alfred Tarski, a sentence is logically true if it is true in every structure, which amounts to treating it as a logical consequence of the empty set of premises.[3]

The development of logical truth as a precise concept emerged in the early 20th century amid efforts to formalize semantics and avoid paradoxes in truth theories.[4] Tarski's seminal 1936 work on logical consequence provided the model-theoretic foundation, defining consequence as preservation of truth across all models in which the premises hold, and thereby distinguishing logical truth from contingent or factual truth.[3] This approach contrasts with proof-theoretic views, on which logical truths are those provable from axioms using inference rules, and it highlights ongoing debates about whether logical truth is best captured semantically or syntactically.[5]

Key characteristics of logical truths include their necessity (they are true in all possible worlds or configurations) and the special case of tautologies, which arise solely from truth-functional connectives such as negation and conjunction in propositional logic.[6] For example, the sentence "Either it is raining or it is not raining" is a logical truth because it holds in every interpretation, as can be verified by a truth table that exhausts all combinations of truth values.[6] In predicate logic, logical truths extend to quantified statements, such as "All objects are identical to themselves", which is true in every domain in virtue of the universal quantifier and the identity predicate.[1]

Philosophically, logical truths raise questions about their metaphysical status, with some interpretations linking them to genuine possibilities of the world rather than mere linguistic conventions.[5] They underpin deductive reasoning and are central to distinguishing logical from non-logical vocabulary in semantic theories.[3]

Fundamentals
Definition
A logical truth is a proposition that holds true in every possible world or interpretation, relying solely on its logical structure rather than any empirical or contingent content.[7] This independence from specific facts about the world distinguishes logical truths from other kinds of statements, ensuring that their validity stems purely from the form of reasoning involved.[8] Formally, in model theory, a sentence φ is a logical truth if it is true in every model, where a model is a structure consisting of a non-empty domain and an interpretation of the language's non-logical symbols.[9] This model-theoretic account captures the idea that logical truths are preserved across all possible assignments of meanings to non-logical terms, making them necessarily valid.

The concept of logical truth traces its origins to Aristotle's syllogistic logic, where certain argument forms were recognized as invariably yielding true conclusions from true premises, independent of the particular terms used.[8] The idea was formalized in modern predicate logic by Gottlob Frege and Bertrand Russell, who developed systems for expressing general logical relations and truths through quantifiers and predicates.[10] For instance, the statement "All bachelors are unmarried" may appear logical but depends on the semantic connection between its terms; genuine logical truths, by contrast, depend only on form, as in the proposition \forall x (P(x) \to P(x)), which is valid regardless of what P denotes. Tautologies in propositional logic serve as simpler instances of such form-based truths.[7]
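The model-theoretic definition can be made concrete, at least over finite domains, with a brute-force check. The sketch below is an illustration only (the helper names interpretations and holds_in_every_interpretation, and the choice of small domains, are assumptions made for this example, not part of the cited formalism): it enumerates every extension of a unary predicate P over a few finite domains and confirms that \forall x (P(x) \to P(x)) comes out true in all of them, whereas \forall x P(x) does not. Exhaustive checking of this kind is limited to finite domains; it illustrates the definition rather than proving validity over all models.

```python
from itertools import product

def interpretations(domain):
    """Yield every possible extension of a unary predicate P over the
    domain, i.e. every subset of the domain."""
    for bits in product([False, True], repeat=len(domain)):
        yield {x for x, keep in zip(domain, bits) if keep}

def holds_in_every_interpretation(formula, domains):
    """True if formula(domain, extension_of_P) is satisfied for every
    listed domain and every extension of P over that domain."""
    return all(formula(dom, ext)
               for dom in domains
               for ext in interpretations(dom))

domains = [{1}, {1, 2}, {1, 2, 3}]  # a few small non-empty domains

# ∀x (P(x) → P(x)) is true no matter what P denotes: a logical truth.
print(holds_in_every_interpretation(
    lambda dom, P: all((x not in P) or (x in P) for x in dom), domains))  # True

# ∀x P(x) fails when P has an empty extension: not a logical truth.
print(holds_in_every_interpretation(
    lambda dom, P: all(x in P for x in dom), domains))                    # False
```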
Distinction from Other Truths
Logical truths differ fundamentally from contingent truths, which are propositions that obtain in some possible worlds but not in others, relying on specific empirical circumstances rather than holding universally. For instance, the statement "It is raining now" exemplifies a contingent truth: its veracity depends on observable facts at a particular time and place, and it could be false in alternative scenarios without contradiction.[11] In contrast, logical truths possess necessity, remaining true irrespective of empirical contingencies or variations in the world.

This distinction extends to factual truths, which are typically contingent and verified a posteriori, requiring sensory evidence to confirm their status. Logical truths bypass such empirical validation, deriving their certainty from the structure of reasoning itself rather than from observation of particular events or states of affairs.[12]

Immanuel Kant further delineates logical truths by aligning them closely with analytic judgments, in which the predicate is contained within the subject concept, yielding truths that are merely explicative and add no new content. Synthetic truths, by comparison, are ampliative: they introduce predicates not inherent in the subject and thus provide genuinely new knowledge, often grounded in empirical relations on Kant's framework.[13] While logical truths overlap with the analytic category, they emphasize formal necessity over mere definitional inclusion.

The necessity of logical truths manifests a priori: they are known independently of experience and are immune to revision by sensory data, in opposition to factual truths established a posteriori through empirical testing. This a priori character underscores their non-empirical foundation, ensuring invariance across all conceivable contexts.[12] Willard Van Orman Quine partially challenges this framework by treating logical truths as a subset of analytic truths while questioning the sharpness of the analytic-synthetic divide, suggesting that no clear boundary separates meaning-based truths from those informed by experience.[14]

Classical Logic
Tautologies and Truth Values
In classical propositional logic, a tautology is defined as a formula that evaluates to true for every possible assignment of truth values to its atomic propositions.[15] This property ensures that the formula's truth depends solely on its logical structure, independent of the specific content of the propositions involved.[16] Tautologies serve as the core exemplars of logical truths within this framework, capturing statements that are necessarily true due to the meanings of logical connectives.[17]

Classical propositional logic employs binary truth values: true (T) and false (F).[15] These values are assigned to atomic propositions, and compound formulas are evaluated recursively using truth-functional connectives such as negation (¬), conjunction (∧), disjunction (∨), and implication (→).[16] To determine if a formula is a tautology, one constructs a truth table that exhaustively lists all possible combinations of truth values for the atomic propositions and computes the resulting value for the entire formula.[15] A classic example is the law of excluded middle, expressed as p \lor \neg p. The truth table for this formula is as follows:

| p | \neg p | p \lor \neg p |
|---|---|---|
| T | F | T |
| F | T | T |
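The same exhaustive evaluation can be carried out programmatically. The following sketch is a minimal illustration (the function name is_tautology and the encoding of formulas as Python lambdas are choices made for this example): it enumerates every assignment of truth values to the atomic propositions and reports whether the formula is true in all of them.

```python
from itertools import product

def is_tautology(formula, atoms):
    """Return True if `formula` evaluates to True under every assignment
    of truth values to the atomic propositions listed in `atoms`."""
    for values in product([True, False], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if not formula(assignment):
            return False  # a falsifying row of the truth table was found
    return True

# Material implication a → b encoded truth-functionally as (not a) or b.
def implies(a, b):
    return (not a) or b

# Law of excluded middle: p ∨ ¬p is a tautology.
print(is_tautology(lambda v: v["p"] or not v["p"], ["p"]))                 # True

# (p → q) → (¬q → ¬p) is a tautology.
print(is_tautology(lambda v: implies(implies(v["p"], v["q"]),
                                     implies(not v["q"], not v["p"])),
                   ["p", "q"]))                                            # True

# p ∧ q is satisfiable but not a tautology.
print(is_tautology(lambda v: v["p"] and v["q"], ["p", "q"]))               # False
```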
Logical Constants
In formal logic, logical constants are the fixed symbols that determine the structural features responsible for the necessity of logical truths, distinguishing them from the non-logical vocabulary whose interpretation can vary without affecting logical form. In first-order logic, the standard logical constants are the truth-functional connectives negation (¬), conjunction (∧), disjunction (∨), material implication (→), and the biconditional (↔), together with the universal quantifier (∀) and the existential quantifier (∃). These symbols operate uniformly across all models, preserving truth values according to their semantic definitions: for instance, ¬φ is true if and only if φ is false, while ∀x φ(x) holds if φ(x) is true for every element of the domain under a given interpretation.[19]

The role of logical constants in guaranteeing logical truth stems from their invariance under reinterpretations of non-logical elements, such as predicates (e.g., P denoting "is even") or individual constants (e.g., c denoting a specific number). Alfred Tarski formalized this through the invariance criterion, arguing that a constant qualifies as logical if its extension remains unchanged under any permutation of the domain's elements, so that sentences built from these constants are true in all structures regardless of how extra-logical terms are assigned meanings. This criterion underpins the semantic definition of logical consequence, on which a formula follows from premises solely in virtue of the arrangement of logical constants, not the content of the predicates. For example, the formula ∀x (P(x) → P(x)) is a logical truth: its validity arises from the quantifier's scope and the semantics of implication, and it holds irrespective of whether P is interpreted as "is mortal" or any other unary predicate.[20]

Debates persist over precisely which symbols merit inclusion as logical constants, particularly the identity predicate (=), which asserts that two terms denote the same object. Proponents argue for its logical status on grounds of permutation invariance, since the relation {⟨a, b⟩ | a = b} is preserved under any rearrangement of the domain, in line with Tarski's criterion; identity also enables essential inferences, such as expressing that a domain has at least two elements via ∃x∃y (x ≠ y). Critics contend that identity introduces substantive metaphysical commitments about objects and fails harmony tests in inferentialist frameworks, where its introduction and elimination rules do not balance without invoking non-logical coordination of variables; some systems therefore treat = as a non-logical predicate in order to preserve logic's topic-neutrality. These discussions highlight the tension between semantic invariance and the boundaries of pure logical structure, influencing extensions beyond classical first-order logic.[21][22]
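Tarski's invariance criterion can be illustrated on a small finite domain. The sketch below is a toy example (the helper is_permutation_invariant and the three-element domain are assumptions made for illustration, not drawn from the cited sources): it checks that the identity relation is mapped onto itself by every permutation of the domain, whereas an arbitrary non-logical relation such as "less than" is not.

```python
from itertools import permutations

def is_permutation_invariant(relation, domain):
    """True if the binary relation (a set of ordered pairs over `domain`)
    is mapped onto itself by every permutation of the domain, which is
    Tarski's invariance criterion specialised to binary relations."""
    for image_order in permutations(domain):
        sigma = dict(zip(domain, image_order))  # the permutation as a map
        image = {(sigma[a], sigma[b]) for (a, b) in relation}
        if image != relation:
            return False
    return True

domain = [1, 2, 3]
identity = {(x, x) for x in domain}
less_than = {(a, b) for a in domain for b in domain if a < b}

print(is_permutation_invariant(identity, domain))   # True: a candidate logical notion
print(is_permutation_invariant(less_than, domain))  # False: depends on a particular ordering
```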
Rules of Inference
In formal systems of classical logic, rules of inference provide the mechanisms for deriving new statements from existing ones while preserving truth. These rules ensure that if the premises are true, the conclusion must also be true, thereby maintaining the validity of logical derivations. Key examples include modus ponens, which licenses the inference of \psi from \phi \rightarrow \psi and \phi, and universal instantiation, which permits replacing a universally quantified variable with a specific term to obtain an instance of the quantified statement.[23][24]

Logical truth can be understood syntactically, as a theorem derivable from axioms using these rules of inference, or semantically, as a statement true in all models of the system. In classical logic, the syntactic approach emphasizes formal proofs constructed step by step by applying inference rules to axioms and previously derived lines, while the semantic view assesses truth across interpretations. The equivalence between the two notions is established through soundness (every provable statement is semantically valid) and completeness (every semantically valid statement is provable).[23] Gödel's completeness theorem shows that for first-order classical logic, all logical truths, that is, formulas true in every model, are provable from a suitable set of axioms and rules of inference, such as those of Hilbert-style or natural deduction systems.[23]

A concrete illustration is the natural deduction proof of the tautology (p \rightarrow q) \rightarrow (\neg q \rightarrow \neg p), which shows how rules such as implication elimination (modus ponens), negation introduction, and implication introduction derive logical truths:

1. Assume p \rightarrow q (hypothesis).
2. Assume \neg q (hypothesis).
3. Assume p (hypothesis for subproof).
4. From 1 and 3, infer q (implication elimination).
5. From 2 and 4, infer a contradiction (negation elimination).
6. Discharge 3, infer \neg p (negation introduction).
7. Discharge 2, infer \neg q \rightarrow \neg p (implication introduction).
8. Discharge 1, infer (p \rightarrow q) \rightarrow (\neg q \rightarrow \neg p) (implication introduction).
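The derivation can also be checked mechanically in a proof assistant. The following is a minimal sketch in Lean 4 (the theorem name contrapositive is chosen here for illustration): since ¬p is defined as p → False, the negation-introduction and negation-elimination steps of the proof above become ordinary function abstraction and application.

```lean
-- (p → q) → (¬q → ¬p), mirroring the natural deduction proof above.
-- `hpq` discharges assumption 1, `hnq` assumption 2, and `hp` assumption 3;
-- `hpq hp` is the implication elimination of step 4, and `hnq (hpq hp)`
-- the negation elimination of step 5, yielding the required ¬p.
theorem contrapositive (p q : Prop) : (p → q) → (¬q → ¬p) :=
  fun hpq hnq hp => hnq (hpq hp)
```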