Truth value
In logic, a truth value is the designation of a proposition as either true, denoted by T, or false, denoted by F.[1] A proposition, in this context, is a declarative sentence that asserts something about the world and is capable of being true or false, but not both simultaneously.[1] For example, the statement "2 + 2 = 4" has the truth value true, while "2 + 2 = 5" has the truth value false.[1] Truth values form the foundation of propositional logic, also known as sentential logic, where every well-formed formula (wff) is assigned exactly one of these two values, embodying the principle of bivalence.[2] This bivalent system admits no intermediate or third truth values, distinguishing it from multi-valued logics that might include options like "undetermined" or "possible."[3]

In practice, the truth value of a compound proposition, formed by combining simpler propositions with logical connectives such as negation (¬), conjunction (∧), or disjunction (∨), is determined through truth-functional rules: the overall value depends solely on the truth values of its atomic components.[4][3] These rules enable the construction of truth tables, systematic listings of all possible truth-value combinations for propositions and their compounds, which are essential for evaluating the validity of arguments and for identifying tautologies (statements always true) and contradictions (statements always false).[2] For instance, in a truth table for the conjunction P ∧ Q, the result is true only when both P and Q are true; otherwise, it is false.[4]

Beyond formal logic, truth values underpin philosophical inquiries into the nature of truth.[2] Correspondence theories, for example, hold that a proposition is true just in case it corresponds to reality,[5] though debates persist over whether all statements, such as ethical or modal claims, possess determinate truth values.[6]

Fundamentals
Definition
In logic and philosophy, a truth value is the semantic attribute assigned to a truth-bearer, such as a proposition, declarative sentence, or statement, indicating whether it holds as true, false, or, in non-classical systems, some other designation like indeterminate or partially true.[1] This assignment reflects the proposition's correspondence to reality or its satisfaction within a given interpretive framework.[7] The distinction between a truth value and its bearer is fundamental: the truth-bearer is the entity capable of being true or false, while the truth value is the property or designation it receives upon evaluation. For instance, the simple proposition "It is raining" serves as a truth-bearer; it receives the truth value true if precipitation is occurring at the relevant time and place, and false otherwise.[8] This separation allows logical analysis to focus on how bearers acquire values without conflating the content with its assessment.

The term originated in early 20th-century formal logic. Gottlob Frege coined it in his 1891 lecture "Function and Concept," where he treated truth values as objects resulting from the application of concepts to arguments, and elaborated it in his 1892 paper "On Sense and Reference," identifying the reference of a sentence with its truth value.[7] Bertrand Russell adopted and extended the notion in his collaborative work with Alfred North Whitehead on Principia Mathematica (1910–1913), using truth values to ground the semantics of propositional logic. In classical systems, this typically involves bivalence, limiting truth values to true and false.

Bivalence in Classical Systems
In classical logical systems, the principle of bivalence asserts that every proposition possesses exactly one of two possible truth values: true, denoted T or \top, or false, denoted F or \bot, with no intermediate or additional options available.[6] This binary framework forms the semantic foundation of classical logic, ensuring that declarative sentences are exhaustively and exclusively partitioned into the two truth-value categories, without gaps or overlaps.[6]

The philosophical basis for bivalence traces back to Aristotle, who articulated it through the law of excluded middle, or tertium non datur: for any proposition P, either P or its negation \neg P must hold, leaving no third alternative.[9] In his Metaphysics (Book IV, chapters 3–6), Aristotle defends the closely related law of non-contradiction as an indemonstrable first principle essential for rational discourse and scientific inquiry, positing that contradictory assertions cannot both be true simultaneously.[9]

A key implication of bivalence is the law of excluded middle, formalized as P \lor \neg P being invariably true for any proposition P, which guarantees the exhaustive coverage of all possibilities in binary terms.[6] Complementing this is the law of non-contradiction, expressed as \neg (P \land \neg P), which prohibits a proposition from being both true and false at once, thereby maintaining the mutual exclusivity of the two truth values.[9] Together, these laws underpin the stability and decisiveness of classical reasoning.

Bivalence nonetheless faces challenges in natural language, particularly with vague statements that give rise to paradoxes like the sorites, where incremental changes (e.g., removing one grain from a heap) blur the boundary between true and false, suggesting potential truth-value gaps or indeterminacy.[10] Such cases, as explored in semantic theories of vagueness, highlight tensions with strict bivalence, though classical systems uphold it as the default for precise propositional analysis.[10]

Logical Frameworks
Classical Logic
In classical propositional logic, propositions are assigned one of two truth values drawn from the Boolean domain: true, denoted \top, or false, denoted \bot. This bivalence underpins the system's semantics: every proposition must take exactly one of these values, with no intermediates or gaps.[11] The logic employs truth-functional semantics, so the truth value of any compound proposition is fully determined by the truth values of its atomic components via the functions associated with the logical connectives.

The primary connectives are negation (\neg), conjunction (\land), disjunction (\lor), and material implication (\to). Negation \neg P yields \bot if P is \top and \top if P is \bot. Conjunction P \land Q is \top if and only if both P and Q are \top; otherwise, it is \bot. Disjunction P \lor Q is \top if at least one of P or Q is \top; otherwise, it is \bot. Material implication P \to Q is \bot only if P is \top and Q is \bot; in all other cases, it is \top. These definitions ensure that compound propositions inherit their truth values systematically from simpler ones.[12]

Truth tables provide a complete enumeration of how these connectives operate across all possible input combinations. For negation:

| P | \neg P |
|---|---|
| \top | \bot |
| \bot | \top |
For conjunction:

| P | Q | P \land Q |
|---|---|---|
| \top | \top | \top |
| \top | \bot | \bot |
| \bot | \top | \bot |
| \bot | \bot | \bot |
For disjunction:

| P | Q | P \lor Q |
|---|---|---|
| \top | \top | \top |
| \top | \bot | \top |
| \bot | \top | \top |
| \bot | \bot | \bot |
For material implication:

| P | Q | P \to Q |
|---|---|---|
| \top | \top | \top |
| \top | \bot | \bot |
| \bot | \top | \top |
| \bot | \bot | \top |
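The tables above can be reproduced mechanically. The following Python sketch mirrors the truth-functional definitions given earlier; the function names (NOT, AND, OR, IMPLIES, truth_table) are illustrative choices rather than part of any standard library.

```python
from itertools import product

# Classical connectives over the Boolean domain {True, False}.
def NOT(p):
    return not p

def AND(p, q):
    return p and q

def OR(p, q):
    return p or q

def IMPLIES(p, q):
    # Material implication: false only when the antecedent is true
    # and the consequent is false.
    return (not p) or q

def truth_table(connective, arity=2):
    """List the connective's output for every combination of truth values."""
    return [(values, connective(*values))
            for values in product([True, False], repeat=arity)]

# Reproduces the implication table above: False appears only for (True, False).
for inputs, result in truth_table(IMPLIES):
    print(inputs, "->", result)
```

Because each connective is a function of its inputs alone, any compound formula can be evaluated row by row in the same way, which is exactly how tautologies and contradictions are detected.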
Intuitionistic Logic
In intuitionistic logic, truth values are interpreted within the framework of Heyting algebras, which provide a semantic foundation distinct from the binary true/false dichotomy of classical logic. A Heyting algebra is a bounded lattice, with falsehood (⊥) as its least element and truth (⊤) as its greatest, that admits intermediate truth values reflecting degrees of provability but, in general, lacks classical complementation, in which every element would have an exact Boolean negation.[13] These structures capture the intuitionistic emphasis on constructive proofs, where a proposition's truth is established only through an explicit verification rather than by elimination of falsity.[14]

The semantics of the logical connectives are defined relative to the lattice order. The implication P \to Q takes as its truth value the greatest element x of the algebra such that P \wedge x \leq Q, ensuring that assuming P constructively leads to Q.[13] Negation is derived as \neg P = P \to \bot, but double negation \neg \neg P does not necessarily equate to P, since the absence of a proof of falsehood for P does not constructively yield a proof of P.[15] Classical logic emerges as the special case in which the Heyting algebra reduces to the Boolean algebra containing only ⊥ and ⊤. Intuitionistic logic rejects the law of excluded middle, P \lor \neg P, which is not generally valid, since a proposition may lack a decisive proof in either direction with no intermediate value resolving it.[13] Truth values are assigned to propositions only when they are provable constructively; otherwise, they remain undetermined, in line with the Brouwer–Heyting–Kolmogorov interpretation, which equates truth with the existence of a proof. (A concrete computation on a small lattice is sketched at the end of this section.)

In realizability interpretations, such as Kleene's recursive realizability, the truth value of a proposition corresponds to the set of programs (realizers) that witness its constructibility: a proposition is true if there exists a computable function or index that verifies it relative to the natural numbers.[16] For instance, the truth value of an existential statement \exists x \, P(x) is realized by a pair consisting of a witness for x and a realizer for P(x), emphasizing effective computation over abstract existence.

Applications of these truth values extend to type theory and computer science via the Curry–Howard isomorphism, which equates proofs in intuitionistic logic with programs in typed lambda calculi: types represent propositions and terms represent proofs carrying constructive truth.[17] This correspondence enables formal verification of software and automated theorem proving, since the inhabitation of a type corresponds directly to the existence of a terminating program realizing the proposition's truth.[18]
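To make the relative pseudo-complement definition concrete, the following Python sketch computes implication, negation, and double negation on the three-element chain ⊥ < 1/2 < ⊤. The numeric encoding and function names are illustrative assumptions, not a standard formalization.

```python
from fractions import Fraction

# Three-element Heyting algebra: the chain BOT < HALF < TOP.
BOT, HALF, TOP = Fraction(0), Fraction(1, 2), Fraction(1)
ELEMENTS = [BOT, HALF, TOP]

def meet(a, b):
    # On a chain, the lattice meet is simply the minimum.
    return min(a, b)

def implies(a, b):
    # Relative pseudo-complement: the greatest x with a ∧ x ≤ b.
    return max(x for x in ELEMENTS if meet(a, x) <= b)

def neg(a):
    # Intuitionistic negation: ¬a = a → ⊥.
    return implies(a, BOT)

print(neg(HALF))                     # 0: the only x with min(1/2, x) <= 0
print(neg(neg(HALF)))                # 1: double negation jumps to the top
print(meet(HALF, neg(HALF)) == BOT)  # True: non-contradiction still holds
print(max(HALF, neg(HALF)) == TOP)   # False: excluded middle fails at 1/2
```

On this small lattice the law of non-contradiction survives while the excluded middle fails for the intermediate element, matching the intuitionistic behavior described above.

Non-Classical Extensions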
Multi-Valued Logic
Multi-valued logics extend classical bivalent systems by incorporating more than two truth values, typically to handle uncertainty, indeterminacy, or incompleteness in propositions. Unlike the binary truth values true (⊤) and false (⊥), these logics assign values such as an intermediate "unknown" (U) or "undefined" to statements, allowing finer-grained representations of logical status. The approach remains truth-functional: the truth value of a compound formula depends solely on the truth values of its components via defined operations.[19]

The historical roots of multi-valued logic trace back to Jan Łukasiewicz, who in 1920 proposed a three-valued system to address Aristotle's problem of future contingents, statements about events that are neither determinately true nor false at present (e.g., "There will be a sea battle tomorrow"). In this framework, the third value represented possibility or indeterminacy, challenging the principle of bivalence for tensed propositions. Łukasiewicz's innovation laid the groundwork for broader many-valued systems, later generalized to n-valued logics for finite n greater than two.[20]

A prominent example is Kleene's strong three-valued logic, developed in 1938 to model partial recursive functions and computational indeterminacy. Here the truth values are false (F), unknown (U), and true (T), ordered F < U < T, with the connectives extended as follows: negation ¬U = U; conjunction P ∧ Q = min(P, Q); and disjunction P ∨ Q = max(P, Q). This preserves classical behavior for determinate cases while assigning U to expressions involving undefined components, such as in halting-problem analyses.[19]

Łukasiewicz logic, originally three-valued but later extended to infinitely many values in [0,1] (with finite-valued systems as discrete special cases), defines the implication connective as P → Q = min(1, 1 − P + Q), enabling the logic to quantify degrees of entailment in a lattice structure. In the three-valued version, for instance, U → T = T and U → F = U, reflecting graded entailment. The system has been axiomatized and applied to modal interpretations of indeterminacy.[20] (A computational sketch of both systems appears below.)

Supervaluationism provides another approach to non-bivalent evaluation, particularly for vague predicates like "tall" or "heap," where borderline cases receive no determinate classical value. A proposition is true if it holds in all admissible valuations (e.g., all precise sharpenings of the vague concept), false if it fails in all, and indeterminate otherwise. This preserves classical logic for non-vague sentences while accommodating truth-value gaps due to vagueness, without altering the connectives directly.[21]
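The following Python sketch shows the Kleene and Łukasiewicz connectives side by side, using the common numeric encoding F = 0, U = 0.5, T = 1; the encoding and function names are illustrative choices.

```python
# Kleene's strong three-valued connectives, with the illustrative numeric
# encoding F = 0.0, U = 0.5, T = 1.0 (ordered F < U < T).
F, U, T = 0.0, 0.5, 1.0

def k_not(p):
    return 1.0 - p          # ¬U = U, ¬T = F, ¬F = T

def k_and(p, q):
    return min(p, q)        # U behaves as an intermediate value

def k_or(p, q):
    return max(p, q)

def l_implies(p, q):
    # Łukasiewicz implication, here restricted to the three values above.
    return min(1.0, 1.0 - p + q)

print(k_and(T, U))      # 0.5: true ∧ unknown = unknown
print(k_or(F, U))       # 0.5: false ∨ unknown = unknown
print(l_implies(U, T))  # 1.0: U → T = T
print(l_implies(U, F))  # 0.5: U → F = U
```

Restricting the inputs to 0 and 1 recovers the classical truth tables, which is the sense in which these systems conservatively extend bivalent logic.

Probabilistic and Fuzzy Variants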
Probabilistic and fuzzy variants of truth values extend classical bivalence by allowing degrees of truth to model vagueness, uncertainty, and partial belief, typically drawn from the unit interval [0,1], where 0 represents complete falsity and 1 complete truth.[22] These approaches address limitations of discrete multi-valued logics by incorporating continuous scales, enabling nuanced representations of real-world ambiguity.[22]

Fuzzy logic, introduced by Lotfi A. Zadeh in his seminal 1965 paper, formalizes truth values as membership degrees in fuzzy sets, allowing propositions to hold to varying extents rather than being strictly true or false.[23] For instance, a statement like "this person is tall" might have a truth value of 0.8, reflecting partial applicability of the vague predicate "tall."[23] Logical operations are defined accordingly: conjunction is often the minimum \min(P, Q) or the product P \times Q, while disjunction uses the maximum \max(P, Q) or the probabilistic sum P + Q - P \times Q, each preserving the interval [0,1].[22]

In probabilistic logic, truth values are interpreted as probabilities measuring the degree of belief in a proposition, integrating uncertainty through frameworks like Bayesian inference.[24] Here the truth of a sentence is its probability under a distribution over possible worlds, updated via Bayes' theorem to reflect new evidence; for example, the posterior probability P(H|E) incorporates the prior belief P(H) and the likelihood P(E|H).[25] This approach, formalized in early works like Nilsson's 1986 probabilistic logic, treats logical entailment as probabilistic consequence, where a premise entails a conclusion if the latter's probability exceeds a threshold given the former.[24] (A small numerical sketch of both the fuzzy connectives and a Bayesian update follows at the end of this section.)

The Dunn–Belnap four-valued logic provides another variant by combining the classical truth values {T, F} with epistemic dimensions of knowledge and ignorance, yielding T (true and known), F (false and known), B (both true and false, or inconsistent), and N (neither, or unknown).[26] Introduced by Nuel Belnap in 1977, this system models information states in reasoning systems, such as databases with conflicting or incomplete data, where truth is assessed independently along a truth/falsity dimension and an information/gap dimension.[27]

In modern AI applications, fuzzy and probabilistic truth values model uncertainty in machine learning, for example through confidence scores in neural networks, where softmax outputs yield probabilistic degrees of class membership for predictions. Type-2 fuzzy sets enhance interpretability in explainable AI frameworks by modeling higher-order uncertainty, as in image classification tasks.[28] These variants also underpin Bayesian neural networks, where probability distributions over weights quantify epistemic uncertainty in high-stakes domains like autonomous driving.[29]
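As a minimal sketch of the operations named above, the following Python snippet evaluates fuzzy conjunction and disjunction on the unit interval and performs a single Bayesian update; the function names and example numbers are illustrative assumptions, not drawn from any particular library.

```python
# Fuzzy connectives on the unit interval [0, 1], plus a one-step Bayesian
# update; names and numbers below are illustrative only.

def fuzzy_not(p):
    return 1.0 - p

def fuzzy_and(p, q, mode="min"):
    # Minimum t-norm by default, product t-norm otherwise.
    return min(p, q) if mode == "min" else p * q

def fuzzy_or(p, q, mode="max"):
    # Maximum by default, probabilistic sum otherwise.
    return max(p, q) if mode == "max" else p + q - p * q

# Degrees to which "this person is tall" and "this person is heavy" hold.
tall, heavy = 0.8, 0.4
print(fuzzy_and(tall, heavy))           # 0.4  (min)
print(fuzzy_or(tall, heavy, "prob"))    # 0.88 (probabilistic sum)

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    # Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E).
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
    return p_e_given_h * prior_h / p_e

# Belief in hypothesis H rises from 0.3 to about 0.66 after observing E.
print(posterior(0.3, 0.9, 0.2))
```

Algebraic Semantics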
Boolean Algebras
In the context of truth values, a Boolean algebra provides the algebraic semantics for classical bivalence, modeling the two truth values, true (⊤) and false (⊥), as the top and bottom elements of a partially ordered set. Formally, a Boolean algebra is a distributive lattice equipped with a complementation operation ¬ satisfying axioms that give every element a unique complement. The lattice operations are the meet ∧ (corresponding to logical conjunction) and the join ∨ (corresponding to logical disjunction), with the lattice order defined by a ≤ b if a ∧ b = a (read logically, a implies b). Distributivity holds: for all elements a, b, c in the algebra,

a \wedge (b \vee c) = (a \wedge b) \vee (a \wedge c)
and
a \vee (b \wedge c) = (a \vee b) \wedge (a \vee c).
Complementation ensures that for every element x, x ∧ ¬x = ⊥ and x ∨ ¬x = ⊤, where ⊥ is the least element (absorbing for meet) and ⊤ is the greatest element (absorbing for join). These properties make the two-element Boolean algebra {⊥, ⊤} the canonical model for classical truth values under conjunction, disjunction, and negation.[30]

Any Boolean algebra admits homomorphisms to the two-element algebra of truth values {⊤, ⊥}, which assign consistent truth valuations to its elements. Such a homomorphism φ: B → {⊤, ⊥} preserves the operations: φ(a ∧ b) = φ(a) ∧ φ(b), φ(a ∨ b) = φ(a) ∨ φ(b), and φ(¬a) = ¬φ(a), with φ(⊥) = ⊥ and φ(⊤) = ⊤. These homomorphisms correspond precisely to the ultrafilters of the algebra, where an ultrafilter U is a maximal proper filter (an upward-closed set, closed under meets, containing ⊤ but not ⊥) such that for every a in B, either a ∈ U or ¬a ∈ U, but not both. The homomorphism is defined by φ(a) = ⊤ if a ∈ U and ⊥ otherwise, effectively evaluating the algebra at the "truth assignment" given by the ultrafilter. This connection underscores how Boolean algebras generalize the assignment of truth values in classical logic, with ultrafilters representing complete, consistent valuations.[31][32]

A foundational result linking Boolean algebras to topology and set theory is Stone's representation theorem, which shows that every Boolean algebra is isomorphic to a field of clopen sets in a compact, totally disconnected Hausdorff space known as its Stone space. Specifically, for a Boolean algebra B, the Stone space X consists of the ultrafilters (equivalently, the maximal ideals) of B, equipped with the topology generated by the basis sets U_a = {ultrafilters containing a} for a ∈ B. The isomorphism maps each element a ∈ B to the clopen set U_a, preserving the algebra operations: U_a ∩ U_b = U_{a ∧ b}, U_a ∪ U_b = U_{a ∨ b}, and X \ U_a = U_{¬a}. This representation, proved by Stone in 1936, exhibits Boolean algebras as concrete set algebras, with the two truth values recovered as the possible images of elements under homomorphisms to {⊤, ⊥}.[33]

In classical propositional logic, the Lindenbaum–Tarski algebra provides a direct link between syntactic formulas and Boolean structure. For a set of propositional formulas, define the equivalence φ ∼ ψ to hold when the theory proves φ ↔ ψ; the equivalence classes [φ] form the carrier of the algebra, with operations [φ] ∧ [ψ] = [φ ∧ ψ], [φ] ∨ [ψ] = [φ ∨ ψ], and ¬[φ] = [¬φ]. The constants are ⊥ = [φ ∧ ¬φ] and ⊤ = [φ ∨ ¬φ]. This construction yields a Boolean algebra, known as the Lindenbaum–Tarski algebra of the logic, which for pure propositional logic is the free Boolean algebra on generators corresponding to the atomic propositions. It shows how classical tautologies and contradictions align with the algebraic identities x ∨ ¬x = ⊤ and x ∧ ¬x = ⊥, providing an algebraic quotient of the formula syntax that captures bivalent truth valuations.[34]
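The correspondence between ultrafilters and two-valued homomorphisms can be checked concretely on a small finite example. The following Python sketch (the universe {1, 2, 3} and names such as phi and comp are illustrative assumptions) builds the powerset Boolean algebra of a three-element set, verifies the complement laws, and confirms that the principal ultrafilter of sets containing 1 induces an operation-preserving map onto {⊥, ⊤}.

```python
from itertools import combinations

# Powerset Boolean algebra on a small universe: meet is intersection,
# join is union, complement is set difference from the universe.
UNIVERSE = frozenset({1, 2, 3})

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

ELEMENTS = powerset(UNIVERSE)

def meet(a, b):
    return a & b

def join(a, b):
    return a | b

def comp(a):
    return UNIVERSE - a

# Every element has a complement: a ∧ ¬a = ⊥ and a ∨ ¬a = ⊤.
assert all(meet(a, comp(a)) == frozenset() and join(a, comp(a)) == UNIVERSE
           for a in ELEMENTS)

def phi(a):
    # Valuation induced by the principal ultrafilter of sets containing 1,
    # mapping onto {⊥, ⊤} represented here as False/True.
    return 1 in a

# The valuation preserves meet, join, and complement, as a homomorphism must.
assert all(phi(meet(a, b)) == (phi(a) and phi(b)) and
           phi(join(a, b)) == (phi(a) or phi(b)) and
           phi(comp(a)) == (not phi(a))
           for a in ELEMENTS for b in ELEMENTS)

print("complement laws and the ultrafilter homomorphism check out")
```

On a finite Boolean algebra every ultrafilter is principal in this way, so varying the chosen point enumerates all homomorphisms onto the two-element algebra of truth values.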