The principle of bivalence is a foundational doctrine in classical logic and philosophy asserting that every declarative statement possesses exactly one of two truth values—true or false—never both and never an intermediate value.[1] This principle underpins the binary semantics of classical propositional logic, where truth values enable the evaluation of compound statements via truth tables and support inferences based on the law of excluded middle (every proposition or its negation holds).[1]

The doctrine traces its origins to Aristotle's De Interpretatione, particularly chapters 4 and 9, where he defines declarative sentences as capable of being true or false while distinguishing them from other linguistic forms such as questions or commands.[2] In chapter 9, Aristotle qualifies bivalence by arguing that statements about future contingents—such as "There will be a sea battle tomorrow"—are neither true nor false at present, since their outcomes remain indeterminate; this avoids deterministic implications that would undermine human agency and contingency.[2] This nuanced position reconciles bivalence with the law of excluded middle (for any contradictory pair, one must hold) but rejects the unrestricted rule that contradictory opposites cannot both be false, especially for future particulars.[2] Later ancient philosophers, including the Stoics, temporalized bivalence to accommodate changing truth values over time, applying it to assertibles at specific moments.[1]

In contemporary philosophy and logic, bivalence remains central to classical systems but faces significant challenges from non-classical alternatives.[1] Critics argue that it fails for vague predicates, as in the Sorites paradox (e.g., a series of gradually fading red tubes where no precise boundary exists between "red" and "not red"), leading to indeterminate truth values that bivalence cannot accommodate without arbitrary cutoffs.[3] Similarly, future contingents and statements in set theory (such as the Continuum Hypothesis) may lack determinate truth values due to indeterminacy rather than falsity, prompting revisions like "every determinate statement is bivalent."[1] Alternatives include multivalent logics, which introduce intermediate values (e.g., "indeterminate" or degrees of truth from 0% to 100%) to handle vagueness, quantum indeterminacy, and higher-order vagueness, while preserving the law of excluded middle through exclusion negation.[4] These debates highlight bivalence's role in broader questions of semantics, determinism, and the nature of truth, influencing fields from metaphysics to computer science.[1]
Fundamentals
Definition
The principle of bivalence is a fundamental doctrine in philosophy and logic asserting that every declarative sentence or proposition possesses exactly one of two possible truth values—true or false—with no allowance for a third truth value, indeterminacy, or degrees of truth.[5][6] This means that for any well-formed proposition, its truth status is decisively binary, ensuring that it is not possible for a statement to lack a truth value entirely or to hold multiple values simultaneously.[7]

While related to the concept of binary truth values in general, bivalence specifically emphasizes the exhaustive and exclusive nature of these values. In classical logic, it applies to all propositions.[5] For instance, a simple proposition such as "The current temperature in Paris exceeds 15 degrees Celsius" is either true or false at any specific moment, based on the actual conditions, with no intermediate or undefined status permitted. Similarly, "There are exactly three apples on the table" holds precisely one truth value, determined by empirical fact, underscoring the principle's commitment to clear semantic evaluation.[8]

The term "bivalence" derives from the prefix "bi-" (meaning two) combined with "valence" (referring to value or capacity), highlighting the duality of truth values central to the principle.[8] Philosophically, this doctrine is motivated by the requirement for resolute truth determinations in rational discourse and inference, enabling logical systems to operate on unambiguous foundations that mirror the perceived structure of reality.[5] It underpins classical logic as a core semantic assumption, facilitating reliable reasoning without the complications of truth-value gaps.[8]
Formal Statement
The principle of bivalence formally asserts that every proposition P is assigned exactly one of two truth values: true (T) or false (F), with no proposition lacking a truth value or possessing both simultaneously. Symbolically, this is captured by the semantic entailment \models P \lor \neg P, indicating that in all possible valuations, the disjunction of P and its negation holds, but the emphasis lies on the semantic valuation function V, where V(P) \in \{T, F\} for every proposition P.[7] This formulation ensures totality and exclusivity in truth assignment, forming the semantic foundation of classical logic.

In bivalent semantics, truth-value functions operate over a domain of propositions, mapping each to precisely one element of the codomain {T, F} via a total evaluation relation. Such functions determine the truth values of compound propositions based on their atomic components, adhering strictly to binary outcomes. For instance, the negation operator \neg in a bivalent system inverts the truth value, as specified by the following truth table:
P      \neg P
T      F
F      T
[7] This binary structure contrasts with multi-valued systems, which permit additional truth values beyond T and F, thereby relaxing the exhaustive dichotomy.[7] The principle of bivalence thus ensures the completeness of semantic valuations in classical logic by guaranteeing that every proposition receives a determinate truth value.[7]
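The total, two-valued valuation just described can be illustrated with a short Python sketch. The tuple-based formula encoding and the function name `evaluate` are conventions chosen here for illustration, not part of any standard notation; the point is that the valuation is total (every atom has a value) and strictly binary.

```python
# A minimal sketch of a bivalent valuation: every formula maps to exactly
# one of two truth values, True or False (no gaps, no third value).
# Formulas are encoded as strings (atoms) or nested tuples (compounds);
# this encoding is an illustrative convention.

def evaluate(formula, valuation):
    """Recursively evaluate a formula under a total valuation V: atoms -> {True, False}."""
    if isinstance(formula, str):              # atomic proposition
        return valuation[formula]             # totality: every atom has a value
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], valuation)   # negation inverts the value
    if op == "and":
        return evaluate(args[0], valuation) and evaluate(args[1], valuation)
    if op == "or":
        return evaluate(args[0], valuation) or evaluate(args[1], valuation)
    raise ValueError(f"unknown connective: {op}")

# Reproduces the negation truth table above by enumerating both values of P:
for p in (True, False):
    print(p, evaluate(("not", "P"), {"P": p}))
```

Because the valuation is a total function into {True, False}, every compound formula also receives exactly one of the two values, mirroring the codomain restriction V(P) \in \{T, F\}.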
Historical Development
Ancient Origins
The principle of bivalence finds its earliest systematic philosophical articulation in the works of Aristotle, particularly in his Metaphysics Book Gamma (IV), where it is intertwined with the defense of the law of non-contradiction. In Metaphysics Gamma 4 (1006b), Aristotle asserts the exclusivity of truth and falsity, stating, "it is not possible to say truly at the same time that the same thing is and is not a man," emphasizing that contradictory predicates cannot simultaneously hold true of the same subject in the same respect. This formulation underscores bivalence as a foundational principle, where every affirmative or negative statement about a definite matter must be either true or false, excluding any third possibility. Aristotle links this directly to the law of non-contradiction, arguing that denying bivalence would lead to absurdities, such as the collapse of rational discourse.[9]

Further elaborating in Metaphysics Gamma 7 (1011b25–27), Aristotle provides a canonical definition of truth and falsity that implicitly affirms bivalence for atomic propositions: "To say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, or of what is not that it is not, is true." Here, he applies this to definite statements about present or past realities, treating simple declarative sentences as possessing exactly one truth-value. However, in De Interpretatione chapter 9, Aristotle introduces a qualification regarding future contingents, such as the famous sea-battle example, where statements about undetermined future events (e.g., "There will be a sea battle tomorrow") lack a definite truth-value at the present moment to preserve human agency and contingency, though he maintains bivalence for non-contingent, definite assertions.
This nuanced view positions bivalence as holding robustly for atomic propositions concerning settled facts, while allowing flexibility for the open future.

Aristotle's ideas exerted significant influence on subsequent ancient Greek philosophy, notably in Stoic logic. The Stoic logician Chrysippus (c. 279–206 BCE), building on Aristotelian foundations, extended bivalence to encompass all assertoric sentences, including those about future events, rejecting any truth-value gaps to align with their deterministic worldview.[9] Chrysippus argued that every proposition (axiōma) is either true or false at all times, thereby universalizing bivalence across temporal contexts and reinforcing its role in Stoic propositional logic.[10] This development marked a shift from Aristotle's provisional exception for future contingents, solidifying bivalence as a cornerstone of Hellenistic logical inquiry, with early ties to the law of excluded middle as an inferential principle.[9]
Modern Formulations
In the medieval era, refinements to the principle of bivalence emerged through efforts to integrate it with Aristotelian syllogistic logic, particularly in addressing challenges posed by future contingents. Boethius (c. 480–524), in his Second Commentary on Aristotle's De Interpretatione, maintained bivalence by interpreting future contingent propositions as true or false in an indefinite sense, thereby preserving contingency in divine foreknowledge while ensuring compatibility with syllogistic deductions that assume binary truth values for categorical statements.[11] This approach influenced subsequent scholastic logic by embedding bivalence within the structure of syllogisms, avoiding determinism without abandoning the principle's universality. Peter Abelard (1079–1142) advanced this integration in works like the Dialectica, affirming bivalence for all assertoric propositions but qualifying future contingents as indeterminately true or false based on undetermined truth-makers, thus refining syllogistic inference to handle modal and temporal nuances without multi-valued alternatives.[12]

The 19th century marked a shift toward formalization, with Gottlob Frege's Begriffsschrift (1879) establishing bivalence as the cornerstone of two-valued truth-functions in symbolic logic. Frege posited that every assertoric sentence denotes one of two objects—the True or the False—forming the basis for truth-functional connectives like negation and implication, which operate exclusively on this binary framework to compose complex propositions. This metatheoretical commitment to bivalence enabled the precise notation and inference rules that distinguished modern propositional logic from traditional syllogistics.

A pivotal development occurred in Alfred North Whitehead and Bertrand Russell's Principia Mathematica (1910–1913), which presupposed bivalence to construct the classical propositional calculus as the foundation for logicism.
The system's primitive propositions and derivation rules, such as those for implication and negation, rely on every well-formed formula possessing exactly one truth value, true or false, ensuring the completeness and consistency of deductions in reducing arithmetic to logical primitives.

Alfred Tarski's semantic theory of truth, introduced in his 1933 work Pojęcie prawdy w językach nauk dedukcyjnych, formalized bivalence through the notion of satisfaction in models. Tarski defined truth recursively: atomic sentences are true if satisfied by assignments in a model, and compound sentences inherit truth values bivalently via connectives and quantifiers, yielding a materially adequate definition where every sentence is either true or false relative to an interpretation. This framework provided a model-theoretic semantics that solidified bivalence as essential for classical logical systems, influencing subsequent developments in formal semantics.
Logical Relations
Law of Excluded Middle
The law of the excluded middle (LEM) is a fundamental principle in classical logic, asserting that for any proposition P, the disjunction P \lor \neg P is always true, constituting a tautology. This syntactic rule guarantees that no proposition can occupy a middle ground between truth and falsity in the logical framework.[5]

Bivalence semantically underpins the LEM by positing that every proposition must possess exactly one of two truth values—true or false—in a two-valued semantics. Under this assignment, the disjunction P \lor \neg P invariably evaluates to true, as the proposition cannot lack a truth value or hold an intermediate one; if P is true, the first disjunct succeeds, and if false, the second does. This semantic validation ensures the LEM's status as a logical necessity in classical systems.[5]

To see how bivalence entails the LEM, consider the following sketch: Assume bivalence holds, so P is either true or false. In the case where P is true, P \lor \neg P is true via the first disjunct. If P is false, then \neg P is true, making the disjunction true via the second disjunct. Thus, regardless of P's value, P \lor \neg P holds, proving the LEM as a direct consequence.[5]

Importantly, bivalence operates as a semantic principle dictating the nature of truth-value assignments in the metalanguage, whereas the LEM functions syntactically as a theorem or axiom schema within the object language of classical logic, enabling valid inferences without direct reference to semantics. The LEM shares historical roots with Aristotle's foundational principles of opposition in logic.[5][13]
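The two-case proof sketch above can be carried out mechanically in a few lines of Python; the function name `lem_holds` is an illustrative choice.

```python
# Under bivalence, P takes exactly one of two values; checking both cases
# exhaustively mirrors the case analysis in the proof sketch above.

def lem_holds(p):
    """P ∨ ¬P under a bivalent assignment to P."""
    return p or (not p)

# Both possible values of P make the disjunction true: a tautology.
print(all(lem_holds(p) for p in (True, False)))   # True
```

The exhaustiveness of the two cases is exactly what bivalence supplies; in a semantics with a third value or a gap, the case split would not cover all assignments.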
Law of Non-Contradiction
The Law of Non-Contradiction (LNC) states that for any proposition P, it is impossible for both P and its negation \neg P to be true simultaneously, formally expressed as \neg (P \wedge \neg P).[14] This principle asserts the impossibility of contradictions in logical and ontological terms, ensuring that no statement can affirm and deny the same thing at the same time in the same respect.[15]

The principle of bivalence, which posits that every proposition is either true or false (and exactly one), plays a crucial role in upholding the LNC by assigning exclusive truth values to propositions and their negations. Under bivalence, the truth of P precludes the truth of \neg P, and vice versa, thereby rendering any conjunction P \wedge \neg P necessarily false and preventing the overlap that would constitute a contradiction.[16] This exclusivity ensures that contradictions cannot obtain, as the binary framework leaves no room for a proposition to hold both values simultaneously.[14]

Logically, bivalence implies the LNC because the assignment of a unique truth value to every proposition guarantees that contradictory pairs cannot both be true, deriving the non-contradictoriness directly from the exhaustive and mutually exclusive nature of truth and falsity.[16] In classical semantics, this implication holds tautologically, as the semantic structure of bivalence enforces the falsity of all contradictory statements across any interpretation.[14]

In Aristotle's metaphysics, the LNC holds primacy as the most certain and indemonstrable principle, foundational to rational thought and being itself.[15] Aristotle prioritizes the LNC ontologically, arguing it is impossible for the same attribute to belong and not belong to the same subject simultaneously.[14]
Role in Classical Logic
Propositional Logic
In classical propositional logic, the principle of bivalence asserts that every well-formed formula, under a given interpretation, evaluates to exactly one of two truth values: true or false. This binary valuation enables the systematic construction of truth tables, which exhaustively determine the semantic behavior of compound formulas built from propositional variables and connectives like conjunction (\land), disjunction (\lor), negation (\neg), and material implication (\to). For any formula involving n distinct propositional variables, bivalence guarantees precisely 2^n possible truth assignments, each row of the truth table representing one such assignment and its resulting truth value for the entire formula. This finite structure, as utilized by Emil Post in his 1921 proof of completeness, provides a complete semantic evaluation method that directly reflects the exhaustive dichotomy of truth and falsity.[17]

The reliance on bivalent models is central to the soundness and completeness theorems of propositional logic, which state that a formula is provable from a set of premises if and only if it is true in every bivalent interpretation satisfying those premises. Soundness ensures that every provable formula is semantically valid, while completeness guarantees that every semantically valid formula is provable; both properties hold because bivalence defines a discrete, total semantics without intermediate or undefined values. Post established these theorems in 1921, demonstrating the equivalence between syntactic deduction and bivalent truth preservation for propositional systems.[18]

A representative example is the truth table for the implication P \to Q, which under bivalence evaluates as follows:
P      Q      P \to Q
True   True   True
True   False  False
False  True   True
False  False  True
This table confirms that P \to Q is false only when P is true and Q is false, and true in all other cases, illustrating how bivalence resolves the connective's semantics across all four possible assignments for two variables.

Bivalence further ensures the decidability of propositional validity: to determine if a formula is a tautology (true under every interpretation), one constructs its truth table and verifies that the final column contains only true values across the 2^n rows. Since n is finite for any given formula, this enumeration algorithm always terminates, providing a mechanical decision procedure for semantic entailment and logical equivalence in propositional logic.[19]
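The enumeration procedure just described is easily sketched in Python. The helper names (`is_tautology`, `implies`) and the formula encoding as a function over valuations are illustrative conventions; the loop over `2^n` assignments is the decision procedure itself.

```python
from itertools import product

def is_tautology(formula, atoms):
    """Decide tautologyhood by enumerating all 2^n bivalent assignments."""
    for values in product([True, False], repeat=len(atoms)):
        valuation = dict(zip(atoms, values))
        if not formula(valuation):
            return False          # a falsifying row refutes the formula
    return True                   # true in every row: a tautology

implies = lambda a, b: (not a) or b     # material implication under bivalence

# (P -> Q) -> ((Q -> R) -> (P -> R)): hypothetical syllogism, a classical tautology.
f = lambda v: implies(implies(v["P"], v["Q"]),
                      implies(implies(v["Q"], v["R"]), implies(v["P"], v["R"])))
print(is_tautology(f, ["P", "Q", "R"]))   # True

# P -> Q alone is not a tautology: it is false when P is true and Q is false.
print(is_tautology(lambda v: implies(v["P"], v["Q"]), ["P", "Q"]))   # False
```

Termination is guaranteed because the loop runs over exactly 2^n rows for n atoms, which is finite for any given formula.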
Predicate Logic
In first-order predicate logic, the principle of bivalence extends to quantified statements through a recursive definition of truth in models, where every well-formed formula is either true or false relative to a given structure.[20] The universal quantifier \forall x \, P(x) is interpreted as true in a model \mathcal{M} with domain D if and only if P(d) holds for every d \in D, and false otherwise; the existential quantifier \exists x \, P(x) is true if P(d) holds for at least one d \in D, and false if it holds for none. This treatment ensures that complex formulas involving quantifiers and connectives inherit the two-valued semantics from simpler atomic predicates, maintaining overall truth or falsity without intermediate values.[21]

Tarski-style models formalize this via satisfaction relations in structures \mathcal{M} = \langle D, I \rangle, where D is the domain and I assigns interpretations to predicates and functions. Atomic formulas P(t_1, \dots, t_n) are satisfied (true) if the interpretations of the terms under I stand in the relation I(P) \subseteq D^n, and false otherwise; satisfaction for quantified and compound formulas is defined recursively, yielding a strict true/false dichotomy for all sentences.[21] This framework presupposes bivalence at the atomic level and propagates it upward, providing a semantic foundation for classical predicate logic distinct from its propositional basis by incorporating variable domains and relations.[20]

Bivalence holds semantically in these models even though first-order logic faces undecidability: Church demonstrated in 1936 that there is no effective procedure to determine the truth of arbitrary sentences in all models, despite each individual sentence being bivalently true or false in any given structure.[22] A key result illustrating the robustness of bivalent models is the Löwenheim-Skolem theorem, which states that if a first-order theory has an infinite model, then for every infinite cardinal \kappa, it has a
model of cardinality \kappa, each with bivalent satisfaction relations.[23] This theorem underscores how bivalence applies uniformly across models of varying sizes, enabling diverse interpretations while preserving the true/false assignment.[23]
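Over a finite domain, the Tarskian quantifier clauses above amount to an exhaustive check of every element. The following Python sketch makes that concrete; the domain and the interpretation of the predicate "Even" are invented for illustration.

```python
# Sketch: Tarskian evaluation of quantified formulas over a finite domain.
# Bivalence means every sentence comes out True or False in the structure.

domain = {1, 2, 3, 4}
even = lambda x: x % 2 == 0       # interpretation I("Even") as a subset of D

def forall(pred, dom):
    """∀x P(x): true iff pred(d) holds for every element of the domain."""
    return all(pred(d) for d in dom)

def exists(pred, dom):
    """∃x P(x): true iff pred(d) holds for at least one element."""
    return any(pred(d) for d in dom)

print(forall(even, domain))   # False: 1 and 3 are not even
print(exists(even, domain))   # True: 2 is even
```

For infinite domains no such exhaustive check is available, which is one intuitive face of Church's undecidability result: each sentence still has a bivalent truth value in a structure, but there is no general effective procedure for computing it.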
Advanced Concepts
Suszko's Thesis
Roman Suszko, a Polish logician, proposed in 1975 that logics presented as many-valued, such as Jan Łukasiewicz's three-valued logic, can be semantically characterized using only two logical values: true and false. According to Suszko, these systems employ additional values not as fundamental logical truths but as auxiliary algebraic constructs, with semantics ultimately relying on a bivalent partition into designated (true-like) and non-designated (false-like) categories. This reduction demonstrates that the apparent many-valuedness is illusory at the level of logical valuation, preserving the core inferential structure through bivalent models.[24]

A concrete illustration of Suszko's approach appears in his analysis of Kleene's strong three-valued logic, where the truth values are true (T), false (F), and undefined (U).[24] In this system, the intermediate U value functions as non-designated, akin to false, while T is designated as true; the semantics can thus be reformulated using bivalent assignments that map formulas to true or false based on whether they preserve designated status under the logic's operations.[25] This bivalent core captures the entailment relations without requiring the third value as a primitive logical truth, reducing the system to an equivalent two-valued framework.[24]

Philosophically, Suszko's thesis, elaborated in his 1977 paper, asserts the universality of bivalence by challenging the notion of genuine multi-valued semantics, arguing that any proliferation of logical values beyond true and false represents a conceptual error.[26] He famously described such multiplications as a "mad idea," emphasizing that structural Tarskian logics inherently admit bivalent interpretations, thereby affirming bivalence as the foundational principle underlying all propositional logics, even those ostensibly many-valued.[26] This perspective underscores the primacy of true and false as the sole logical values in semantic theorizing.[26]
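The designated/non-designated partition for strong Kleene logic can be sketched in Python. The numeric encoding of the three values (T = 1, U = 1/2, F = 0) is a common convention assumed here, as are the helper names; what matters is that only one bivalent distinction, designated versus not, does logical work.

```python
# Sketch of Suszko's reduction for strong Kleene logic K3: three algebraic
# values T, U, F, but the logical content is carried by the bivalent split
# into designated (T) and non-designated (U, F).

T, U, F = 1.0, 0.5, 0.0

def k3_not(a):    return 1.0 - a        # strong Kleene negation
def k3_and(a, b): return min(a, b)      # strong Kleene conjunction
def k3_or(a, b):  return max(a, b)      # strong Kleene disjunction

def designated(a):
    """Suszko's bivalent valuation: 'true' iff the algebraic value is designated."""
    return a == T

# P ∨ ¬P under K3: when P = U the disjunction is U, hence NOT designated;
# the bivalent reading marks it as untrue rather than as a third truth value.
for p in (T, U, F):
    print(p, designated(k3_or(p, k3_not(p))))   # only P = U fails to be designated
```

On this reading, U is an algebraic bookkeeping value: the bivalent function `designated` suffices to characterize which sentences count as true, which is the core of Suszko's claim.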
Bivalence in Non-Classical Logics
Non-classical logics depart from the principle of bivalence by introducing alternative semantics or truth-value schemes that accommodate undecidability, indeterminacy, or inconsistency without assuming every proposition is strictly true or false. These systems maintain logical coherence while addressing limitations in classical frameworks, often by rejecting the law of excluded middle or allowing intermediate or glutty truth values.[27]

Intuitionistic logic, developed by L.E.J. Brouwer in the 1920s, rejects bivalence for propositions that are undecidable, asserting that a statement is true only if a constructive proof of it exists. In this proof-based semantics, undecidable mathematical statements, such as those involving infinite collections without effective decision procedures, lack a definite truth value until proven or disproven. Brouwer argued that the law of excluded middle, which underpins bivalence, applies solely to finite cases and is unwarranted for broader mathematical domains.[27]

Many-valued logics extend classical bivalence by incorporating additional truth values, as pioneered by Jan Łukasiewicz in 1920 with his three-valued system. This logic assigns values of true (T), false (F), or indeterminate (1/2) to propositions, particularly those concerning future contingents, where the third value captures possibilities not yet determined. Łukasiewicz's framework challenges bivalence by allowing statements to occupy a neutral status, thereby supporting logical systems that model modality and indeterminism without forcing binary outcomes.[28]

Supervaluationism offers a bivalent-preserving strategy for handling vagueness through multiple admissible valuations or precisifications of predicates. A statement is deemed true if it holds under every such precisification (supertruth) and false otherwise, effectively maintaining bivalence at the level of supervaluations while permitting truth-value gaps in borderline cases.
This approach, elaborated by Kit Fine in 1975 and David Lewis in 1982, upholds classical theorems like the law of excluded middle by ensuring supervaluations assign definite truth values to disjunctions.[29]

Paraconsistent logics challenge bivalence's exclusivity by tolerating contradictions without the explosive consequences of classical logic, where a single inconsistency entails everything. These systems permit truth-value gluts, where propositions can be both true and false, while preserving non-trivial reasoning through restricted inference rules that avoid deriving arbitrary conclusions from contradictions. By deviating from strict bivalence, paraconsistent logics enable coherent handling of inconsistent data, as seen in semantics like Graham Priest's Logic of Paradox, which uses a three-valued scheme.[30]

While some non-classical logics appear to require multiple truth values, Roman Suszko's thesis contends that many such systems can be reduced to bivalent semantics via abstract algebraic interpretations.[31]
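The supervaluationist strategy can be sketched concretely in Python. The predicate "tall" and the candidate cutoffs below are invented for illustration; each cutoff plays the role of one admissible precisification.

```python
# Sketch of supervaluationist evaluation: a vague predicate admits several
# admissible precisifications; a sentence is supertrue iff true on all of
# them, superfalse iff false on all, and otherwise falls into a gap.

precisifications = [170, 175, 180]    # candidate height cutoffs (cm) for "tall"

def tall(height_cm, cutoff):
    return height_cm >= cutoff

def supervalue(height_cm):
    """Return True (supertrue), False (superfalse), or None (truth-value gap)."""
    verdicts = {tall(height_cm, c) for c in precisifications}
    return verdicts.pop() if len(verdicts) == 1 else None

print(supervalue(190))    # True: tall on every precisification (supertrue)
print(supervalue(160))    # False: tall on no precisification (superfalse)
print(supervalue(172))    # None: borderline, a truth-value gap

# Yet excluded middle is supertrue even in the gap: on each precisification,
# "tall(172) or not tall(172)" comes out true.
print(all(tall(172, c) or not tall(172, c) for c in precisifications))   # True
```

The final check illustrates the signature feature of supervaluationism: the disjunction P ∨ ¬P is supertrue even when neither disjunct is, so classical theorems survive while strict bivalence for individual sentences does not.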
Criticisms
Future Contingents
The problem of future contingents poses a significant challenge to the principle of bivalence, particularly through statements about events that are not yet determined. In Chapter 9 of De Interpretatione, Aristotle presents the famous sea-battle argument, questioning whether a proposition such as "There will be a sea-battle tomorrow" is true or false at present.[32] He argues that assigning a definite truth value now would imply determinism, rendering the future necessary rather than contingent, yet denying bivalence to such statements risks undermining the law of excluded middle.[32] This dilemma suggests that future contingents may lack a determinate truth value in the present, threatening bivalence's universal applicability.[33]

Ancient responses from the Megarian and Stoic schools sought to uphold strict bivalence despite these concerns. The Megarian philosopher Diodorus Cronus, through his Master Argument, contended that there are no true future contingents, as past truths are necessary and the impossible cannot follow from the possible, thereby eliminating indeterminacy to preserve bivalence at the cost of contingency.[32] In contrast, Stoic logicians like Chrysippus affirmed bivalence for all propositions, including those about the future, by positing that truth values are fixed across all times in a deterministic framework, where branching futures do not alter eternal truths.[32] These positions reinforced bivalence but often aligned it with fatalism, interpreting Aristotle's concern as resolvable through a commitment to logical determinism.

The modern debate revived these issues with advancements in formal logic during the mid-20th century. In the 1950s, logician Arthur N.
Prior pioneered tense logic to model temporal statements, incorporating operators such as "F" (it will be the case that) that accommodate indeterminacy in future contingents by allowing branching time structures, where propositions may not have fixed truth values relative to the present.[34] Prior's framework highlighted how traditional bivalence could falter in open futures, influencing subsequent discussions on whether indeterminacy requires rejecting bivalence or reinterpreting truth temporally.

A prominent defense of bivalence amid these challenges posits that truth values for future contingents are assigned retroactively once the event transpires, ensuring every statement eventually receives a definite value without predetermining outcomes.[32] This Ockhamist approach, echoing medieval solutions, maintains bivalence by relativizing truth to the actual historical sequence, avoiding present indeterminacy while preserving contingency.[32] Alternatives, such as three-valued logics, assign an "indeterminate" status to such statements, but these remain minority views among attempts to preserve classical principles.[32]
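The contrast between treating the future as open and the Ockhamist, history-relative reading can be sketched with a toy branching model in Python. The model, the two histories, and the function names below are invented for illustration.

```python
# Toy branching-time model: two possible histories through "tomorrow",
# one containing a sea battle and one not.

histories = {
    "h1": {"sea_battle_tomorrow": True},
    "h2": {"sea_battle_tomorrow": False},
}

def settled_value(prop):
    """Open-future reading: a definite value only if all histories agree."""
    values = {h[prop] for h in histories.values()}
    return values.pop() if len(values) == 1 else None   # None = indeterminate now

def ockhamist_value(prop, actual_history):
    """Ockhamist reading: truth relative to the history that actually unfolds."""
    return histories[actual_history][prop]

print(settled_value("sea_battle_tomorrow"))            # None: genuinely open now
print(ockhamist_value("sea_battle_tomorrow", "h1"))    # True, relative to h1
```

On the first reading the proposition lacks a present truth value, echoing Aristotle's qualification; on the second, bivalence is preserved by relativizing truth to the history that turns out to be actual.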
Vagueness
The principle of bivalence faces challenges from vagueness, particularly through predicates that admit borderline cases without determinate truth values, such as determinations of whether someone is "tall" or a collection of grains constitutes a "heap." These cases suggest that some statements may be neither true nor false, thereby undermining the strict dichotomy of truth and falsity.

A prominent illustration of this issue is the sorites paradox, which arises with vague terms like "heap" in the classic example of a pile of sand. If one grain constitutes a non-heap, adding a single grain cannot plausibly transform it into a heap; yet, by iterative application of this tolerance principle, even a vast collection of grains would fail to qualify as a heap, leading to an absurd conclusion. This paradox highlights how incremental changes in a continuum blur the true/false boundary, generating a series of borderline instances that resist bivalent assignment.[35]

Philosophers have proposed various responses to preserve or modify bivalence in the face of such vagueness. Epistemicism, as defended by Timothy Williamson, maintains that bivalence holds universally, with borderline cases possessing definite but unknowable truth values due to human epistemic limitations rather than semantic indeterminacy.
In contrast, supervaluationism, developed by Kit Fine, rejects strict bivalence by allowing statements to lack truth values in vague contexts but introduces a "super-truth" predicate that is bivalent and validates the law of excluded middle across admissible sharpenings of the vague language.[36]

An alternative approach abandons bivalence altogether through fuzzy logic, pioneered by Lotfi Zadeh, which assigns truth values along a continuum from 0 to 1, reflecting degrees of membership in vague sets rather than binary true/false distinctions.[37] This framework accommodates the sorites series by permitting gradual shifts in truth degree, but it thereby rejects the excluded middle for borderline cases, as statements can hold partially without being fully true or false. The core tension remains that the continuum of vague cases in sorites-like paradoxes appears to erode the law of excluded middle, forcing a choice between upholding classical bivalence at the cost of counterintuitive hidden cutoffs or adopting non-bivalent semantics to capture intuitive gradations.
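A fuzzy treatment of the sorites series can be sketched in Python. The membership curve for "heap" below (a linear ramp between two cutoffs) is invented purely for illustration; the point is that truth comes in degrees, so "heap or not heap" can fall short of full truth at borderline cases.

```python
# Sketch of a fuzzy-logic treatment of "heap": truth values lie in [0, 1].
# Fuzzy negation is 1 - x and fuzzy disjunction is max, so the excluded
# middle can take a value strictly below 1 at borderline cases.

def heap_degree(grains, lower=10, upper=10_000):
    """Degree to which a pile of `grains` grains is a heap (0.0 .. 1.0)."""
    if grains <= lower:
        return 0.0
    if grains >= upper:
        return 1.0
    return (grains - lower) / (upper - lower)   # linear ramp between cutoffs

for n in (5, 500, 5_000, 50_000):
    d = heap_degree(n)
    # fuzzy "heap or not heap" = max(d, 1 - d): near 0.5 at borderline sizes
    print(n, round(d, 3), round(max(d, 1 - d), 3))
```

Each added grain shifts the truth degree only slightly, dissolving the sorites step without positing a hidden sharp cutoff; the cost, as the text notes, is that P ∨ ¬P is no longer fully true at borderline cases.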
Self-Referential Statements
Self-referential statements pose a profound challenge to the principle of bivalence by generating paradoxes that resist assignment to either truth or falsity. The most prominent example is the liar paradox, formulated as the sentence "This sentence is false." Assuming the sentence is true leads to the conclusion that it is false, as it claims its own falsity; assuming it is false, however, implies that it is true, since it correctly asserts its falsity. This cyclic contradiction reveals an instability in truth valuation for self-referential constructions, threatening the exhaustive dichotomy of true or false required by bivalence.[38]

Alfred Tarski addressed this issue in his seminal 1933 paper by introducing a hierarchy of languages to prevent paradoxical self-reference. He argued that paradoxes like the liar arise in semantically closed languages, where a language can express its own truth predicate, leading to antinomies. To avoid this, Tarski distinguished the object language—containing the sentences whose truth is evaluated—from the metalanguage, in which the truth definition is formulated. Each level in the hierarchy defines truth for the level below, ensuring no language refers to its own semantics. Within each object language, bivalence is preserved, as sentences are either true or false relative to that level's interpretation, but the hierarchy itself precludes a universal, self-applicable truth predicate across all levels.[39]

Saul Kripke extended this analysis in his 1975 theory of truth, proposing a framework with truth-value gaps to accommodate self-reference within a single language. Employing fixed-point semantics, Kripke constructs partial interpretations where sentences are evaluated stage by stage: "grounded" sentences, built from non-truth-involving bases, receive definite truth values (true or false), upholding bivalence locally.
Ungrounded sentences, such as the liar, fail to settle in any fixed point and thus lack a truth value altogether, creating gaps rather than contradictions. This approach allows a bivalent base for non-paradoxical discourse while excepting self-referential cases, demonstrating that universal bivalence is untenable without such exceptions.[38]

The implications of these self-referential challenges are far-reaching, as they compel logicians to adopt partial logics with gaps or stratified systems rather than insisting on strict bivalence for all declarative sentences. Tarski's and Kripke's solutions highlight that while bivalence may hold for grounded or lower-level expressions, self-reference necessitates revisions to avoid paradox, thereby questioning the principle's applicability in fully expressive languages.[39][38]
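A drastically simplified, two-sentence toy version of the stage-by-stage idea can be sketched in Python. The encoding (sentences as rules over a partial assignment, `None` for "not yet settled") is invented for illustration and omits nearly all of Kripke's actual construction; it only shows that a grounded sentence settles while the liar never does.

```python
# Toy sketch of stagewise evaluation with truth-value gaps: values are
# True, False, or None (no value yet / gap). Each sentence is a rule that
# reads the current partial assignment.

sentences = {
    "snow": lambda tv: True,                      # grounded: a plain base fact
    "liar": lambda tv: (None if tv["liar"] is None
                        else not tv["liar"]),     # says of itself that it is false
}

tv = {name: None for name in sentences}           # stage 0: nothing settled
for _ in range(10):                               # iterate toward a fixed point
    tv = {name: rule(tv) for name, rule in sentences.items()}

print(tv)   # {'snow': True, 'liar': None}: the liar never receives a value
```

The grounded sentence settles at the first stage and keeps its value at every later stage, while the liar remains gapped at the fixed point, mirroring (in miniature) the contrast between grounded and ungrounded sentences described above.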
Dialetheism
Dialetheism is the view that some contradictions, termed dialetheia, can be both true and false simultaneously.[40] This position, advanced by Graham Priest since 1979, directly challenges the mutual exclusivity inherent in the classical principle of bivalence by permitting certain propositions to possess a "glut" of truth values rather than adhering strictly to true or false alone.[41]

The primary motivation for dialetheism arises from semantic paradoxes, such as the Liar paradox, where a self-referential sentence like "This sentence is false" leads to the conclusion that it must be both true and false in a coherent logical framework. Priest argues that accepting such dialetheia resolves these paradoxes without resorting to revisions of natural language or ad hoc restrictions, as classical logic's explosion principle (ex falso quodlibet) would render the entire system trivial if contradictions are allowed.[41]

Priest's dialetheic approach employs the Logic of Paradox (LP), a three-valued paraconsistent logic with truth values true (T), false (F), and both (B).[41] In LP's glutty semantics, B-valued sentences are designated as true, preserving bivalence in a modified form where every proposition is either true, false, or both, but negation and conjunction behave such that contradictions do not propagate uncontrollably. This framework validates the law of excluded middle (A ∨ ¬A) for all sentences while rejecting the law of non-contradiction (¬(A ∧ ¬A)) for paradoxical cases.[41]

The implications of dialetheism extend to a critique of classical logic's foundational assumptions, particularly the explosion principle, which dialetheists avoid through paraconsistency to maintain nontrivial theories despite true contradictions. By rejecting the absolute prohibition on contradictions, dialetheism offers a way to handle self-referential statements without abandoning logical coherence, though it remains controversial for undermining long-held intuitions about truth.[41]
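The glutty semantics of LP can be sketched in Python. The numeric encoding (T = 1, B = 1/2, F = 0, with negation 1 - x, conjunction min, disjunction max) is a common convention assumed here for brevity; the designated values are T and B.

```python
# Sketch of Priest's Logic of Paradox (LP): three values, with both T and B
# ("both true and false") designated, i.e. counting as true.

T, B, F = 1.0, 0.5, 0.0
designated = lambda v: v >= B                # T and B are designated

lp_not = lambda a: 1.0 - a                   # B is its own negation
lp_and = min
lp_or  = max

# Excluded middle holds: A ∨ ¬A is designated for every value of A.
assert all(designated(lp_or(a, lp_not(a))) for a in (T, B, F))

# Explosion fails: with A = B, both A and ¬A are designated premises,
# yet an unrelated atom C = F is not, so contradictions do not entail everything.
a, c = B, F
print(designated(a), designated(lp_not(a)), designated(c))   # True True False
```

Because a B-valued contradiction A ∧ ¬A is itself designated, the inference from A and ¬A to an arbitrary C is blocked, which is precisely the paraconsistent behavior the text describes.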