Principle of bivalence

The principle of bivalence is a foundational doctrine in classical logic and philosophy asserting that every declarative statement possesses exactly one of two truth values—true or false—never both and never any intermediate value. This principle underpins the binary semantics of classical propositional logic, where truth values enable the evaluation of compound statements via truth tables and support inferences based on the law of excluded middle (every proposition or its negation holds). The doctrine traces its origins to Aristotle's De Interpretatione, particularly chapters 4 and 9, where he defines declarative sentences as capable of being true or false while distinguishing them from other linguistic forms like questions or commands. In chapter 9, Aristotle qualifies bivalence by arguing that statements about future contingents—such as "There will be a sea battle tomorrow"—are neither true nor false at present, as their outcomes remain indeterminate, thereby avoiding deterministic implications that would undermine human agency and contingency. This nuanced position reconciles bivalence with the law of excluded middle (for any contradictory pair, one must hold) but rejects the unrestricted rule that contradictory opposites cannot both be false, especially for future particulars. Later ancient philosophers, including the Stoics, temporalized bivalence to accommodate changing truth values over time, applying it to assertibles at specific moments. In modern logic and philosophy of language, bivalence remains central to classical systems but faces significant challenges from non-classical alternatives. Critics argue that it fails for vague predicates, as in the sorites paradox (e.g., a series of gradually fading red tubes where no precise boundary exists between "red" and "not red"), leading to indeterminate truth values that bivalence cannot accommodate without arbitrary cutoffs. Similarly, future contingents and statements about quantum phenomena may lack determinate truth values due to indeterminacy rather than falsity, prompting revisions like "every determinate statement is bivalent." Alternatives include multivalent logics, which introduce intermediate values (e.g., "indeterminate" or degrees of truth from 0% to 100%) to handle vagueness, quantum indeterminacy, and higher-order vagueness, while preserving the law of excluded middle through exclusion negation. These debates highlight bivalence's role in broader questions of semantics, epistemology, and the nature of truth, influencing fields from metaphysics to computer science.

Fundamentals

Definition

The principle of bivalence is a fundamental doctrine in logic and philosophy asserting that every declarative sentence or proposition possesses exactly one of two possible truth values—true or false—with no allowance for a third truth value, indeterminacy, or degrees of truth. This means that for any well-formed proposition, its truth status is decisively binary, ensuring that it is not possible for a proposition to lack a truth value entirely or to hold multiple values simultaneously. While related to the notion of truth values in general, bivalence specifically emphasizes the exhaustive and exclusive nature of these values. In classical logic, it applies to all propositions. For instance, a simple proposition such as "The current temperature at a given location exceeds 15 degrees Celsius" is either true or false at any specific moment, based on the actual conditions, with no intermediate or undefined status permitted. Similarly, "There are exactly three apples on the table" holds precisely one truth value, determined by empirical fact, underscoring the principle's commitment to clear semantic evaluation. The term "bivalence" derives from the prefix "bi-" (meaning two) combined with "valence" (referring to value or capacity), highlighting the duality of truth values central to the principle. Philosophically, this doctrine is motivated by the requirement for resolute truth determinations in rational discourse and inference, enabling logical systems to operate on unambiguous foundations that mirror the perceived structure of reality. It underpins classical logic as a core semantic assumption, facilitating reliable reasoning without the complications of truth-value gaps.

Formal Statement

The principle of bivalence formally asserts that every proposition P is assigned exactly one of two truth values: true (T) or false (F), with no proposition lacking a truth value or possessing both simultaneously. Symbolically, this is captured by the semantic entailment \models P \lor \neg P, indicating that in all possible valuations, the disjunction of P and its negation holds, but the emphasis lies on the semantic valuation function V, where V(P) \in \{T, F\} for every proposition P. This formulation ensures totality and exclusivity in truth assignment, forming the semantic foundation of classical logic. In bivalent semantics, truth-value functions operate over a set of propositions, mapping each to precisely one element of the set \{T, F\} via a total evaluation function. Such functions determine the truth values of compound propositions based on their components, adhering strictly to two-valued outcomes. For instance, the negation operator \neg in a bivalent system inverts the truth value, as specified by the following truth table:
P | \neg P
T | F
F | T
This structure contrasts with multi-valued systems, which permit additional truth values beyond T and F, thereby relaxing the exhaustive dichotomy. The principle of bivalence thus ensures the completeness of semantic valuations in classical logic by guaranteeing that every proposition receives a determinate truth value.
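A minimal Python sketch (the tuple encoding and the function name evaluate are illustrative, not drawn from any standard library) makes the total valuation concrete: every formula is mapped recursively to exactly one of the two values, and the negation clause reproduces the table above.

```python
# Bivalent semantics for propositional formulas, encoded as nested tuples.
def evaluate(formula, valuation):
    """Return the unique truth value of `formula` under `valuation`,
    a total map from proposition letters to True/False."""
    kind = formula[0]
    if kind == "var":          # atomic proposition: read off the valuation
        return valuation[formula[1]]
    if kind == "not":          # negation inverts the value
        return not evaluate(formula[1], valuation)
    if kind == "and":
        return evaluate(formula[1], valuation) and evaluate(formula[2], valuation)
    if kind == "or":
        return evaluate(formula[1], valuation) or evaluate(formula[2], valuation)
    if kind == "implies":      # material conditional: false only when T -> F
        return (not evaluate(formula[1], valuation)) or evaluate(formula[2], valuation)
    raise ValueError(f"unknown connective: {kind}")

# Reproduce the negation truth table for both admissible values of P.
for p_value in (True, False):
    print(p_value, evaluate(("not", ("var", "P")), {"P": p_value}))
# Prints: True False, then False True.
```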

Historical Development

Ancient Origins

The principle of bivalence finds its earliest systematic philosophical articulation in the works of Aristotle, particularly in his Metaphysics Book Gamma (IV), where it is intertwined with the defense of the law of non-contradiction. In Metaphysics Gamma 4 (1006b), Aristotle asserts the exclusivity of truth and falsity, stating, "it is not possible to say truly at the same time that the same thing is and is not a man," emphasizing that contradictory predicates cannot simultaneously hold true of the same subject in the same respect. This formulation underscores bivalence as a foundational principle, where every affirmative or negative statement about a definite matter must be either true or false, excluding any third possibility. Aristotle links this directly to the law of non-contradiction, arguing that denying bivalence would lead to absurdities, such as the collapse of rational discourse. Further elaborating in Metaphysics Gamma 7 (1011b25–27), Aristotle provides a canonical definition of truth and falsity that implicitly affirms bivalence for atomic propositions: "To say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, or of what is not that it is not, is true." Here, he applies this to definite statements about present or past realities, treating simple declarative sentences as possessing exactly one truth-value. However, in De Interpretatione chapter 9, Aristotle introduces a qualification regarding future contingents, such as the famous sea-battle example, where statements about undetermined future events (e.g., "There will be a sea battle tomorrow") lack a definite truth-value at the present moment to preserve human agency and contingency, though he maintains bivalence for non-contingent, definite assertions. This nuanced view positions bivalence as holding robustly for atomic propositions concerning settled facts, while allowing flexibility for the open future. Aristotle's ideas exerted significant influence on subsequent ancient Greek philosophy, notably in Stoic logic. The Stoic logician Chrysippus (c. 279–206 BCE), building on Aristotelian foundations, extended bivalence to encompass all assertoric sentences, including those about future events, rejecting any truth-value gaps to align with their deterministic cosmology. Chrysippus argued that every assertible (axiōma) is either true or false at all times, thereby universalizing bivalence across temporal contexts and reinforcing its role in Stoic propositional logic. This development marked a shift from Aristotle's provisional exception for future contingents, solidifying bivalence as a cornerstone of Hellenistic logical inquiry, with early ties to the law of excluded middle as an inferential principle.

Modern Formulations

In the medieval era, refinements to the principle of bivalence emerged through efforts to integrate it with Aristotelian syllogistic logic, particularly in addressing challenges posed by future contingents. Boethius (c. 480–524), in his Second Commentary on Aristotle's De Interpretatione, maintained bivalence by interpreting future contingent propositions as true or false in an indefinite sense, thereby preserving contingency in divine foreknowledge while ensuring compatibility with syllogistic deductions that assume binary truth values for categorical statements. This approach influenced subsequent scholastic logic by embedding bivalence within the structure of syllogisms, avoiding fatalism without abandoning the principle's universality. Peter Abelard (1079–1142) advanced this integration in works like the Dialectica, affirming bivalence for all assertoric propositions but qualifying future contingents as indeterminately true or false based on undetermined truth-makers, thus refining syllogistic inference to handle modal and temporal nuances without multi-valued alternatives. The nineteenth century marked a shift toward formalization, with Gottlob Frege's Begriffsschrift (1879) establishing bivalence as the cornerstone of two-valued truth-functions in symbolic logic. Frege posited that every assertoric sentence denotes one of two objects—the True or the False—forming the basis for truth-functional connectives like negation and the conditional, which operate exclusively on this binary framework to compose complex propositions. This metatheoretical commitment to bivalence enabled the precise notation and inference rules that distinguished modern propositional logic from traditional syllogistics. A pivotal development occurred in Alfred North Whitehead and Bertrand Russell's Principia Mathematica (1910–1913), which presupposed bivalence to construct the classical propositional calculus as the foundation for mathematics. The system's primitive propositions and derivation rules, such as those for negation and disjunction, rely on every proposition possessing exactly one truth value, true or false, ensuring the completeness and consistency of deductions in reducing mathematics to logical primitives. Alfred Tarski's semantic theory of truth, introduced in his 1933 work Pojęcie prawdy w językach nauk dedukcyjnych, formalized bivalence through the notion of satisfaction in models. Tarski defined truth recursively: atomic formulas are true if satisfied by assignments in a model, and compound formulas inherit truth values bivalently via connectives and quantifiers, yielding a materially adequate definition where every sentence is either true or false relative to an interpretation. This framework provided a model-theoretic semantics that solidified bivalence as essential for classical logical systems, influencing subsequent developments in formal semantics.

Logical Relations

Law of Excluded Middle

The law of the excluded middle (LEM) is a fundamental principle in classical logic, asserting that for any proposition P, the disjunction P \lor \neg P is always true, constituting a tautology. This syntactic rule guarantees that no proposition can occupy a middle ground between truth and falsity in the logical framework. Bivalence semantically underpins the LEM by positing that every proposition must possess exactly one of two truth values—true or false—in a two-valued semantics. Under this assignment, the disjunction P \lor \neg P invariably evaluates to true, as the proposition cannot lack a truth value or hold an intermediate one; if P is true, the first disjunct succeeds, and if false, the second does. This semantic validation ensures the LEM's status as a logical truth in classical systems. To see how bivalence entails the LEM, consider the following sketch: assume bivalence holds, so P is either true or false. In the case where P is true, P \lor \neg P is true via the first disjunct. If P is false, then \neg P is true, making the disjunction true via the second disjunct. Thus, regardless of P's value, P \lor \neg P holds, proving the LEM as a direct consequence. Importantly, bivalence operates as a semantic principle dictating the nature of truth-value assignments in the metalanguage, whereas the LEM functions syntactically as a theorem or axiom schema within the object language of classical logic, enabling valid inferences without direct reference to semantics. The LEM shares historical roots with Aristotle's foundational principles of opposition in logic.
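The two-case argument can be checked mechanically; the following minimal Python sketch simply runs both admissible bivalent assignments to P and confirms that the disjunction is true in each.

```python
# Under bivalence P has exactly two possible values, and in both cases
# the disjunction P or (not P) comes out true.
for p in (True, False):
    assert (p or (not p)) is True
print("P v ~P holds under every bivalent valuation of P")
```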

Law of Non-Contradiction

The Law of Non-Contradiction (LNC) states that for any proposition P, it is impossible for both P and its negation \neg P to be true simultaneously, formally expressed as \neg (P \wedge \neg P). This asserts the impossibility of contradiction in logical and ontological terms, ensuring that no statement can affirm and deny the same thing at the same time in the same respect. The principle of bivalence, which posits that every proposition is either true or false (and exactly one), plays a crucial role in upholding the LNC by assigning exclusive truth values to propositions and their negations. Under bivalence, the truth of P precludes the truth of \neg P, and vice versa, thereby rendering any conjunction P \wedge \neg P necessarily false and preventing the overlap that would constitute a contradiction. This exclusivity ensures that contradictions cannot obtain, as the binary framework leaves no room for a proposition to hold both values simultaneously. Logically, bivalence implies the LNC because the assignment of a unique truth value to every proposition guarantees that contradictory pairs cannot both be true, deriving the non-contradictoriness directly from the exhaustive and mutually exclusive nature of truth and falsity. In classical semantics, this implication holds tautologically, as the semantic structure of bivalence enforces the falsity of all contradictory statements across any valuation. In Aristotle's metaphysics, the LNC holds primacy as the most certain and indemonstrable principle, foundational to rational thought and being itself. Aristotle prioritizes the LNC ontologically, arguing it is impossible for the same attribute to belong and not belong to the same subject simultaneously.
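The implication can be spelled out as a two-case valuation argument, a routine sketch of the reasoning just given:

V(P) = T \Rightarrow V(\neg P) = F \Rightarrow V(P \wedge \neg P) = F \Rightarrow V(\neg (P \wedge \neg P)) = T
V(P) = F \Rightarrow V(\neg P) = T \Rightarrow V(P \wedge \neg P) = F \Rightarrow V(\neg (P \wedge \neg P)) = T

Since bivalence admits only these two cases, \neg (P \wedge \neg P) is true under every classical valuation.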

Role in Classical Logic

Propositional Logic

In classical propositional logic, the principle of bivalence asserts that every well-formed formula, under a given interpretation, evaluates to exactly one of two truth values: true or false. This binary valuation enables the systematic construction of truth tables, which exhaustively determine the semantic behavior of compound formulas built from propositional variables and connectives like conjunction (\land), disjunction (\lor), negation (\neg), and material implication (\to). For any formula involving n distinct propositional variables, bivalence guarantees precisely 2^n possible truth assignments, each row of the truth table representing one such assignment and its resulting truth value for the entire formula. This finite structure, as utilized by Emil Post in his 1921 proof of completeness, provides a complete semantic evaluation method that directly reflects the exhaustive dichotomy of truth and falsity. The reliance on bivalent models is central to the soundness and completeness theorems of propositional logic, which state that a formula is provable from a set of premises if and only if it is true in every bivalent valuation satisfying those premises. Soundness ensures that every provable formula is semantically valid, while completeness guarantees that every semantically valid formula is provable; both properties hold because bivalence defines a two-valued, total semantics without truth-value gaps or intermediate values. Post established these theorems in 1921, demonstrating the equivalence between syntactic deduction and bivalent truth preservation for propositional systems. A representative example is the truth table for the material conditional P \to Q, which under bivalence evaluates as follows:
P | Q | P \to Q
True | True | True
True | False | False
False | True | True
False | False | True
This table confirms that P \to Q is false only when P is true and Q is false, and true in all other cases, illustrating how bivalence resolves the connective's semantics across all four possible assignments for two variables. Bivalence further ensures the decidability of propositional validity: to determine whether a formula is a tautology (true under every interpretation), one constructs its truth table and verifies that the final column contains only true values across the 2^n rows. Since n is finite for any given formula, this procedure always terminates, providing a decision procedure for semantic entailment and validity in propositional logic.
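The decision procedure can be sketched in a few lines of Python (the helper names is_tautology and implies are illustrative): it enumerates all 2^n bivalent assignments for a formula's variables and reports whether the formula is true in every row.

```python
from itertools import product

def is_tautology(variables, formula):
    """Check a formula (given as a predicate over a truth assignment)
    against all 2**n bivalent assignments to its variables."""
    return all(
        formula(dict(zip(variables, values)))
        for values in product([True, False], repeat=len(variables))
    )

# Material conditional: P -> Q is equivalent to (not P) or Q.
implies = lambda p, q: (not p) or q

# P -> Q is not a tautology (it fails when P is true and Q is false) ...
print(is_tautology(["P", "Q"], lambda v: implies(v["P"], v["Q"])))        # False
# ... but (P -> Q) or (Q -> P) is true in all four assignments.
print(is_tautology(["P", "Q"],
                   lambda v: implies(v["P"], v["Q"]) or implies(v["Q"], v["P"])))  # True
```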

Predicate Logic

In first-order predicate logic, the principle of bivalence extends to quantified statements through a recursive definition of truth in models, where every sentence is either true or false relative to a given structure. A universally quantified formula \forall x \, P(x) is interpreted as true in a model \mathcal{M} with domain D if and only if P(d) holds for every d \in D, and false otherwise; an existentially quantified formula \exists x \, P(x) is true if P(d) holds for at least one d \in D, and false if it holds for none. This treatment ensures that complex formulas involving quantifiers and connectives inherit the two-valued semantics from simpler atomic predicates, maintaining overall truth or falsity without intermediate values. Tarski-style models formalize this via satisfaction relations in structures \mathcal{M} = \langle D, I \rangle, where D is the domain and I assigns interpretations to predicates and functions. Atomic formulas P(t_1, \dots, t_n) are satisfied (true) if the interpretations of the terms under I stand in the relation I(P) \subseteq D^n, and false otherwise; satisfaction for quantified and compound formulas is defined recursively, yielding a strict true/false dichotomy for all sentences. This framework presupposes bivalence at the atomic level and propagates it upward, providing a semantic foundation for classical predicate logic distinct from its propositional basis by incorporating variable domains and relations. Bivalence holds semantically in these models even though first-order logic faces undecidability: Alonzo Church demonstrated in 1936 that there is no effective procedure to determine the validity of arbitrary sentences (truth in all models), despite each individual sentence being bivalently true or false in any given structure. A key result illustrating the robustness of bivalent models is the Löwenheim-Skolem theorem, which states that if a countable first-order theory has an infinite model, then for every infinite cardinal \kappa, it has a model of cardinality \kappa, each with bivalent satisfaction relations. This theorem underscores how bivalence applies uniformly across models of varying sizes, enabling diverse interpretations while preserving the true/false assignment.
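The quantifier clauses can be illustrated on a small finite structure (the domain and the interpretation of the unary predicate P below are invented for the example): \forall x \, P(x) is true exactly when I(P) contains every element of D, and \exists x \, P(x) exactly when it contains at least one.

```python
# An illustrative Tarski-style structure <D, I>: a finite domain and an
# interpretation of a unary predicate P as a subset of D.
D = {1, 2, 3, 4}
I = {"P": {2, 4}}   # P(d) holds exactly for the even elements

def satisfies_forall(pred):
    """True iff pred(d) holds for every d in D; otherwise False (bivalent)."""
    return all(d in I[pred] for d in D)

def satisfies_exists(pred):
    """True iff pred(d) holds for at least one d in D."""
    return any(d in I[pred] for d in D)

print(satisfies_forall("P"))  # False: 1 and 3 lie outside I(P)
print(satisfies_exists("P"))  # True: 2 belongs to I(P)
```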

Advanced Concepts

Suszko's Thesis

Roman Suszko, a Polish logician, proposed in 1975 that logics presented as many-valued, such as Jan Łukasiewicz's three-valued logic, can be semantically characterized using only two logical values: true and false. According to Suszko, these systems employ additional values not as fundamental logical truths but as auxiliary algebraic constructs, with semantics ultimately relying on a bivalent division of values into designated (true-like) and non-designated (false-like) categories. This reduction demonstrates that the apparent many-valuedness is illusory at the level of logical valuation, preserving the core inferential structure through bivalent models. A concrete illustration of Suszko's approach appears in his analysis of Kleene's strong three-valued logic, where the truth values are true (T), false (F), and undefined (U). In this logic, the intermediate U value functions as non-designated, akin to false, while T is designated as true; the semantics can thus be reformulated using bivalent assignments that map formulas to true or false based on whether they preserve designated status under the logic's operations. This bivalent core captures the entailment relations without requiring the third value as a primitive logical value, reducing the system to an equivalent two-valued framework. Philosophically, Suszko's thesis, elaborated in his 1977 paper, asserts the universality of bivalence by challenging the notion of genuine multi-valued semantics, arguing that any proliferation of logical values beyond true and false represents a conceptual error. He famously described such multiplications as a "mad idea," emphasizing that structural Tarskian logics inherently admit bivalent interpretations, thereby affirming bivalence as the foundational semantic principle underlying all propositional logics, even those ostensibly many-valued. This perspective underscores the primacy of true and false as the sole logical values in semantic theorizing.
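A small Python sketch of this reduction for strong Kleene logic (the numeric encoding and function names are illustrative): the three-valued tables are retained as algebraic machinery, but logical consequence is defined through the bivalent map "designated or not," with only T designated.

```python
from itertools import product

# Strong Kleene three-valued tables, encoded numerically: F=0, U=1, T=2.
F, U, T = 0, 1, 2
def k_not(a):    return 2 - a        # swaps T and F, leaves U fixed
def k_or(a, b):  return max(a, b)

# Suszko-style bivalent layer: a value is "designated" (true-like) only if T.
def designated(a):
    return a == T

def entails(premise_fns, conclusion_fn, variables):
    """Designation-preserving consequence over all three-valued valuations."""
    for values in product((F, U, T), repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(designated(p(v)) for p in premise_fns) and not designated(conclusion_fn(v)):
            return False
    return True

# Modus ponens is valid: designation of P and of (not P or Q) forces Q to be T.
print(entails([lambda v: v["P"], lambda v: k_or(k_not(v["P"]), v["Q"])],
              lambda v: v["Q"], ["P", "Q"]))                        # True
# Excluded middle fails: P or not-P takes the undesignated value U when P is U.
print(entails([], lambda v: k_or(v["P"], k_not(v["P"])), ["P"]))    # False
```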

Bivalence in Non-Classical Logics

Non-classical logics depart from the principle of bivalence by introducing alternative semantics or truth-value schemes that accommodate undecidability, indeterminacy, or inconsistency without assuming every proposition is strictly true or false. These systems maintain logical coherence while addressing limitations in classical frameworks, often by rejecting the law of excluded middle or allowing intermediate or glutty truth values. Intuitionistic logic, developed by L.E.J. Brouwer in the 1920s, rejects bivalence for propositions that are undecidable, asserting that a statement is true only if a constructive proof of it exists. In this proof-based semantics, undecidable mathematical statements, such as those involving infinite collections without effective decision procedures, lack a definite truth value until proven or disproven. Brouwer argued that the law of excluded middle, which underpins bivalence, applies solely to finite cases and is unwarranted for broader mathematical domains. Many-valued logics extend classical bivalence by incorporating additional truth values, as pioneered by Jan Łukasiewicz in 1920 with his three-valued system. This logic assigns values of true (T), false (F), or indeterminate (1/2) to propositions, particularly those concerning future contingents, where the third value captures possibilities not yet determined. Łukasiewicz's framework challenges bivalence by allowing statements to occupy a neutral status, thereby supporting logical systems that model contingency and indeterminacy without forcing binary outcomes. Supervaluationism offers a bivalence-preserving strategy for handling vagueness through multiple admissible valuations or precisifications of predicates. A statement is deemed true if it holds under every such precisification (supertruth) and false if it fails under every one, effectively maintaining bivalence at the level of supervaluations while permitting truth-value gaps in borderline cases. This approach, elaborated by Kit Fine in 1975 and David Lewis in 1982, upholds classical theorems like the law of excluded middle by ensuring supervaluations assign definite truth values to disjunctions. Paraconsistent logics challenge bivalence's exclusivity by tolerating contradictions without the explosive consequences of classical logic, where a single inconsistency entails everything. These systems permit truth-value gluts, where propositions can be both true and false, while preserving non-trivial reasoning through restricted inference rules that avoid deriving arbitrary conclusions from contradictions. By deviating from strict bivalence, paraconsistent logics enable coherent handling of inconsistent data, as seen in semantics like Graham Priest's Logic of Paradox, which uses a three-valued scheme. While some non-classical logics appear to require multiple truth values, Roman Suszko's thesis contends that many such systems can be reduced to bivalent semantics via abstract algebraic interpretations.
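A brief Python sketch of Łukasiewicz's three-valued connectives (values 0, 1/2, 1; the function names are illustrative) shows where bivalence is given up: a future contingent valued 1/2 makes P \lor \neg P only half true, while P \to P remains fully true.

```python
# Lukasiewicz three-valued connectives over the values 0, 0.5, 1.
def l_not(a):        return 1 - a
def l_or(a, b):      return max(a, b)
def l_implies(a, b): return min(1, 1 - a + b)   # Lukasiewicz conditional

P = 0.5   # a future contingent such as "There will be a sea battle tomorrow"
print(l_or(P, l_not(P)))   # 0.5 -- the excluded middle is not fully true
print(l_implies(P, P))     # 1   -- P -> P still takes the designated value
```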

Criticisms

Future Contingents

The problem of future contingents poses a significant challenge to the principle of bivalence, particularly through statements about events that are not yet determined. In Chapter 9 of De Interpretatione, Aristotle presents the famous sea-battle argument, questioning whether a proposition such as "There will be a sea-battle tomorrow" is true or false at present. He argues that assigning a definite truth value now would imply fatalism, rendering the future necessary rather than contingent, yet denying bivalence to such statements risks undermining the law of excluded middle. This dilemma suggests that future contingents may lack a determinate truth value in the present, threatening bivalence's universal applicability. Ancient responses from the Megarian and Stoic schools sought to uphold strict bivalence despite these concerns. The Megarian philosopher Diodorus Cronus, through his Master Argument, contended that there are no true future contingents, as past truths are necessary and the impossible cannot follow from the possible, thereby eliminating indeterminacy to preserve bivalence at the cost of contingency. In contrast, Stoic logicians like Chrysippus affirmed bivalence for all propositions, including those about the future, by positing that truth values are fixed across all times in a deterministic framework, where branching futures do not alter eternal truths. These positions reinforced bivalence but often aligned it with determinism, interpreting Aristotle's concern as resolvable through a commitment to logical determinism. The modern debate revived these issues with advancements in formal logic during the mid-20th century. In the 1950s, the logician Arthur Prior pioneered tense logic to model temporal statements, incorporating operators such as "F" (it will be the case that) that accommodate indeterminacy in future contingents by allowing branching time structures, where propositions may not have fixed truth values relative to the present. Prior highlighted how traditional bivalence could falter in open futures, influencing subsequent discussions on whether indeterminacy requires rejecting bivalence or reinterpreting truth temporally. A prominent defense of bivalence amid these challenges posits that truth values for future contingents are assigned retroactively once the event transpires, ensuring every statement eventually receives a definite value without predetermining outcomes. This Ockhamist approach, echoing medieval solutions, maintains bivalence by relativizing truth to the actual historical sequence, avoiding present indeterminacy while preserving contingency. Alternatives, such as three-valued logics, instead assign an "indeterminate" status to such statements but remain minority views among those seeking to preserve classical principles.

Vagueness

The principle of bivalence faces challenges from vagueness, particularly through predicates that admit borderline cases without determinate truth values, such as determinations of whether someone is "tall" or a collection of grains constitutes a "heap." These cases suggest that some statements may be neither true nor false, thereby undermining the strict dichotomy of truth and falsity. A prominent illustration of this issue is the sorites paradox, which arises with vague terms like "heap" in the classic example of a pile of sand. If one grain constitutes a non-heap, adding a single grain cannot plausibly transform it into a heap; yet, by iterative application of this tolerance principle, even a vast collection of grains would fail to qualify as a heap, leading to an absurd conclusion. This paradox highlights how incremental changes along a sorites series blur the true/false boundary, generating a series of borderline instances that resist bivalent assignment. Philosophers have proposed various responses to preserve or modify bivalence in the face of such vagueness. Epistemicism, as defended by Timothy Williamson, maintains that bivalence holds universally, with borderline cases possessing definite but unknowable truth values due to human epistemic limitations rather than semantic indeterminacy. In contrast, supervaluationism, developed by Kit Fine among others, rejects strict bivalence by allowing statements to lack truth values in vague contexts but introduces a "super-truth" predicate that is bivalent and validates the law of excluded middle across admissible sharpenings of the vague language. An alternative approach abandons bivalence altogether through fuzzy logic, pioneered by Lotfi Zadeh, which assigns truth values along a continuum from 0 to 1, reflecting degrees of membership in vague sets rather than binary true/false distinctions. This framework accommodates the sorites series by permitting gradual shifts in truth degree, but it thereby rejects the excluded middle for borderline cases, as statements can hold partially without being fully true or false. The core tension remains that the continuum of vague cases in sorites-like paradoxes appears to erode the law of excluded middle, forcing a choice between upholding classical bivalence at the cost of counterintuitive hidden cutoffs or adopting non-bivalent semantics to capture intuitive gradations.
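A fuzzy-logic sketch of the sorites series in Python (the membership function below is an invented illustration, not a canonical definition of "heap") assigns each grain count a degree of heap-hood between 0 and 1, so that adding one grain shifts the truth degree only marginally and no single grain marks a true/false cutoff.

```python
# Illustrative fuzzy membership for "is a heap": the degree rises linearly
# from 0 (at 10 grains or fewer) to 1 (at 10,000 grains or more).
def heap_degree(grains):
    low, high = 10, 10_000
    if grains <= low:
        return 0.0
    if grains >= high:
        return 1.0
    return (grains - low) / (high - low)

for n in (5, 100, 1_000, 5_000, 20_000):
    d = heap_degree(n)
    # Borderline counts get intermediate degrees, so "n grains form a heap"
    # is neither simply true nor simply false for them; one extra grain
    # changes the degree by at most 1/9990.
    print(n, round(d, 3), round(heap_degree(n + 1) - d, 6))
```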

Self-Referential Statements

Self-referential statements pose a profound challenge to the principle of bivalence by generating paradoxes that resist assignment to either truth or falsity. The most prominent example is the liar paradox, formulated as the sentence "This sentence is false." Assuming the sentence is true leads to the conclusion that it is false, as it claims its own falsity; assuming it is false, however, implies that it is true, since it correctly asserts its falsity. This cyclic contradiction reveals an instability in truth valuation for self-referential constructions, threatening the exhaustive dichotomy of true or false required by bivalence. Alfred Tarski addressed this issue in his seminal 1933 paper by introducing a hierarchy of languages to prevent paradoxical self-reference. He argued that paradoxes like the liar arise in semantically closed languages, where a language can express its own truth predicate, leading to antinomies. To avoid this, Tarski distinguished the object language—containing the sentences whose truth is evaluated—from the metalanguage, in which the truth definition is formulated. Each level in the hierarchy defines truth for the level below, ensuring no language refers to its own semantics. Within each object language, bivalence is preserved, as sentences are either true or false relative to that level's truth definition, but the hierarchy itself precludes a universal, self-applicable truth predicate across all levels. Saul Kripke extended this analysis in his 1975 theory of truth, proposing a framework with truth-value gaps to accommodate self-reference within a single language. Employing fixed-point semantics, Kripke constructs partial interpretations where sentences are evaluated stage by stage: "grounded" sentences, built up from non-truth-involving bases, receive definite truth values (true or false), upholding bivalence locally. Ungrounded sentences, such as the liar, fail to settle in any fixed point and thus lack a truth value altogether, creating gaps rather than contradictions. This approach allows a bivalent base for non-paradoxical discourse while excepting self-referential cases, demonstrating that universal bivalence is untenable without such exceptions. The implications of these self-referential challenges are far-reaching, as they compel logicians to adopt partial logics with gaps or stratified systems rather than insisting on strict bivalence for all declarative sentences. Tarski's and Kripke's solutions highlight that while bivalence may hold for grounded or lower-level expressions, unrestricted self-reference necessitates revisions to avoid inconsistency, thereby questioning the principle's applicability in fully expressive languages.
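Kripke's least-fixed-point construction can be sketched for a toy language in Python (the encoding is an illustrative simplification, not Kripke's formal apparatus): starting from a wholly undefined truth predicate, each stage assigns values only where the evaluation is already settled, so grounded sentences end up bivalent while the liar never receives a value.

```python
# Sentences are indexed so that the truth predicate Tr can refer to other
# sentences, or to the very sentence it occurs in. Values: True, False, None.
def value(expr, assignment):
    """Strong-Kleene evaluation of an expression under a partial assignment."""
    kind = expr[0]
    if kind == "atom":     # a base fact with a fixed classical value
        return expr[1]
    if kind == "Tr":       # "sentence number expr[1] is true"
        return assignment[expr[1]]
    if kind == "not":
        v = value(expr[1], assignment)
        return None if v is None else (not v)
    raise ValueError(kind)

sentences = [
    ("atom", True),        # 0: "Snow is white" (stipulated true)
    ("Tr", 0),             # 1: "Sentence 0 is true"         -- grounded
    ("not", ("Tr", 2)),    # 2: "This sentence is not true"  -- the liar
    ("Tr", 3),             # 3: "This sentence is true"      -- truth-teller
]

# Iterate the jump from the wholly undefined interpretation to a fixed point.
assignment = [None] * len(sentences)
while True:
    updated = [value(s, assignment) for s in sentences]
    if updated == assignment:
        break
    assignment = updated

print(assignment)  # [True, True, None, None]: the grounded sentences are
                   # bivalent; the liar and the truth-teller stay gappy.
```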

Dialetheism

Dialetheism is the view that some contradictions, termed dialetheia, can be both true and false simultaneously. This position, advanced by Graham Priest since 1979, directly challenges the mutual exclusivity inherent in the classical principle of bivalence by permitting certain propositions to possess a "glut" of truth values rather than adhering strictly to true or false alone. The primary motivation for dialetheism arises from semantic paradoxes, such as the liar paradox, where a self-referential sentence like "This sentence is false" leads to the conclusion that it must be both true and false in a coherent logical framework. Priest argues that accepting such dialetheia resolves these paradoxes without resorting to hierarchies of languages or restrictions on self-reference, although classical logic's explosion principle (ex falso quodlibet) would render the entire system trivial if contradictions were allowed. Priest's dialetheic approach employs the Logic of Paradox (LP), a three-valued paraconsistent logic with truth values true (T), false (F), and both (B). In LP's glutty semantics, B-valued sentences are designated as true, preserving bivalence in a modified form where every proposition is either true, false, or both, but negation and conjunction behave such that contradictions do not propagate uncontrollably. This framework validates the law of excluded middle (A ∨ ¬A) for all sentences while rejecting the law of non-contradiction (¬(A ∧ ¬A)) for paradoxical cases. The implications of dialetheism extend to a rejection of some of classical logic's foundational assumptions, particularly the explosion principle, which dialetheists avoid through paraconsistency to maintain nontrivial theories despite true contradictions. By rejecting the absolute prohibition on contradictions, dialetheism offers a way to handle self-referential statements without abandoning logical coherence, though it remains controversial for undermining long-held intuitions about truth.
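The failure of explosion in LP can be checked by brute force over its three values (the numeric encoding and function names below are illustrative): with both "true" and "both" designated, A and ¬A are jointly satisfiable when A takes the glut value, yet an unrelated Q valued plain false is not entailed, while the excluded middle remains valid.

```python
from itertools import product

# Priest's LP uses the strong Kleene tables over {false, both, true},
# but designates both "true" and "both" (the glut value).
F, B, T = 0, 1, 2
def lp_not(a):     return 2 - a
def lp_or(a, b):   return max(a, b)
def designated(a): return a >= B          # true or both

def lp_valid(premise_fns, conclusion_fn, variables):
    """LP consequence: designated premises must force a designated conclusion."""
    for values in product((F, B, T), repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(designated(p(v)) for p in premise_fns) and not designated(conclusion_fn(v)):
            return False
    return True

# Explosion fails: A = both satisfies A and not-A, yet Q may be plain false.
print(lp_valid([lambda v: v["A"], lambda v: lp_not(v["A"])],
               lambda v: v["Q"], ["A", "Q"]))                        # False
# Excluded middle holds: A or not-A is designated for every value of A.
print(lp_valid([], lambda v: lp_or(v["A"], lp_not(v["A"])), ["A"]))  # True
```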