Cox's theorem

Cox's theorem is a foundational result in probability theory, named after the physicist Richard Threlkeld Cox, which demonstrates that the standard rules of probability provide the unique calculus for quantitatively representing and combining degrees of plausibility or belief, subject to a minimal set of qualitative postulates. First articulated in Cox's 1946 paper "Probability, Frequency and Reasonable Expectation," the theorem posits that any measure of belief satisfying certain reasonable conditions must be isomorphic to a probability measure, thereby justifying the use of probability theory as an extension of classical Boolean logic to uncertain reasoning. Cox expanded and refined this derivation in his 1961 monograph The Algebra of Probable Inference, where he formalized the argument using Boolean algebra to handle propositions and their combinations.

The core of Cox's theorem rests on two primary postulates concerning how plausibilities combine. The first postulate states that the plausibility of the negation of a proposition is determined solely by the plausibility of the proposition itself, expressed as a functional relationship. The second postulate asserts that the plausibility of the conjunction of two propositions depends only on the plausibility of one proposition given the other and the plausibility of the other on its own. Assuming these plausibilities are represented by real numbers on a continuous and monotonic scale (with certainty normalized to 1 and impossibility to 0), the functional forms that satisfy these conditions uniquely yield the additive and multiplicative rules of probability, including Bayes' theorem for updating beliefs.

Cox's theorem has profound implications for statistics and epistemology, providing a normative foundation for Bayesian inference by showing that alternative systems of plausible reasoning would violate basic desiderata of coherence and rationality. It bridges subjective interpretations of probability, as degrees of belief, with objective frequency-based views, arguing that both can be unified under the same calculus. However, the theorem's scope is limited to classical propositional logic and real-valued measures, and critiques highlight sensitivities to additional assumptions, such as continuity and the need for a sufficiently rich domain of propositions, which may not hold in all practical or finite settings. Despite these limitations, the theorem remains influential in justifying probability as the canonical framework for handling uncertainty in fields ranging from physics to artificial intelligence.

Background

Historical Development

The development of Cox's theorem occurred amid mid-20th-century advancements in the foundations of probability, influenced by wartime applications of probabilistic methods and post-war efforts to formalize inductive reasoning. During and after the Second World War, probabilistic models gained prominence in applied fields such as decision theory, fostering a broader interest in subjective interpretations of probability that extended into the post-war era. This context encouraged physicists and philosophers to seek axiomatic justifications for probability as a tool for rational inference rather than mere frequency counting.

Key precursors included Frank Ramsey's foundational ideas on subjective probability, articulated in his 1926 essay "Truth and Probability," which posited degrees of belief as measurable quantities obeying logical consistency rules. Building on this, Harold Jeffreys advanced inductive logic in his 1939 book Theory of Probability, employing Bayes' theorem to update beliefs based on evidence and critiquing frequentist approaches for their limitations in scientific inference. Cox acknowledged Jeffreys' emphasis on reasonable expectation as an inspiration for deriving probability from qualitative plausibility relations.

Richard T. Cox introduced the core ideas of the theorem in his 1946 paper "Probability, Frequency and Reasonable Expectation," published in the American Journal of Physics, where he proposed three axioms for representing degrees of rational belief with real numbers, leading to the standard rules of probability. Motivated by the need to bridge physics and logical inference, Cox's analysis critiqued earlier axiomatizations for residual frequentist elements and aimed to establish probability as an extension of logic. He refined these concepts over the following decade, culminating in his 1961 book The Algebra of Probable Inference, which presented the theorem in its complete form, emphasizing its uniqueness within the framework of real-valued representations.

Motivations in Inductive Logic

Deductive logic provides a framework for drawing certain conclusions from given premises, ensuring that if the premises are true, the conclusion must follow necessarily. In contrast, inductive logic addresses reasoning under uncertainty, where inferences are drawn from incomplete or partial data, leading to conclusions that are probable but not guaranteed. This distinction is central to the motivations for developing an axiomatic foundation for plausible reasoning, as traditional deductive systems like propositional logic fail to capture the graded nature of belief in real-world scenarios.

A key motivation for Cox's theorem lies in the need for a quantitative framework to represent degrees of belief, allowing for consistent and comparable assessments of plausibility without relying on arbitrary or ad hoc rules. In inductive logic, beliefs about hypotheses must be expressible in numerical form to enable operations such as comparison and combination, ensuring consistency across different propositions. Cox sought to establish such a system by deriving the standard rules of probability, such as the sum and product rules, from minimal, intuitively appealing assumptions about the structure of rational belief, rather than postulating them as axioms. This approach avoids circularity and provides a logical justification for using probability as the unique measure of uncertainty.

In the context of scientific inference, particularly in empirical fields like physics where Cox worked as a researcher, handling uncertainty is essential for modeling natural phenomena based on limited observations. Inductive methods must quantify the strength of evidence supporting theories, facilitating predictions and updates in light of new data. Cox's theorem addresses this by demonstrating that any adequate representation of such uncertainty must conform to the calculus of probabilities, thereby grounding scientific reasoning in a rigorous logical framework. This motivation builds on earlier ideas from thinkers such as Frank Ramsey and Harold Jeffreys, who explored subjective interpretations of probability.

Core Postulates

Postulate of Qualitative Probability

The postulate of qualitative probability, as formulated by Cox, establishes a partial ordering on the degrees of belief assigned to propositions, allowing for comparative assessments of plausibility without requiring numerical quantification. Specifically, for any two propositions A and B in a given context, the belief in A is either greater than or equal to the belief in B, less than or equal to it, or the two are incomparable, thereby reflecting the relative plausibility one assigns to them based on available evidence. This ordering is transitive: if the belief in A exceeds that in B, and the belief in B exceeds that in C, then the belief in A exceeds that in C. Such a structure captures the intuitive process of comparative reasoning in uncertain situations, where one might judge one hypothesis as more likely than another without specifying exact probabilities.

A key requirement of this postulate is consistency with logical entailment, ensuring that the ordering aligns with deductive logic. If A logically implies B (i.e., whenever A is true, B must also be true), then the belief in A cannot exceed the belief in B; formally, the belief in A is less than or equal to the belief in B. This monotonicity prevents reversals, such as assigning higher plausibility to a stronger claim than to a weaker one it entails. For instance, if one believes "it is raining" more certainly than "the ground is wet," this would violate the postulate, since rain entails wetness. Cox introduced this condition to extend deductive logic to plausible reasoning while preserving its foundational principles.

The postulate further addresses how beliefs combine under logical operations, providing qualitative bounds for conjunctions and disjunctions. For the conjunction A \land B, the belief is at most as great as the minimum of the individual beliefs: P(A \land B) \leq \min(P(A), P(B)), reflecting that a joint occurrence cannot be more plausible than its least plausible component. Similarly, for the disjunction A \lor B, the belief is at least as great as the maximum of the individual beliefs: P(A \lor B) \geq \max(P(A), P(B)), as the disjunction is true whenever the more certain proposition is. These inequalities ensure that the ordering respects the semantics of the logical connectives, mirroring how people intuitively adjust confidence when considering combined propositions; for example, one deems the conjunction "the light is on and the switch is up" less certain than either claim alone if one is doubtful of both. This qualitative framework motivates the subsequent postulate of functional representation by suggesting that such orderings can be consistently mapped to numerical measures.
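
The following sketch illustrates these qualitative bounds numerically, using a small hypothetical probability space over two binary propositions; the events, labels, and equal weights are illustrative assumptions, not drawn from Cox's text.

    # Minimal sketch: check the qualitative bounds on conjunction and disjunction
    # over a small, hypothetical finite probability space.
    from itertools import product

    # Four atomic outcomes of two binary variables, equally weighted (assumed).
    outcomes = list(product([True, False], repeat=2))
    weight = 1.0 / len(outcomes)

    def prob(event):
        """Probability of an event given as a predicate over outcomes."""
        return sum(weight for w in outcomes if event(w))

    A = lambda w: w[0]          # e.g. "the light is on"
    B = lambda w: w[1]          # e.g. "the switch is up"

    p_a, p_b = prob(A), prob(B)
    p_and = prob(lambda w: A(w) and B(w))
    p_or = prob(lambda w: A(w) or B(w))

    assert p_and <= min(p_a, p_b)   # conjunction bound
    assert p_or >= max(p_a, p_b)    # disjunction bound
    print(p_a, p_b, p_and, p_or)    # 0.5 0.5 0.25 0.75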

Postulate of Functional Representation

The Postulate of Functional Representation, the second core axiom of Cox's theorem, assumes that the comparative ordering of plausibilities from the qualitative probability relation can be embedded into a quantitative scale via a real-valued function defined over an appropriate algebra of propositions. This embedding transforms ordinal beliefs into measurable degrees of plausibility, represented as real numbers, which enables arithmetic operations and quantitative reasoning about uncertainty. The postulate ensures that any rational system of plausible reasoning satisfying this assumption will align with numerical probability measures.

In the mathematical setup, the set of propositions forms a Boolean algebra equipped with operations for conjunction (∧), disjunction (∨), and negation (¬), providing a structure that captures logical entailment and compatibility. The real-valued function, typically denoted P(\cdot \mid \cdot), maps pairs of propositions to the real numbers in a way that preserves the qualitative order: if proposition A is more plausible than C given background B (i.e., A \succeq_Q C under the qualitative relation), then P(A \mid B) \geq P(C \mid B). This order-preserving property establishes a correspondence between the qualitative structure and a numerical one, allowing the function to act as a homomorphism from the algebra of propositions to the ordered set of real numbers.

The numerical representation combines multiplicatively for conjunctions, with the plausibility of AB given C taking the form P(AB \mid C) = P(A \mid BC) \cdot P(B \mid C). Transformations of the plausibility measure, such as the logarithm of the plausibility or the log-odds \log(P / (1 - P)), place beliefs on an additive scale over the reals, where compound propositions can be handled through sums that mirror logical compositions in the underlying algebra. Such a structure supports the manipulation of uncertainties in complex scenarios, like sequential integration of evidence, by leveraging the additivity inherent in these transformed scales.

The existence of this real-valued representation requires the qualitative ordering to satisfy universal comparability, meaning every pair of propositions is comparable, and a density or continuity condition, ensuring the order forms a scale without gaps that would prevent a continuous numerical mapping. These conditions guarantee the homomorphism's validity, as they align with foundational results on embedding ordered sets into the reals, thereby avoiding pathologies such as incomparable elements or discontinuous jumps in plausibility. Without them, no such faithful quantitative extension would exist.
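
As a minimal illustration of the product rule and of order preservation under a monotone relabeling such as the log-odds, the sketch below uses an arbitrary joint distribution over three binary propositions; all numbers and proposition names are assumptions made for illustration.

    # Minimal sketch: the product rule P(AB|C) = P(A|BC) * P(B|C), and the
    # log-odds map as an order-preserving relabeling of plausibilities.
    import math
    from itertools import product

    # Arbitrary normalized joint weights over three binary propositions (a, b, c).
    joint = {w: p for w, p in zip(product([True, False], repeat=3),
                                  [0.10, 0.05, 0.20, 0.15, 0.05, 0.10, 0.25, 0.10])}

    def prob(pred):
        return sum(p for w, p in joint.items() if pred(*w))

    def cond(pred, given):
        return prob(lambda *w: pred(*w) and given(*w)) / prob(given)

    A = lambda a, b, c: a
    B = lambda a, b, c: b
    C = lambda a, b, c: c
    AB = lambda a, b, c: a and b
    BC = lambda a, b, c: b and c

    lhs = cond(AB, C)
    rhs = cond(A, BC) * cond(B, C)
    assert math.isclose(lhs, rhs)                       # product rule holds

    log_odds = lambda p: math.log(p / (1 - p))          # monotone relabeling
    x, y = cond(A, C), cond(B, C)
    assert (x >= y) == (log_odds(x) >= log_odds(y))     # ordering is preserved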

Postulate of Symmetry and Uniqueness

The postulate of symmetry requires that the representation of degrees of plausibility remains invariant under arbitrary choices in labeling or ordering compound events, ensuring that the logical structure of propositions does not depend on superficial designations. In Cox's framework, this symmetry arises from the inherent duality of Boolean algebra, where interchanging conjunction (written with a dot, as in i.j) and disjunction in valid equations yields equally valid forms, maintaining consistency across different ways of combining propositions. This invariance prevents biases from arbitrary event orderings or groupings, aligning the representation with the objective structure of the underlying logic.

Complementing this, the uniqueness postulate establishes the representation up to a monotonic rescaling, meaning that while the specific scale of the plausibility function may vary (e.g., via a power transformation such as P^r for some constant r), its functional form is uniquely determined by the requirement of additivity when combining pieces of evidence. This ensures that the measure of plausibility preserves ordinal relations and additive properties for disjoint or mutually exclusive propositions, without allowing arbitrary nonlinear distortions that would violate logical consistency. Cox argued that such uniqueness stems from the need for the representation to consistently aggregate evidence from separate sources, fixing the functional form except for scale choices that are resolved by later conventions.

A key formal condition within this postulate addresses the joint plausibility of propositions A, B, and C, stipulating that P(A \land B \mid C) must satisfy a functional equation that enforces the product rule, such as i.j \mid h = (i \mid h) \cdot (j \mid h.i), where i, j, and h denote propositions. This arises from requiring the representation to handle compound events in a way that is symmetric and associative, leading naturally to multiplicative combination for conjunctions under fixed background information. By imposing this constraint on the broader functional representation, the postulate guarantees that the numerical operations mirror the logical connectives without favoring any particular ordering or grouping of events.

To ensure the representation is specifically probabilistic, Cox employed differential methods, formulating the consistency requirements as functional equations, such as the associativity condition F(x, F(y, z)) = F(F(x, y), z), whose solutions under continuity assumptions reduce to logarithmic or exponential forms characteristic of probability theory. These constraints derive the additive and multiplicative rules from local consistency requirements, confirming that only probability-like measures satisfy the symmetry and uniqueness criteria across all scales. This approach ties directly into the overall derivation of the probability calculus by providing the rigorous bridge from qualitative postulates to quantitative operations.
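
A small numerical sketch can illustrate both points: the associativity equation is satisfied by the multiplicative combination rule, and a power relabeling P^r leaves the form of the product rule intact. The choice of F and all numerical values below are illustrative assumptions, not part of Cox's derivation.

    # Minimal sketch: associativity F(x, F(y, z)) = F(F(x, y), z) for the
    # multiplicative rule F(x, y) = x*y, and invariance of the product-rule
    # form under the power relabeling p -> p**r.
    import math
    import random

    F = lambda x, y: x * y   # multiplicative combination rule

    random.seed(0)
    for _ in range(100):
        x, y, z = (random.random() for _ in range(3))
        assert math.isclose(F(x, F(y, z)), F(F(x, y), z))   # associativity

    # If P satisfies P(AB|C) = P(A|BC) * P(B|C), then so does Q = P**r.
    r = 2.7
    p_ab_c, p_a_bc, p_b_c = 1/6, 2/3, 1/4               # consistent triple: 1/6 = 2/3 * 1/4
    q_ab_c, q_a_bc, q_b_c = (v**r for v in (p_ab_c, p_a_bc, p_b_c))
    assert math.isclose(q_ab_c, q_a_bc * q_b_c)          # same functional form after relabeling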

Theorem Statement and Derivation

Formal Statement

Cox's theorem asserts that, given the three core postulates of qualitative probability, functional representation, and symmetry and uniqueness, any consistent system for representing degrees of belief about propositions is mathematically isomorphic to the standard calculus of probabilities. Specifically, the theorem shows that beliefs can be quantified by a real-valued function P that satisfies the axioms of probability theory: 0 \leq P(A) \leq 1 for any proposition A, P(\top) = 1 where \top denotes the tautology (certain truth), and P(A \lor B) = P(A) + P(B) for mutually exclusive propositions A and B.

The theorem's scope is limited to finite partitions of the space of possible propositions, where the additivity axiom applies directly to a finite number of mutually exclusive and exhaustive outcomes; extensions to infinite partitions require additional assumptions, such as continuity conditions or limits of finite approximations, to ensure countable additivity. In this framework, conditional plausibilities are denoted P(E \mid H), representing the degree of belief in a proposition E given the hypothesis or background information H.

A central result of the theorem is that the only functions satisfying the postulates for conditional plausibility are those of the form P(A \mid B) = f\left( \frac{P(A \land B)}{P(B)} \right), where f is a strictly monotonic function and P(B) > 0. This form implies that updating beliefs via Bayes' rule is uniquely determined up to the choice of f, which can be normalized to the identity through a suitable rescaling of the plausibility scale, yielding the standard rule P(A \mid B) = \frac{P(A \land B)}{P(B)}.
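
The sketch below checks the stated axioms and the normalized conditional form on a hypothetical three-element partition; the outcome labels and weights are assumptions made for illustration.

    # Minimal sketch: verify 0 <= P <= 1, P(tautology) = 1, finite additivity
    # for exclusive propositions, and the conditional form P(A|B) = P(A and B)/P(B).
    import math

    # Mutually exclusive, exhaustive outcomes with assumed weights.
    weights = {"H1": 0.5, "H2": 0.3, "H3": 0.2}

    def P(event):
        """Probability of a set of outcomes."""
        return sum(weights[o] for o in event)

    tautology = set(weights)
    assert math.isclose(P(tautology), 1.0)                    # normalization
    assert all(0.0 <= P({o}) <= 1.0 for o in weights)         # range

    A, B = {"H1"}, {"H2"}                                     # mutually exclusive
    assert math.isclose(P(A | B), P(A) + P(B))                # finite additivity

    E, H = {"H1", "H2"}, {"H2", "H3"}
    assert math.isclose(P(E & H) / P(H), 0.3 / 0.5)           # conditional form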

Derivation of Probability Calculus

The derivation of the standard probability calculus from Cox's postulates proceeds in several logical steps, establishing the familiar rules of additivity, the product rule, normalization, and conditional probability. Starting from the qualitative ordering of plausibility relations among propositions and the requirement of a functional representation, Cox demonstrates that these lead to a quantitative measure that is additive for disjoint events. Specifically, for mutually exclusive propositions A and B, the plausibility satisfies P(A \cup B \mid H) = P(A \mid H) + P(B \mid H), where H is the given background information, ensuring that probabilities behave as a measure over the algebra of events.

The symmetry postulate, which demands that the order in which evidence is combined does not affect the overall consistency of an inference, implies the product rule for joint probabilities. This yields P(A \cap B \mid H) = P(A \mid H) \cdot P(B \mid A \cap H), reflecting the symmetric treatment of evidence in plausible reasoning. From this, the handling of conditional probabilities follows, governed by a consistency relation on the representation: \frac{\partial}{\partial u} \log P(u \mid v) = \frac{\partial}{\partial v} \log P(v \mid u). Solving this equation under the postulates results in the standard form P(A \mid B) = P(A \cap B) / P(B), provided P(B) > 0, which uniquely determines the structure of conditional probabilities.

Normalization is obtained by setting the probability of the universal event (certainty) to 1, so that P(\Omega \mid H) = 1, and bounding probabilities between 0 and 1, with P(A \mid H) + P(\neg A \mid H) = 1 for any proposition A. This establishes the full scale of the probability measure. As a direct corollary, Bayes' theorem emerges from combining the product rule and the conditional form: P(A \mid B) = \frac{P(B \mid A) P(A)}{P(B)}, justifying the inversion of conditional probabilities in inference. These steps collectively recover the axioms of Kolmogorov's probability theory as the unique representation satisfying Cox's desiderata.
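
As a concrete check of the derived rules, the sketch below verifies the sum rule, the conditional form, and Bayes' theorem on a hypothetical joint distribution over two binary propositions; the numbers are arbitrary but normalized.

    # Minimal sketch: Bayes' theorem and the sum rule recovered numerically
    # from an assumed joint distribution over two binary propositions.
    import math

    # Joint probabilities for (A, B) in {T, F}^2 (arbitrary, sums to 1).
    joint = {(True, True): 0.12, (True, False): 0.28,
             (False, True): 0.18, (False, False): 0.42}

    P_A = joint[(True, True)] + joint[(True, False)]    # marginal P(A)
    P_B = joint[(True, True)] + joint[(False, True)]    # marginal P(B)
    P_A_given_B = joint[(True, True)] / P_B              # conditional form
    P_B_given_A = joint[(True, True)] / P_A

    # Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
    assert math.isclose(P_A_given_B, P_B_given_A * P_A / P_B)

    # Sum rule for the negation: P(A|B) + P(not A|B) = 1
    P_notA_given_B = joint[(False, True)] / P_B
    assert math.isclose(P_A_given_B + P_notA_given_B, 1.0)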

Implications

Justification for Bayesian Inference

Cox's theorem establishes a rigorous foundation for Bayesian inference by demonstrating that any consistent system for representing and updating degrees of belief under uncertainty must adhere to the axioms of probability theory. Specifically, the theorem shows that rational plausibility assignments, intended to quantify subjective confidence in propositions, uniquely correspond to probabilities, ensuring logical coherence in reasoning. This derivation, based on qualitative conditions of order, additivity for disjoint events, and functional dependence, implies that beliefs cannot be manipulated without violating consistency, thereby justifying the use of probability as the calculus of inductive inference.

In contrast to frequentist interpretations, which define probability in terms of long-run frequencies in repeatable experiments, Cox's approach derives the probability rules from desiderata of rational belief, independent of empirical repetition. This supports the Bayesian framework, where subjective priors encapsulate initial knowledge and are updated via conditional probabilities to incorporate new evidence, rendering Bayes' rule an inevitable consequence of coherence rather than an ad hoc assumption. As noted by Jaynes, this framework unifies inference by treating probability as an extension of deductive logic to partial belief, applicable beyond strictly repeatable experiments.

The theorem's implications extend to decision-making, where coherent probabilities prevent inconsistencies such as Dutch book arguments, scenarios in which inconsistent beliefs lead to guaranteed losses in betting. In scientific hypothesis testing, it ensures that updating beliefs about competing models given new data maintains the additivity and multiplication rules, promoting rational evaluation of evidence. For instance, when assessing a hypothesis such as the fairness of a coin after observing a sequence of heads, the initial degree of belief (the prior) must be multiplied by the likelihood of the observed data to yield the updated belief (the posterior), while the beliefs in the fair and biased hypotheses remain additive as mutually exclusive options. This process, derived directly from Cox's postulates, exemplifies how Bayesian updating achieves coherence in practical inference.
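
A minimal sketch of the coin example just described follows; the prior, the assumed bias of the "biased" hypothesis, and the number of observed heads are all illustrative assumptions rather than values from the sources.

    # Minimal sketch: Bayesian updating for the fair-vs-biased coin example.
    fair_bias = 0.5            # P(heads) under the "fair" hypothesis
    loaded_bias = 0.8          # P(heads) under the "biased" hypothesis (assumed)
    prior = {"fair": 0.5, "biased": 0.5}    # mutually exclusive, additive priors

    n_heads = 6                # observed: six heads in a row (assumed data)
    likelihood = {"fair": fair_bias ** n_heads, "biased": loaded_bias ** n_heads}

    # Product rule: unnormalized posterior = prior * likelihood; sum rule normalizes.
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    evidence = sum(unnorm.values())
    posterior = {h: unnorm[h] / evidence for h in unnorm}

    print(posterior)           # belief shifts toward "biased" while remaining additive
    assert abs(sum(posterior.values()) - 1.0) < 1e-12

Running this shifts most of the posterior mass to the "biased" hypothesis while the posterior remains normalized, exactly as the sum and product rules require.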

Representation of Uncertainty

Cox's theorem demonstrates that any rational quantification of uncertainty satisfying the specified qualitative postulates must be isomorphic to a probability measure, thereby establishing probability distributions as the fundamental representation of degrees of belief in uncertain propositions. This implication underscores the theorem's role in formalizing uncertainty across diverse domains, including scientific hypothesis testing, where probabilities quantify evidential support; automated systems for decision-making under incomplete information; and everyday reasoning for assessing plausibility in practical scenarios.

The theorem's framework extends naturally to continuous cases through limits of discrete representations, yielding probability densities that capture uncertainty over uncountable spaces, such as real-valued parameters in statistical models. This extension preserves the correspondence to standard probability theory, including Kolmogorov's axioms, ensuring that continuous measures adhere to the same logical structure as their discrete counterparts.

A key consequence is the uniqueness of this representation: all valid measures of plausibility are equivalent up to a monotonic relabeling of values, which unifies seemingly diverse models of uncertainty under the probabilistic calculus and eliminates the need for alternative formalisms that violate the postulates. This equivalence provides a theoretical foundation for practical implementations, such as Bayesian networks, which operationalize the probability rules for efficient computation in complex inference tasks across artificial intelligence and statistics.
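
As a small illustration of how such probability-based rules are operationalized, the sketch below performs inference by enumeration in a hypothetical two-node Bayesian network; the network structure and conditional tables are assumptions for illustration only.

    # Minimal sketch: a two-node Bayesian network (Rain -> WetGrass) queried by
    # exhaustive enumeration; numbers and structure are illustrative assumptions.
    P_rain = {True: 0.2, False: 0.8}
    P_wet_given_rain = {True: 0.9, False: 0.1}   # P(WetGrass=True | Rain)

    def joint(rain, wet):
        p_wet = P_wet_given_rain[rain]
        return P_rain[rain] * (p_wet if wet else 1.0 - p_wet)

    # Posterior P(Rain | WetGrass=True) by enumeration and normalization.
    numerator = joint(True, True)
    evidence = joint(True, True) + joint(False, True)
    print(numerator / evidence)   # ~0.69: rain becomes more plausible given wet grass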

Interpretations and Extensions

Philosophical Perspectives

Cox's theorem has been interpreted primarily within a subjectivist framework, where probabilities represent degrees of belief or credence, aligning with the view that uncertainty is encoded through subjective expectations that must conform to the structure of probability theory to ensure coherence. This subjectivist foundation, as articulated by Cox himself, treats probability as a calculus of "reasonable expectation" rather than of objective frequencies, allowing beliefs to be updated systematically in response to evidence. However, the theorem also permits objective constraints, as the incorporation of empirical evidence imposes intersubjective standards on these probabilities, bridging subjective interpretation with evidential objectivity without requiring a fully objective account of probability.

The theorem complements Dutch book arguments, particularly those developed by Bruno de Finetti, by providing a foundational derivation of probability's form from qualitative postulates of plausible reasoning, reinforcing the idea that rational belief systems must avoid inconsistencies that could lead to guaranteed losses in betting scenarios. While de Finetti's work demonstrates that deviations from coherence invite Dutch books (sure-win bets against incoherent credences), Cox's approach establishes the uniqueness of probability as the representation satisfying minimal structural requirements for plausible reasoning, thus offering a logical complement to de Finetti's pragmatic criterion. This synergy underscores probabilistic coherence as a normative standard for epistemic rationality, with both results highlighting the inescapability of probabilistic structure for avoiding self-undermining beliefs.

Debates surrounding the "reasonableness" of Cox's postulates center on their status as minimal conditions for rational belief, positing that any system of plausible reasoning adhering to basic qualitative principles, such as representing degrees of belief on ordered scales and ensuring consistency with logical conjunctions, must be isomorphic to probability theory. Proponents argue that these postulates capture the essence of reasoning under uncertainty, serving as a weak yet indispensable foundation for epistemic norms without presupposing stronger commitments like countable additivity. This has influenced confirmation theory, where the theorem justifies probabilistic measures of evidential support, enabling formal assessments of how hypotheses are strengthened or weakened by data through Bayesian updating, thereby providing a rigorous basis for inductive reasoning.

In modern philosophical discussions, Cox's theorem plays a role in critiques of Karl Popper's falsificationism, as Bayesian epistemologists leverage its derivation to advocate for graded degrees of confirmation rather than outright falsification or corroboration. By establishing probability as the unique calculus for rational plausibility, the theorem supports the view that confirming evidence can incrementally increase a theory's credence, challenging Popper's insistence that scientific progress relies solely on refutation without positive support, thus highlighting tensions between strict deductivism and probabilistic inductivism.
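
The Dutch book mechanism mentioned above can be made concrete with a small sketch; the credences and stakes are hypothetical, chosen only to show how violating additivity yields a guaranteed loss.

    # Minimal sketch: a Dutch book against incoherent credences with
    # P(A) = 0.6 and P(not A) = 0.6, which violates P(A) + P(not A) = 1.
    stake = 1.0
    credence_A, credence_notA = 0.6, 0.6

    # The agent pays credence * stake for a ticket worth `stake` if the proposition is true.
    def net_payoff(a_is_true):
        payoff_A = (stake if a_is_true else 0.0) - credence_A * stake
        payoff_notA = (stake if not a_is_true else 0.0) - credence_notA * stake
        return payoff_A + payoff_notA

    print(net_payoff(True), net_payoff(False))   # -0.2 -0.2: a sure loss either way
    assert net_payoff(True) < 0 and net_payoff(False) < 0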

Criticisms and Limitations

One prominent criticism of Cox's theorem concerns its continuity assumption, which requires the range of plausibility values to be dense in the real numbers, effectively necessitating an infinite domain of propositions to satisfy the theorem's conditions. This assumption fails in finite domains, common in artificial intelligence applications, where no such density exists, rendering the theorem inapplicable without additional constraints. Even in infinite domains, the derivation relies on continuity of the functional forms, and extending it to rigorous probability measures over infinite spaces requires supplementary measure-theoretic foundations, such as the Kolmogorov axioms, which Cox's original postulates do not explicitly incorporate.

The theorem's treatment of qualitative probability has also been critiqued for assuming that plausibility relations form a total order, allowing representation as real numbers, whereas human judgments often involve incomparabilities in which degrees of belief cannot be strictly compared. This postulate of universal comparability overlooks scenarios in everyday or expert reasoning where propositions are only partially ordered or incomparable, leading to representations that do not align with the theorem's isomorphic mapping to probability.

Cox's dependence on real-valued measures has drawn challenges from alternative uncertainty frameworks that sidestep the theorem's uniqueness claim by relaxing additivity or order assumptions. For instance, fuzzy logic employs operations like minimum and maximum for conjunction and disjunction, producing non-probabilistic plausibility assignments that satisfy qualitative ordering requirements without yielding additive probabilities. Similarly, the non-additive measures of Dempster-Shafer theory allow belief functions that assign masses to subsets without full probabilistic decomposition, demonstrating coherent reasoning outside the theorem's scope. While Cox's theorem provides a foundational justification for probability in certain contexts, its historical formulations have been noted for incompleteness in addressing non-probabilistic logics, as highlighted in Halpern's 1999 analysis, which identifies counterexamples to the core assumptions even under weakened conditions.

Recent extensions, such as applications of generalized Cox-style axioms to quantum probability, explore non-commutative propositional structures, revealing limitations of the original theorem's Boolean framework for handling interference effects in quantum mechanics. For example, work in the 2020s has derived quantum probability rules by adapting Cox's functional approach to non-distributive lattices, underscoring the need for broader axiomatizations beyond classical probability.

Finally, the theorem does not directly accommodate imprecise or interval-valued probabilities, assuming sharp real-number assignments that fail to capture vague evidence where bounds on belief are more appropriate than point estimates. This limitation restricts its applicability in scenarios with partial or ambiguous information, where imprecise probability models provide a more flexible representation of uncertainty.
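
The contrast with fuzzy-style combination can be illustrated with a brief sketch; the membership values are arbitrary, and the min/max connectives follow standard fuzzy logic conventions rather than anything in Cox's framework.

    # Minimal sketch: fuzzy min/max connectives versus the additive probabilistic rules.
    mu_A, mu_B = 0.7, 0.6                # assumed fuzzy degrees of membership for A and B

    fuzzy_and = min(mu_A, mu_B)          # 0.6
    fuzzy_or = max(mu_A, mu_B)           # 0.7
    fuzzy_not_A = 1.0 - mu_A             # standard fuzzy negation
    print(fuzzy_and, fuzzy_or)

    # Under max/min, "A or not A" is not certain, so additivity fails:
    print(max(mu_A, fuzzy_not_A))        # 0.7, not 1.0 as additivity would require

    # The probabilistic sum rule keeps a proposition and its negation additive:
    print(mu_A + (1.0 - mu_A))           # 1.0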
