
Conditional probability

Conditional probability is a measure of the probability of an event occurring given that another specific event has already occurred, formally defined as P(A \mid B) = \frac{P(A \cap B)}{P(B)} where P(B) > 0. This concept adjusts the sample space to the conditioning event B, effectively renormalizing probabilities within that subspace. The origins of conditional probability trace back to the 17th century, with early discussions appearing in the correspondence between Blaise Pascal and Pierre de Fermat in 1654, particularly in their analysis of the "problem of points" involving interrupted games of chance. The term itself emerged later, first documented in George Boole's An Investigation of the Laws of Thought in 1854, where it was used in logical contexts. Earlier, in the 18th century, Thomas Bayes had incorporated conditional reasoning into what became known as Bayes' theorem in his posthumously published 1763 essay, providing a framework for updating probabilities based on new evidence.

In modern probability theory, conditional probability serves as a cornerstone for understanding dependence between events and underpins key results such as Bayes' theorem, the law of total probability, and the chain rule for joint probabilities. It is essential in fields like statistics, where it enables inference in hypothesis testing and predictive modeling; machine learning, for algorithms like naive Bayes classifiers in spam detection and recommendation systems; and decision-making under uncertainty, with applications in medical diagnostics and risk assessment. Events A and B are independent if P(A \mid B) = P(A), a condition that simplifies computations and highlights non-dependence.

Foundations

Definition

Conditional probability is a fundamental measure in probability theory that quantifies the likelihood of an event occurring given that another event has already occurred. In the frequentist interpretation, it represents the limiting relative frequency with which event A occurs among the occurrences of event B, as the number of trials approaches infinity. This intuitive notion aligns with empirical observations, where the conditional probability P(A|B) is the proportion of times A happens in the subsequence of trials where B is realized.

Formally, in the axiomatic framework established by Andrey Kolmogorov, the conditional probability of event A given event B (with P(B) > 0) is defined as P(A|B) = \frac{P(A \cap B)}{P(B)}, where P(A \cap B) is the probability of the intersection of A and B. This definition extends the basic axioms of probability—non-negativity, normalization, and countable additivity—by introducing a normalized ratio that preserves probabilistic structure while conditioning on the restricting event B. As a core primitive concept, it underpins derivations of more advanced theorems and enables the modeling of dependencies in random phenomena.

Unlike joint probability P(A \cap B), which measures the simultaneous occurrence of both events without restriction, conditional probability P(A|B) adjusts for the information provided by B, often yielding a different value that reflects updated likelihoods. This distinction is essential for distinguishing unconditional joint events from scenarios constrained by prior outcomes.
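
The frequentist reading suggests a direct numerical check: estimate P(A \mid B) as the fraction of simulated trials in which B occurs that also satisfy A, and compare it with the ratio definition. A minimal Python sketch with a hypothetical single-die example (A = even face, B = face greater than 3), where both approaches give 2/3:

```python
import random

random.seed(0)
trials = 200_000
count_B = 0          # trials where the conditioning event B occurs (face > 3)
count_A_and_B = 0    # trials where A (even face) and B both occur

for _ in range(trials):
    face = random.randint(1, 6)
    if face > 3:
        count_B += 1
        if face % 2 == 0:
            count_A_and_B += 1

# Limiting relative frequency of A among the trials where B occurred
print(count_A_and_B / count_B)     # close to 2/3
# Ratio definition P(A ∩ B) / P(B) with the exact probabilities
print((2 / 6) / (3 / 6))           # exactly 2/3
```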

Notation

The standard notation for the conditional probability of an event A given an event B is P(A \mid B), where the vertical bar \mid signifies "given" or "conditioned on" B. This convention interprets P(A \mid B) as the probability of A restricted to the occurrence of B, normalized appropriately. For conditioning on multiple events, the notation extends to P(A \mid B, C), indicating the probability of A given the joint occurrence of B and C. In multivariate settings, the notation clearly delineates the conditioning set, with commas separating the conditioned events to prevent ambiguity in grouping. Alternative notations appear in some probability literature, such as P_B(A) to emphasize the conditional probability measure induced by B. Another variant, P(A/B), has been used in some texts to denote the conditional probability, though it is less common today.

Conditioning Types

On Events

In the axiomatic framework established by Andrey Kolmogorov in 1933, conditional probability is defined within the context of a probability space consisting of a sample space \Omega, an event algebra (specifically, a \sigma-algebra \mathcal{F} of measurable subsets of \Omega), and a probability measure P: \mathcal{F} \to [0,1] satisfying the standard axioms of non-negativity, normalization, and countable additivity. For A, B \in \mathcal{F} with P(B) > 0, the conditional probability is given by P(A \mid B) = \frac{P(A \cap B)}{P(B)}, which quantifies the probability of A given that B has occurred, building directly on the measure-theoretic structure of the probability space. This definition implies an axiomatic treatment of conditional probability itself: for a fixed conditioning event B \in \mathcal{F} with P(B) > 0, the map Q(A) = P(A \mid B) for A \in \mathcal{F} forms a new probability measure on \mathcal{F}, inheriting the Kolmogorov axioms. Specifically, Q(A) \geq 0 for all A (non-negativity), Q(\Omega) = 1 (normalization), and for a countable collection of pairwise disjoint events \{A_i\}_{i=1}^\infty \subset \mathcal{F}, Q\left(\bigcup_{i=1}^\infty A_i\right) = \sum_{i=1}^\infty Q(A_i) (countable additivity). This perspective treats conditioning on B as restricting the probability space to the subspace B, renormalizing probabilities accordingly while preserving the relative proportions of events.

Bruno de Finetti offered a foundational reinterpretation in his subjective theory of probability, emphasizing operational and betting-based axioms over measure theory. He regarded P(A \mid B) not as a derived ratio but as the direct probability ascribed to the conditional event "A given B," interpreted as the degree of belief in A occurring under the explicit condition that B has occurred, with the joint relation P(A \cap B) = P(A \mid B) \cdot P(B) emerging as a consequence of coherence to avoid Dutch book arguments. This approach prioritizes conditional probabilities as primitives, suitable for expressing degrees of belief in event-based scenarios without assuming a full unconditional measure.

Alfréd Rényi proposed a new axiomatic foundation in 1955, taking conditional probabilities as primitives in conditional probability spaces, which allows for systems with unbounded measures where not all events in the algebra have assigned (normalized) unconditional probabilities. In Rényi's system, a conditional probability function is a primitive that assigns values P(X \mid Y) to pairs of events X, Y \in \mathcal{F} (with Y \neq \emptyset), satisfying axioms of non-negativity, normalization P(Y \mid Y) = 1, and additivity for compatible conditionals, without requiring a complete unconditional probability measure on \mathcal{F}. This enables axiomatic treatment in situations of partial knowledge about the event space.
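
To make the derived measure Q(\cdot) = P(\cdot \mid B) concrete, the short Python sketch below builds a hypothetical finite probability space (a loaded die) and checks normalization and additivity of the conditioned measure:

```python
from fractions import Fraction

# Hypothetical loaded die: face k has probability k/21
P = {k: Fraction(k, 21) for k in range(1, 7)}
omega = set(P)

def prob(event):
    """Unconditional probability of a set of faces."""
    return sum(P[w] for w in event)

def cond(event, given):
    """Conditioned measure Q(A) = P(A ∩ B) / P(B), defined when P(B) > 0."""
    return prob(event & given) / prob(given)

B = {2, 4, 6}            # conditioning event: even face
A1, A2 = {2}, {4}        # disjoint events

assert cond(omega, B) == 1                             # normalization of Q
assert cond(A1 | A2, B) == cond(A1, B) + cond(A2, B)   # additivity of Q for disjoint events
print(cond(A1, B), cond(A2, B))                        # 1/6 and 1/3 under the conditioned measure
```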

On Random Variables

In probability theory, the conditional probability associated with discrete random variables X and Y is defined for values x and y in their respective supports, where P(Y = y) > 0, as P(X = x \mid Y = y) = \frac{P(X = x, Y = y)}{P(Y = y)}. This expression yields the conditional probability mass function (pmf) of X given Y = y, which fully characterizes the updated distribution of X after observing the specific value y of Y. The interpretation of this conditional pmf is that it represents the probabilities of the possible outcomes of X, revised based on the information provided by the realization Y = y; for instance, if X and Y model the outcomes of successive coin flips, conditioning on Y = y adjusts the likelihoods for X to reflect the observed flip. This framework extends the basic event-based definition—where events are indicator functions of subsets—by allowing Y to take multiple values, thus enabling a distribution over finer-grained conditional scenarios rather than binary or coarse event partitions.

For continuous random variables, the analogous concept shifts to probability densities, assuming the joint distribution has a density function f_{X,Y} with respect to Lebesgue measure. The conditional probability density function (pdf) of X given Y = y, where the marginal density f_Y(y) > 0, is given by f_{X \mid Y}(x \mid y) = \frac{f_{X,Y}(x,y)}{f_Y(y)}. This conditional pdf describes the updated density of X upon observing Y = y, with probabilities for intervals computed via integration over the conditional density. Unlike conditioning on events, which restricts attention to probabilities over fixed subsets and often relies on indicator random variables, conditioning on continuous random variables leverages the full density structure to model dependencies across a continuum of outcomes, providing a more precise tool for analyzing joint behaviors in stochastic processes.
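
A small numerical sketch can make the discrete case concrete: given a joint pmf table for (X, Y), the conditional pmf of X given Y = y is obtained by selecting the entries for that y and dividing by their marginal total. The joint table below is hypothetical.

```python
# Hypothetical joint pmf P(X = x, Y = y) for X, Y taking values in {0, 1}
joint = {
    (0, 0): 0.30, (0, 1): 0.10,
    (1, 0): 0.20, (1, 1): 0.40,
}

def marginal_Y(y):
    """Marginal pmf P(Y = y), obtained by summing the joint pmf over x."""
    return sum(p for (x, yy), p in joint.items() if yy == y)

def conditional_X_given_Y(x, y):
    """Conditional pmf P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y)."""
    return joint[(x, y)] / marginal_Y(y)

print(conditional_X_given_Y(0, 1))  # 0.10 / 0.50 = 0.2
print(conditional_X_given_Y(1, 1))  # 0.40 / 0.50 = 0.8 (sums to 1 with the line above)
```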

On Zero-Probability Events

The standard definition of conditional probability, P(A \mid B) = \frac{P(A \cap B)}{P(B)}, is undefined when P(B) = 0. This limitation poses a significant challenge in continuous probability spaces, where events like a continuous random variable attaining a precise value have measure zero, despite the intuitive need to condition on such events for modeling purposes.

To address this, conditional probabilities are often resolved through the use of conditional densities in jointly continuous settings. The conditional density f_{Y \mid X}(y \mid x) = \frac{f_{X,Y}(x,y)}{f_X(x)} is defined for values x where the marginal density f_X(x) > 0, effectively extending the conditioning concept to points of positive density even though P(X = x) = 0. Heuristically, the Dirac delta function can represent these point conditions, allowing formal expressions like the joint density incorporating \delta(x - x_0) to model conditioning on exact values in continuous distributions.

A foundational rigorous resolution stems from Joseph L. Doob's martingale-based approach in 1953, where conditional expectations are defined as L^2-projections onto sub-\sigma-algebras, enabling the construction of conditional distributions via the Doob-Dynkin lemma for measurable functions. This framework underpins regular conditional distributions, which are Markov kernels P(\cdot \mid \omega) satisfying P(A \mid \omega) = P(A \mid \mathcal{G})(\omega) for \mathcal{G}-measurable sets A, with the property that P(A \cap B) = \int_B P(A \mid \omega) \, dP(\omega) for relevant events. Such distributions exist uniquely (up to almost sure equivalence) in standard Borel probability spaces, including Polish spaces, ensuring well-defined conditioning even on null sets.

In applications to continuous models, regular conditional distributions facilitate conditioning on exact values; for jointly normal random variables X and Y, the distribution of Y given X = x is normal with mean \mu_Y + \rho \frac{\sigma_Y}{\sigma_X} (x - \mu_X) and variance \sigma_Y^2 (1 - \rho^2), providing a concrete realization despite P(X = x) = 0.
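
The bivariate normal case can be checked numerically: simulate correlated normal pairs, keep the samples with X near a chosen x (a narrow-window approximation to the zero-probability event X = x), and compare their mean and variance with the closed-form conditional moments. The parameter values below are hypothetical; a minimal sketch using NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_X, mu_Y = 1.0, 2.0
sigma_X, sigma_Y, rho = 1.0, 0.5, 0.8

# Simulate jointly normal (X, Y) with the given correlation
cov = [[sigma_X**2, rho * sigma_X * sigma_Y],
       [rho * sigma_X * sigma_Y, sigma_Y**2]]
X, Y = rng.multivariate_normal([mu_X, mu_Y], cov, size=2_000_000).T

x0, eps = 1.5, 0.01
near = np.abs(X - x0) < eps          # narrow window approximating the event {X = x0}
emp_mean, emp_var = Y[near].mean(), Y[near].var()

# Closed-form conditional moments of Y given X = x0
cond_mean = mu_Y + rho * (sigma_Y / sigma_X) * (x0 - mu_X)
cond_var = sigma_Y**2 * (1 - rho**2)

print(emp_mean, cond_mean)   # both near 2.2
print(emp_var, cond_var)     # both near 0.09
```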

Illustrations

Basic Examples

A classic example of conditional probability arises when rolling two fair six-sided dice. Let B be the event that the sum of the numbers shown is 7, and let A be the event that at least one die shows a 1. The conditional probability P(A | B) is the probability that at least one die is 1 given that the sum is 7. The possible outcomes for sum 7 are the equally likely pairs (1,6), (2,5), (3,4), (4,3), (5,2), (6,1), giving six outcomes in total. Among these, the outcomes with at least one 1 are (1,6) and (6,1). Thus, there are 2 favorable outcomes out of 6 possible, so P(A \mid B) = \frac{2}{6} = \frac{1}{3}.

Another introductory example involves drawing a single card from a standard 52-card deck. Let C be the event of drawing a face card (jack, queen, or king; there are 12 such cards), and let D be the event of drawing an ace (there are 4 aces). The conditional probability P(D | C) is the probability of drawing an ace given that a face card was drawn. Since aces are not face cards, the events D and C are mutually exclusive, so there are 0 aces among the 12 face cards. Thus, P(D \mid C) = \frac{0}{12} = 0. This demonstrates that conditional probabilities can be zero when the conditioned event precludes the target event.

The Monty Hall problem offers a well-known illustration of conditional probability in a game-show context. A contestant selects one of three doors, one hiding a car (the prize) and the other two hiding goats. The host, aware of the contents, opens a different door revealing a goat. The contestant may then stick with their original choice or switch to the remaining unopened door. The probability of winning the car by switching is 2/3. Initially, the probability that the car is behind the chosen door is 1/3, and the probability it is behind one of the other two doors is 2/3. By revealing a goat behind one unchosen door, the host transfers the entire 2/3 probability to the remaining unopened door, making switching advantageous.

Tree diagrams provide a visual method to distinguish unconditional probabilities from conditional ones by representing sequential events and their probabilities as branches. For the two-dice example above, a tree diagram begins with the 6 possible outcomes for the first die (each with probability 1/6), branching to the second die's outcomes (each 1/6), yielding 36 outcomes. Conditioning on a sum of 7 restricts the relevant paths to the 6 pairs that sum to 7, each now with equal conditional probability 1/6, allowing computation of further conditional events like at least one 1 (2 paths out of 6). This branching highlights how the full joint space narrows under conditioning.
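
The Monty Hall result is easy to verify by simulation: estimate the probability of winning under each strategy by repeated play. A minimal Python sketch (the function name play is arbitrary):

```python
import random

def play(switch, rng=random):
    """One round of the Monty Hall game; returns True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    choice = rng.choice(doors)
    # Host opens a door that is neither the contestant's pick nor the car
    opened = rng.choice([d for d in doors if d != choice and d != car])
    if switch:
        choice = next(d for d in doors if d != choice and d != opened)
    return choice == car

random.seed(1)
n = 100_000
print(sum(play(switch=True) for _ in range(n)) / n)    # close to 2/3
print(sum(play(switch=False) for _ in range(n)) / n)   # close to 1/3
```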

Inference Applications

In statistical inference, conditional probability is fundamental to hypothesis testing via the likelihood function, which quantifies the probability of observing the data given a specific hypothesis, denoted as P(\text{data} \mid \text{hypothesis}). This measure evaluates how compatible the data are with the hypothesis, allowing researchers to compare the relative support for alternative explanations without assigning probabilities to the hypotheses themselves. For example, in assessing whether a coin is fair, the likelihood compares the probability of observed toss outcomes under the hypothesis of equal probabilities versus alternatives like a biased coin.

A prominent application arises in medical diagnostics, where conditional probabilities distinguish test characteristics from diagnostic inferences. The probability P(\text{positive test} \mid \text{disease}), known as sensitivity, represents the likelihood of a positive result given the disease is present and is a fixed property of the test. In contrast, P(\text{disease} \mid \text{positive test}), the positive predictive value, is the probability of actual disease given a positive result, which depends on disease prevalence and test specificity. For a rare disease with 0.1% prevalence, 99% sensitivity, and 99% specificity, a positive test yields only about 9% probability of disease, as false positives dominate due to low prevalence, underscoring how conditional probabilities inform reliable inference beyond basic test performance.

Conditional probability also facilitates updating beliefs through sequential conditioning, where each new piece of evidence refines assessments by incorporating additional information. This process treats the posterior distribution from one stage as the prior for the next, enabling efficient evidence accumulation without recomputing full likelihoods from scratch. In applications like analyzing large datasets from psychological experiments, such as reaction times in decision-making tasks, sequential updates partition the data into batches for real-time analysis, separating effects like decision speed and caution while maintaining conceptual coherence.

In frequentist inference, conditional probability underpins inferential procedures by computing probabilities conditional on fixed parameter values, with the observed data serving as the basis for estimating unknowns and controlling error rates. This conditioning treats parameters as known under the hypothesis, generating p-values and confidence intervals that reflect long-run frequencies, such as the probability of data as extreme as observed under the null. Thus, inference conditions on the data to quantify uncertainty while adhering to the paradigm's emphasis on repeatable sampling properties.
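
The medical-testing numbers above can be reproduced directly from Bayes' theorem; the following Python sketch computes the positive predictive value from prevalence, sensitivity, and specificity (the function name is illustrative):

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' theorem, with the disease prevalence as prior."""
    p_pos_given_disease = sensitivity
    p_pos_given_healthy = 1 - specificity
    # Law of total probability for the marginal probability of a positive test
    p_pos = p_pos_given_disease * prevalence + p_pos_given_healthy * (1 - prevalence)
    return p_pos_given_disease * prevalence / p_pos

# Numbers from the example in the text: 0.1% prevalence, 99% sensitivity, 99% specificity
print(positive_predictive_value(0.001, 0.99, 0.99))  # about 0.09, i.e. roughly 9%
```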

Connections

Independence

In probability theory, two events A and B in a probability space are defined to be statistically independent if the conditional probability of A given B equals the unconditional probability of A, that is, P(A \mid B) = P(A), provided P(B) > 0. This condition holds symmetrically for P(B \mid A) = P(B). Equivalently, independence is characterized by the joint probability satisfying P(A \cap B) = P(A) P(B). This equivalence follows directly from the definition of conditional probability, P(A \mid B) = \frac{P(A \cap B)}{P(B)}, which implies the product form when the conditional equals the marginal.

For random variables, independence extends the event-based definition: two discrete random variables X and Y are independent if the conditional probability mass function satisfies P(X = x \mid Y = y) = P(X = x) for all x and y such that P(Y = y) > 0. This ensures that the distribution of X remains unchanged regardless of the observed value of Y. The definition generalizes to continuous random variables via probability density functions, where the conditional density f_{X \mid Y}(x \mid y) = f_X(x) for y in the support of Y.

When considering multiple events or random variables, a distinction arises between pairwise independence and mutual independence. Pairwise independence requires that every pair satisfies the independence condition individually, such as P(A_i \cap A_j) = P(A_i) P(A_j) for all i \neq j. Mutual independence, however, demands that the independence holds for every finite subset, including the full collection; for three events A, B, and C, this includes the pairwise conditions plus P(A \cap B \cap C) = P(A) P(B) P(C). Mutual independence implies pairwise independence, but the converse does not hold, as pairwise conditions alone may fail to capture higher-order dependencies. The same distinctions apply to collections of random variables.

A key implication of independence is the simplification of joint distributions: for mutually independent random variables X_1, \dots, X_n, the joint probability mass or density function factors as the product of the marginals, p(x_1, \dots, x_n) = \prod_{i=1}^n p(x_i) (or f(x_1, \dots, x_n) = \prod_{i=1}^n f(x_i) for continuous cases). This greatly reduces complexity in modeling joint behaviors, as expectations, variances, and other moments can often be computed separately and combined without cross-terms. For merely pairwise independent variables, the joint does not necessarily factor fully, limiting such simplifications to pairs.
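
A standard way to see that pairwise independence does not imply mutual independence is the two-coin XOR construction; the short Python check below enumerates the four equally likely outcomes (a sketch for illustration only):

```python
from itertools import product

# Two fair coin flips and their XOR: the classic example of pairwise
# independence without mutual independence.
outcomes = [(x1, x2, x1 ^ x2) for x1, x2 in product([0, 1], repeat=2)]
P = 1 / 4  # each of the four underlying outcomes is equally likely

def prob(pred):
    """Probability of the event described by the predicate pred."""
    return sum(P for w in outcomes if pred(w))

A = lambda w: w[0] == 1   # first flip is heads
B = lambda w: w[1] == 1   # second flip is heads
C = lambda w: w[2] == 1   # XOR of the flips is 1

# Pairwise independence holds: each pair factors into the product of marginals
print(prob(lambda w: A(w) and C(w)), prob(A) * prob(C))                      # 0.25 and 0.25
# Mutual independence fails: the triple intersection does not match the triple product
print(prob(lambda w: A(w) and B(w) and C(w)), prob(A) * prob(B) * prob(C))   # 0.0 and 0.125
```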

Bayes' Theorem

Bayes' theorem is a fundamental result of probability theory, enabling the inversion of conditional probabilities to compute the probability of one event given another by relating it to the reverse conditional and marginal probabilities. This theorem facilitates updating beliefs or probabilities based on new evidence, making it essential in fields requiring reasoning under uncertainty. The theorem is stated as
P(A \mid B) = \frac{P(B \mid A) P(A)}{P(B)},
where the denominator P(B) is the marginal probability of B, often computed via the law of total probability as P(B) = \sum_i P(B \mid A_i) P(A_i) over a partition of mutually exclusive and exhaustive events A_i. Named after the English mathematician Thomas Bayes, the theorem appeared in his posthumously published essay "An Essay Towards Solving a Problem in the Doctrine of Chances" in 1763. French mathematician Pierre-Simon Laplace independently rediscovered and formalized it in a more general version in his 1812 work Théorie Analytique des Probabilités, expanding its applicability to continuous cases and statistical inference.
In Bayesian inference, Bayes' theorem underpins the updating process, where P(A) represents the prior probability of the hypothesis A before observing B, P(B \mid A) is the likelihood of the evidence given the hypothesis, and P(A \mid B) is the posterior probability reflecting the updated belief after incorporating the evidence. This framework allows for systematic incorporation of prior knowledge with observed data to refine probabilistic assessments. For continuous random variables, the theorem adapts to probability density functions, expressed proportionally as
f(\theta \mid x) \propto f(x \mid \theta) \pi(\theta),
where \pi(\theta) is the prior density of the parameter \theta, f(x \mid \theta) the likelihood of the data x given \theta, and f(\theta \mid x) the posterior density; the normalizing constant is the marginal density f(x) = \int f(x \mid \theta) \pi(\theta) \, d\theta. This form is fundamental to Bayesian inference with continuous distributions.
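
For the continuous form, a conjugate beta-binomial update gives a closed-form posterior and is easy to check numerically against the proportionality f(\theta \mid x) \propto f(x \mid \theta)\pi(\theta); the prior parameters and data below are hypothetical.

```python
import numpy as np

# Hypothetical data: 7 heads in 10 tosses, with a Beta(2, 2) prior on the heads probability θ
heads, tosses = 7, 10
a_prior, b_prior = 2.0, 2.0

# Conjugacy: the posterior is Beta(a_prior + heads, b_prior + tails)
a_post, b_post = a_prior + heads, b_prior + (tosses - heads)
print(a_post / (a_post + b_post))   # exact posterior mean of θ: 9/14 ≈ 0.643

# Numerical check on a grid: f(θ|x) ∝ f(x|θ) π(θ), normalized by the marginal of x
theta = np.linspace(0.001, 0.999, 9999)
dtheta = theta[1] - theta[0]
prior = theta**(a_prior - 1) * (1 - theta)**(b_prior - 1)
likelihood = theta**heads * (1 - theta)**(tosses - heads)
unnorm = likelihood * prior
posterior = unnorm / (unnorm.sum() * dtheta)        # divide by the (discretized) marginal
print((theta * posterior).sum() * dtheta)           # grid posterior mean, also ≈ 0.643
```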

Pitfalls

Inverse Probability Errors

One common error in probabilistic reasoning is the inverse probability fallacy, where individuals mistakenly equate the conditional probability P(A|B) with its inverse P(B|A), transposing the roles of event and condition without accounting for their differing magnitudes. For instance, observing wet streets might lead someone to assume the probability of rain given wet streets, P(\text{rain}|\text{wet streets}), is approximately equal to the probability of wet streets given rain, P(\text{wet streets}|\text{rain}), ignoring how rare rain might be relative to other causes of wetness like sprinklers. This confusion arises because intuitive judgments often overlook the directional dependency in conditional probabilities, leading to flawed inferences about causation or likelihood.

A prominent real-world manifestation of this fallacy is the prosecutor's fallacy, frequently encountered in legal contexts where forensic evidence is misinterpreted. In this error, the probability of the evidence given innocence, P(\text{evidence}|\text{innocent}), is wrongly taken as the probability of innocence given the evidence, P(\text{innocent}|\text{evidence}). For example, if DNA evidence matches a suspect with a probability of 1 in 1 million under the assumption of innocence, prosecutors might erroneously claim this implies a 1 in 1 million chance of the suspect's innocence, neglecting the base rate of guilt in the relevant population. This misstep has contributed to miscarriages of justice, as seen in cases like the Sally Clark trial, where the rarity of multiple cot deaths given innocence was flipped to suggest overwhelming guilt.

Psychologically, the inverse probability fallacy often stems from base rate neglect, particularly when dealing with small probabilities, where people underweight the prior prevalence of events in favor of salient but directionally reversed evidence. This bias manifests as an overreliance on the likelihood of observed data under a hypothesis while disregarding how infrequently the hypothesis itself occurs, exacerbating errors in low-base-rate scenarios like rare diseases or crimes. Studies show that even trained individuals, such as medical professionals, commit this error when interpreting diagnostic tests, confusing sensitivity (P(\text{positive}|\text{disease})) with positive predictive value (P(\text{disease}|\text{positive})) in populations with low prevalence.

Historically, early misapplications of inverse probability emerged in 18th-century disputes over inferring causes from effects, as probability theory transitioned from games of chance to scientific inference. Pioneered by Thomas Bayes in his 1763 essay and expanded by Pierre-Simon Laplace in the 1770s, these methods aimed to compute probabilities of unobserved causes given observed effects but sparked debates on their validity, with critics questioning assumptions in applications to natural phenomena such as the sex ratios at birth studied earlier by Arbuthnot. Laplace's rule of succession, for instance, applied inverse reasoning to estimate future events like sunrises but was later contested for overestimating probabilities by inadequately handling priors, fueling philosophical rifts that persisted into the 20th century. These early controversies highlighted the risks of inverting conditionals without rigorous Bayesian updating, as later formalized in Bayesian statistics.
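
The DNA example can be made quantitative with Bayes' theorem: the probability of innocence given a match depends on how many people could plausibly have left the sample, not only on the 1-in-a-million match probability. A hedged Python sketch with a hypothetical suspect-population size and simplified assumptions:

```python
def p_innocent_given_match(match_prob_if_innocent, population_size):
    """P(innocent | match), assuming exactly one true source in the population,
    every member equally likely a priori to be that source, and an innocent
    person matching by chance with the stated probability."""
    prior_guilty = 1 / population_size
    prior_innocent = 1 - prior_guilty
    # Law of total probability: the true source always matches, others match by chance
    p_match = 1.0 * prior_guilty + match_prob_if_innocent * prior_innocent
    return match_prob_if_innocent * prior_innocent / p_match

# A 1-in-a-million match probability in a hypothetical pool of 10 million possible sources
print(p_innocent_given_match(1e-6, 10_000_000))   # ≈ 0.91, far from the "1 in a million" reading
```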

Marginal-Conditional Confusions

One common error in probabilistic reasoning involves assuming that a conditional probability P(A \mid B) approximates the marginal probability P(A), thereby overlooking the influence of the conditioning event B on the outcome A. This arises when analysts fail to adjust for dependencies, treating the probability of an event as insensitive to the new information provided by the conditioner.

In the context of election polling, this confusion manifests when interpreting poll leads. For instance, a candidate leading by 4 points in a national poll might lead observers to assume the probability of winning P(\text{win} \mid \text{lead}) is roughly equal to the unconditional probability P(\text{win}), often taken as near 50% in a competitive race, without accounting for the margin's implications under a probabilistic forecast. In reality, such a lead can translate to an 84% win probability in a state-level model, as the conditioning on the observed margin incorporates sampling variability and historical patterns. This oversight ignores how the specific poll result shifts the posterior distribution of voter support, leading to underestimation of the lead's evidentiary weight.

A classic illustration of this marginal-conditional discrepancy is Simpson's paradox, where trends observed in aggregated marginal probabilities reverse or vanish when examined through conditional probabilities stratified by a confounding variable. For example, in a medical study, a treatment may show no overall benefit in marginal success rates (e.g., 50% for both treated and control groups), yet prove effective within subgroups conditioned on gender (e.g., higher success for treated men and women separately). This paradox occurs because the marginal association averages over uneven subgroup sizes or distributions, masking the true conditional relationships. Seminal work by Simpson highlighted this issue in contingency tables, emphasizing how joint distributions drive the inconsistency between aggregated and stratified analyses.

The root cause of these confusions lies in neglecting the joint distribution P(A, B), from which conditional probabilities are derived via P(A \mid B) = \frac{P(A, B)}{P(B)}. Without properly integrating over the dependencies encoded in the joint, marginal summaries provide a misleading proxy for conditioned scenarios. Such errors are particularly prevalent when aggregation obscures subgroup heterogeneities, as noted in analyses of Simpson's paradox. Note that P(A \mid B) = P(A) holds precisely under independence, but this special case does not apply in dependent settings where the conditioning information matters.
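
A tiny numeric construction reproduces the Simpson's-paradox reversal: the treatment does better within each subgroup yet worse in the aggregate once subgroup sizes are uneven. The counts below are hypothetical, chosen only to exhibit the effect.

```python
# Hypothetical counts: (successes, total) per treatment arm, split by subgroup
data = {
    "treated": {"men": (81, 87),   "women": (192, 263)},
    "control": {"men": (234, 270), "women": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Conditional (within-subgroup) success rates: treated beats control in each subgroup
for group in ("men", "women"):
    t = rate(*data["treated"][group])
    c = rate(*data["control"][group])
    print(group, round(t, 3), round(c, 3))

# Marginal (aggregated) rates reverse the comparison because subgroup sizes are uneven
t_all = rate(*(sum(x) for x in zip(*data["treated"].values())))
c_all = rate(*(sum(x) for x in zip(*data["control"].values())))
print("overall", round(t_all, 3), round(c_all, 3))
```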

Prior Weighting Issues

One common pitfall in conditional probability arises from the over- or under-weighting of initial prior probabilities, particularly in Bayesian contexts where priors represent base rates or background knowledge. This error, known as base rate neglect, occurs when decision-makers disproportionately emphasize new evidence or case-specific details while undervaluing the prior probability of an event, leading to distorted conditional probabilities. A classic illustration is in medical diagnosis: suppose a rare disease affects 0.1% of the population (the base rate or prior), and a test is 99% accurate for both positive and negative results; despite the high accuracy, the probability of having the disease given a positive test result remains low—around 9%—due to the scarcity of true positives relative to false positives from the large healthy population. However, people often intuitively estimate this conditional probability much higher, ignoring the low prior.

Lindley's paradox exemplifies the challenges of prior weighting in hypothesis testing, revealing a tension between frequentist p-values and Bayesian posterior odds. Named after statistician Dennis Lindley, the paradox arises when testing a point null hypothesis with a diffuse prior on the alternative; for large sample sizes, even modest evidence can yield a statistically significant p-value (e.g., p < 0.05), prompting rejection of the null in frequentist terms, yet the Bayesian posterior probability of the null may remain high (e.g., over 0.5) because the broad prior dilutes the impact of the data on the alternative hypothesis. This discrepancy highlights how expansive priors can overweight the null relative to the likelihood, complicating the integration of priors in conditional inference.

In email spam filtering, Bayesian classifiers such as naive Bayes depend heavily on priors representing the expected proportion of spam in incoming messages; an incorrect prior, such as underestimating spam prevalence in a high-volume inbox, can skew conditional probabilities and inflate false positives, where legitimate emails are erroneously flagged. For example, if the true prior probability of spam is 40% but the model assumes 20%, the posterior probability of spam given neutral word evidence rises unduly, misclassifying ham as spam and eroding user trust. Empirical evaluations of such filters show that prior misspecification can increase false positive rates by factors of 2–5 compared to tuned priors, emphasizing the need for domain-specific base rate calibration.

To address prior weighting issues, sensitivity analysis serves as a key remedy, systematically varying prior distributions (e.g., from informative to diffuse) and examining their effects on posterior inferences to gauge robustness. This technique, often implemented via simulations, reveals whether results hinge on particular prior assumptions; for instance, if posteriors shift substantially across a range of reasonable priors, it signals the need for more data or elicitation of expert knowledge to refine them. Guidelines recommend conducting such analyses routinely in Bayesian modeling to enhance the reliability of conditional probability estimates.
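
The spam example can be made concrete: with a fixed likelihood ratio for the observed words, the posterior probability of spam moves substantially as the assumed prior shifts, which is the essence of a prior sensitivity check. A sketch with hypothetical numbers:

```python
def posterior_spam(prior_spam, likelihood_ratio):
    """P(spam | words) from a prior and the likelihood ratio P(words|spam) / P(words|ham)."""
    odds_prior = prior_spam / (1 - prior_spam)
    odds_post = odds_prior * likelihood_ratio      # Bayes' theorem in odds form
    return odds_post / (1 + odds_post)

# Same (hypothetical) evidence, two different assumed base rates of spam
for prior in (0.20, 0.40):
    print(prior, round(posterior_spam(prior, likelihood_ratio=3.0), 3))
# 0.20 -> 0.429, 0.40 -> 0.667: the assumed prior alone can change the classification outcome
```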

Derivations

Axiomatic Basis

The axiomatic foundation of probability theory, as established by Andrey Kolmogorov in 1933, provides a measure-theoretic framework where conditional probability emerges as a derived concept from the joint probability measure. Specifically, for a probability space (\Omega, \mathcal{F}, P) satisfying the Kolmogorov axioms—non-negativity P(E) \geq 0 for all E \in \mathcal{F}, normalization P(\Omega) = 1, and countable additivity P\left(\bigcup_{n=1}^\infty E_n\right) = \sum_{n=1}^\infty P(E_n) for disjoint E_n \in \mathcal{F}—conditional probability P(A \mid B) for A, B \in \mathcal{F} with P(B) > 0 is defined as the ratio P(A \cap B)/P(B), effectively extending the axioms to yield a family of probability measures P(\cdot \mid B) on the restricted \sigma-algebra \{A \cap B : A \in \mathcal{F}\}.

This family of measures inherits the Kolmogorov axioms in a conditioned setting: for fixed B with P(B) > 0, P(A \mid B) \geq 0 for all A \in \mathcal{F}, P(\Omega \mid B) = 1 (or equivalently P(B \mid B) = 1), and countable additivity holds such that if A_n \in \mathcal{F} are pairwise disjoint, then P\left(\bigcup_{n=1}^\infty A_n \mid B\right) = \sum_{n=1}^\infty P(A_n \mid B). These properties ensure that each P(\cdot \mid B) behaves as a valid probability measure on the space conditioned by B, preserving the foundational structure while allowing analysis of events relative to the information that B has occurred.

An alternative axiomatization treats conditional probability as a primitive notion rather than a derived ratio, as proposed by Karl Popper in 1938. In this approach, the basic object is a two-place function P(A \mid B) satisfying axioms such as non-negativity P(A \mid B) \geq 0, normalization P(B \mid B) = 1, additivity for disjoint events, and additional relational properties like P((A \cap B) \mid C) = P(A \mid (B \cap C)) \cdot P(B \mid C) for appropriate events, from which unconditional probabilities P(A) = P(A \mid \Omega) and joint probabilities can be derived. This primitive treatment addresses limitations in the Kolmogorov framework, such as handling conditioning on events of zero probability, by prioritizing conditionals as the core primitive.

For both extensions, consistency requirements are essential to ensure coherence across the family of measures. In the Kolmogorov-derived case, consistency demands that the conditional measures align with the underlying measure, satisfying relations like P(A \mid B) \cdot P(B) = P(A \cap B) for all applicable events, preventing contradictions in probabilistic inferences. In primitive axiomatizations like Popper's, consistency axioms include monotonicity (e.g., if A \subseteq C, then P(A \mid B) \leq P(C \mid B) for B fixed) and conditions ensuring that derived joints reproduce the conditionals without contradiction, such as the requirement that P(A \mid B) = P(A \mid C) whenever B and C convey the same relevant information. These requirements guarantee that the framework supports rigorous derivations while maintaining interpretability in probabilistic reasoning.

Formal Proofs

The multiplication rule, also known as the product rule, is a direct consequence of the definition of conditional probability. By definition, the conditional probability of event A given event B (with P(B) > 0) is given by P(A \mid B) = \frac{P(A \cap B)}{P(B)}. Rearranging this equation yields the multiplication rule: P(A \cap B) = P(A \mid B) P(B). This equivalence holds because the conditional probability normalizes the joint probability by the marginal probability of the conditioning event, preserving the measure-theoretic structure of probability spaces.

The chain rule generalizes the multiplication rule to the joint probability of multiple events. For events A_1, A_2, \dots, A_n (each with positive probability where conditioned), the chain rule states that P(A_1 \cap A_2 \cap \cdots \cap A_n) = P(A_1) P(A_2 \mid A_1) P(A_3 \mid A_1 \cap A_2) \cdots P(A_n \mid A_1 \cap \cdots \cap A_{n-1}). To prove this, apply the multiplication rule iteratively. For two events, it reduces directly to P(A_1 \cap A_2) = P(A_1) P(A_2 \mid A_1). For three events, substitute the two-event case into the multiplication rule: P((A_1 \cap A_2) \cap A_3) = P(A_1 \cap A_2) P(A_3 \mid A_1 \cap A_2) = P(A_1) P(A_2 \mid A_1) P(A_3 \mid A_1 \cap A_2). Extending this process to n events by repeated substitution confirms the general form, relying on the additivity and non-negativity axioms of probability.

Bayes' theorem provides a way to invert conditional probabilities and follows immediately from the multiplication rule applied symmetrically. Start with the joint probability expressed in two ways: P(A \cap B) = P(A) P(B \mid A) = P(B) P(A \mid B), where P(A) > 0 and P(B) > 0. Equating these expressions and solving for P(A \mid B) gives P(A \mid B) = \frac{P(B \mid A) P(A)}{P(B)}. This derivation highlights the symmetry in the definition of conditional probability, allowing computation of posterior probabilities from likelihoods and priors in probabilistic models.

The law of total probability expresses the unconditional probability of an event as a weighted sum over a partition of the sample space. Let \{A_i\}_{i=1}^n be a partition of the sample space, meaning the A_i are mutually exclusive and their union is the entire space. Then, for any event B, P(B) = \sum_{i=1}^n P(B \mid A_i) P(A_i), assuming P(A_i) > 0 for all i. This follows from the additivity axiom: B = \bigcup_{i=1}^n (B \cap A_i), where the B \cap A_i are disjoint, so P(B) = \sum_{i=1}^n P(B \cap A_i). Applying the multiplication rule to each term yields the desired form, providing a foundational tool for marginalizing over conditioning events.
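
These identities are easy to verify exactly on a small finite space; the sketch below checks the chain rule for three events and the law of total probability over a two-set partition (the space and events are hypothetical, and exact Fractions avoid rounding).

```python
from itertools import product
from fractions import Fraction

# Finite space: three fair coin flips, each outcome with probability 1/8
omega = list(product([0, 1], repeat=3))
P = {w: Fraction(1, 8) for w in omega}

def prob(event):
    return sum(P[w] for w in event)

def cond(a, b):
    """Conditional probability P(a | b) = P(a ∩ b) / P(b)."""
    return prob(a & b) / prob(b)

A1 = {w for w in omega if w[0] == 1}     # first flip heads
A2 = {w for w in omega if sum(w) >= 2}   # at least two heads
A3 = {w for w in omega if w[2] == 1}     # third flip heads

# Chain rule: P(A1 ∩ A2 ∩ A3) = P(A1) P(A2 | A1) P(A3 | A1 ∩ A2)
assert prob(A1 & A2 & A3) == prob(A1) * cond(A2, A1) * cond(A3, A1 & A2)

# Law of total probability over the partition {A1, A1 complement}
B = A2
partition = [A1, set(omega) - A1]
assert prob(B) == sum(cond(B, Ai) * prob(Ai) for Ai in partition)
print(prob(A1 & A2 & A3), prob(B))   # 1/4 and 1/2
```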
