Doomsday argument
The Doomsday argument is a probabilistic claim, originally formulated by astrophysicist Brandon Carter in 1983, positing that an observer's random position among all humans who will ever exist implies a high likelihood of human extinction in the coming centuries rather than over vast future timescales.[1][2] The argument employs self-sampling reasoning: assuming a scale-invariant prior over the possible total human population N and treating one's birth rank n (roughly 100 billion humans had been born by the late 20th century) as a uniform draw from 1 to N, the posterior probability density becomes P(N|n) ∝ 1/N² for N > n, or P(N|n) = n/N² after normalization, which assigns substantial probability mass to N values only modestly exceeding n.[1] This formulation, refined and popularized by philosopher John Leslie through Bayesian analysis and thought experiments such as the "shooting room," implies a 95% chance that N < 20n, forecasting doomsday (defined as the cessation of human reproduction) by around the year 2200 if population stabilizes near modern levels.[2][3] Subsequent variants, including J. Richard Gott's "delta-t" argument and extensions incorporating population growth rates, reinforce the core inference by treating one's fractional position in the total sequence as a uniform draw between 0 and 1, predicting median survival times on the order of the human history already elapsed.[4] The argument's defining characteristic is its reliance on first-principles observer selection effects: it derives pessimistic expectations directly from anthropic data without invoking external risks such as astrophysical threats or technological failures.[1] Despite its logical parsimony, it remains highly controversial, with detractors alleging flaws in the self-sampling assumption (e.g., neglect of reference class definitions or multiverse expansions) or in the prior distributions (e.g., favoring power-law tails over uniform bounds), though no consensus refutation has emerged among philosophers and cosmologists.[5][6] Proponents counter that such objections often presuppose optimistic futures incompatible with the observed n, and that empirical tests, such as humanity's persistence to 2025 without evident contradiction, do not falsify the prediction, which accommodates ongoing but finite growth.[7]
Historical Origins
Brandon Carter's Initial Formulation (1974)
Brandon Carter, a theoretical astrophysicist at the University of Cambridge, initially formulated the doomsday argument as part of his application of anthropic reasoning to cosmological and biological questions during a presentation at the Kraków symposium on "Confrontation of Cosmological Theories with Observational Data" in February 1973, with the proceedings published in 1974.[1] In his paper "Large Number Coincidences and the Anthropic Principle in Cosmology," Carter integrated the argument with the weak anthropic principle, which posits that the universe must permit the existence of observers like ourselves, and the Copernican principle, emphasizing that humans should not presume an atypical position in the sequence of all observers.[1] This framing highlighted observer selection effects, where the fact of our existence as latecomers, arriving after tens of billions of humans had already been born, constrains probabilistic inferences about the total human population N.[7] The core probabilistic reasoning assumes that an individual's birth rank n (our approximate position in the human lineage, numbering in the tens of billions) is randomly sampled from the uniform distribution over 1 to N, conditional on N \geq n.[1] Carter employed a prior distribution P(N) \propto 1/N for N \geq n, reflecting ignorance about scale in a manner consistent with scale-invariant reasoning in cosmology.[1] The likelihood P(n \mid N) = 1/N for N \geq n then yields a posterior P(N \mid n) \propto 1/N^2, normalized such that the cumulative probability P(N \leq Z \mid n) = (Z - n)/Z for Z > n.[1] This implies a high probability that N is not vastly larger than n; specifically, there is 95% confidence that N \leq 20n, i.e. P(N \leq 20n \mid n) = 19/20.[1] Under assumptions of modest future population growth or stabilization, this translates to human extinction occurring within a timeframe on the order of 10^4 years from the present, as the remaining human births would deplete without exceeding the bounded total.[8] Carter's approach thus served as an early illustration of how self-selection among observers biases expectations away from scenarios with extraordinarily long human histories, privileging empirical positioning over optimistic priors about indefinite survival.[1] This initial presentation laid the groundwork for later elaborations but remained tied to first-principles probabilistic updating under anthropic constraints, without invoking multiverse or infinite measures.[1]
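Carter's 95% bound can be checked with a small numerical experiment. The sketch below is illustrative only and not part of Carter's presentation; the variable names, the sample size, and the finite cutoff N_max (needed to make the 1/N prior proper) are assumptions. It draws candidate totals N from the scale-invariant prior, weights each draw by the likelihood 1/N of the observed birth rank, and recovers roughly 95% posterior mass below 20n.
```python
# Minimal Monte Carlo sketch of the self-sampling update (illustrative assumptions:
# variable names, sample size, and the finite cutoff N_max that makes the 1/N prior proper).
import numpy as np

rng = np.random.default_rng(0)
n = 1.0e11                 # observed birth rank (order of 10^11 births to date)
N_max = 1.0e6 * n          # finite upper cutoff for the otherwise improper 1/N prior

# Draw N from the prior p(N) ∝ 1/N on [n, N_max] via inverse-CDF (log-uniform) sampling.
u = rng.random(1_000_000)
N = n * (N_max / n) ** u

# Importance-weight each draw by the likelihood of the observed rank, p(n | N) = 1/N.
w = 1.0 / N
posterior_mass_below_20n = np.sum(w * (N <= 20 * n)) / np.sum(w)
print(posterior_mass_below_20n)   # ≈ 0.95
```
The recovered fraction is insensitive to the particular value of n and changes only marginally if N_max is raised further, since the resulting 1/N^2 posterior places negligible mass on very large totals.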
John Leslie's Elaboration and Popularization (1980s–1990s)
Philosopher John Leslie substantially expanded Brandon Carter's initial formulation of the doomsday argument during the late 1980s and early 1990s, transforming it from an esoteric probabilistic observation into a prominent tool for assessing human extinction risks. In works beginning in 1989 and in subsequent papers, Leslie emphasized the argument's reliance on self-locating uncertainty about one's position in the total sequence of human observers, positing that the low observed birth rank, approximately the 60-70 billionth human, indicates a modest total human population rather than the astronomically large one implied by indefinite survival.[9] This elaboration countered optimistic projections by conditioning probabilities on actual existence rather than hypothetical vast futures, aligning with a view that prioritizes observable data over unsubstantiated assumptions of perpetual growth. Leslie popularized the argument through accessible thought experiments, notably the urn analogy, in which an observer who draws a low-numbered ticket, without knowing whether the urn holds ten tickets or millions, rationally infers the smaller total; the situation mirrors humanity's early temporal position as evidence against scenarios with trillions more future humans.[10] He detailed this in his 1993 paper "Doom and Probabilities," defending it against critiques such as alleged selection biases by invoking Bayesian updating on empirical observer ranks, and argued that dismissing the inference requires rejecting standard inductive reasoning.[11] These analogies rendered the argument intuitive, shifting focus from abstract cosmology to practical implications for species longevity. Culminating in his 1996 book The End of the World: The Science and Ethics of Human Extinction, Leslie integrated the doomsday reasoning with analyses of anthropogenic threats, estimating a substantial probability, around one in three, of extinction by the third millennium unless risks are mitigated, and doing so without presupposing priors that favor eternal persistence.[12] He critiqued overreliance on technological salvation narratives, advocating instead for precautionary measures grounded in the argument's probabilistic caution, and linked it to ethical duties to future generations by highlighting how ignoring early-observer status underestimates extinction odds from events like nuclear conflict or environmental collapse. This work elevated the doomsday argument in philosophical discourse on anthropic principles and existential hazards, influencing subsequent debates on human survival probabilities.[13]
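Leslie's urn analogy reduces to a two-hypothesis Bayesian update. The sketch below uses illustrative numbers (ticket 7, urns of 10 and 1,000,000 tickets, equal priors) rather than Leslie's exact figures, and shows how a low-numbered ticket shifts nearly all posterior weight onto the small urn, just as a low birth rank is taken to favor a small total human population.
```python
# Sketch of the urn analogy: Bayesian update after drawing a low-numbered ticket
# from an urn known to hold either 10 or 1,000,000 consecutively numbered tickets.
# The ticket number, urn sizes, and equal priors are illustrative assumptions.
def posterior_small_urn(ticket, small=10, large=1_000_000, prior_small=0.5):
    """Posterior probability that the urn is the small one, given the drawn ticket number."""
    like_small = 1.0 / small if ticket <= small else 0.0   # P(ticket | small urn)
    like_large = 1.0 / large if ticket <= large else 0.0   # P(ticket | large urn)
    numerator = prior_small * like_small
    return numerator / (numerator + (1.0 - prior_small) * like_large)

print(posterior_small_urn(7))   # ≈ 0.99999: the small urn is overwhelmingly favored
```
The same arithmetic drives the doomsday update when the tickets are read as birth ranks and the urns as candidate total populations.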
J. Richard Gott's Independent Development (1993)
In 1993, astrophysicist J. Richard Gott III published "Implications of the Copernican Principle for Our Future Prospects" in Nature, independently deriving a probabilistic argument akin to the doomsday argument by assuming humans occupy a typical, non-privileged position within the total span of human existence.[14] Gott framed this under the Copernican principle, positing that observers should expect to find themselves neither unusually early nor late in any phenomenon's history, without relying on specific priors about its total length.[14] He illustrated the approach with temporal examples, such as a hypothetical random visit to the New York World's Fair in 1964 shortly after its opening, where the short elapsed time since inception implied a high likelihood of a brief remaining duration, consistent with the fair's actual demolition the following year.[15] Gott's formulation treats the observer's position as uniformly distributed over the total duration T, and the resulting posterior for T given the elapsed time t reflects a "vague" prior uniform over logarithmic scales of duration, effectively P(T) \propto 1/T.[16] The likelihood P(t|T) = 1/T for T > t then produces P(T|t) \propto 1/T^2. Integrating this posterior, the probability that the eventual total satisfies N \leq 20n, where n denotes the elapsed amount (whether births or years) and N the corresponding total, is 95%, i.e. P(N \leq 20n) = 19/20.[14] For a 95% confidence interval excluding the outermost 2.5% tails of the uniform fraction f = t/T, the remaining duration falls between t/39 and 39t.[16] Applied to humanity, Gott adapted this to cumulative human births as the measure of "existence," estimating around 50–60 billion humans born by 1993 and treating the current observer's birth rank as randomly sampled from the total N.[4] This yields a 95% probability that fewer than about 19–39 times that number remain unborn, implying human extinction within roughly 8,000 years assuming sustained birth rates of approximately 100 million per year.[4] Gott emphasized this as a first-principles Bayesian update, avoiding strong assumptions about longevity by relying on the self-sampling uniformity and the vague logarithmic prior to derive conservative bounds on future prospects.[14]
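Gott's interval follows directly from treating the observed fraction f = t/T as uniform on (0, 1). The sketch below is a minimal illustration; the function name and the 200,000-year input for elapsed human history are assumptions adopted for the example, not figures asserted above.
```python
# Sketch of Gott's delta-t interval: if the observer's position f = t/T is uniform
# on (0, 1), excluding the outer tails of f bounds the remaining duration T - t.
def gott_interval(elapsed, confidence=0.95):
    tail = (1.0 - confidence) / 2.0          # 2.5% excluded in each tail for 95%
    lower = elapsed * tail / (1.0 - tail)    # we are nearly at the end: future ≈ t/39
    upper = elapsed * (1.0 - tail) / tail    # we are nearly at the start: future ≈ 39t
    return lower, upper

# Applied, for illustration, to roughly 200,000 years of elapsed human history:
print(gott_interval(200_000))   # ≈ (5,128 years, 7,800,000 years)
```
Tightening or loosening the confidence level simply rescales the factors of 39 that bound the future duration relative to the past.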
Core Logical Framework
Basic Probabilistic Reasoning
The basic probabilistic reasoning of the Doomsday argument treats an individual's birth rank n among all humans who will ever exist as a random sample drawn uniformly from the integers 1 to N, where N denotes the unknown total number of humans.[1] Observing n, empirically estimated at approximately 117 billion on the basis of demographic reconstructions through 2022, serves as data that updates beliefs about N toward smaller values, since a large N would make such an "early" rank unlikely under the sampling assumption.[17] In Bayesian terms, the likelihood P(n|N) equals 1/N for N ≥ n (and 0 otherwise), reflecting the uniform sampling.[1] A scale-invariant prior P(N) ∝ 1/N for N ≥ n, chosen for its lack of arbitrary scale preference in the absence of other information, yields a posterior P(N|n) ∝ 1/N^2.[1] In the continuous approximation, normalization gives P(N|n) = n/N^2 for N ≥ n, and the cumulative distribution follows as P(N ≤ Z | n) = 1 - n/Z for Z ≥ n.[1] This posterior implies high probability for N only modestly exceeding n: for instance, P(N ≤ 20n | n) = 19/20 = 0.95.[1] With n ≈ 1.17 × 10^11, it follows that N < 2.34 × 10^12 with 95% posterior probability, constraining future births to roughly 2.2 trillion at most despite the large cumulative total already reached.[17] The logic incorporates an observer selection effect: birth ranks beyond N are impossible, so hypotheses with N < n are eliminated outright, while the 1/N likelihood penalizes hypotheses under which the observed rank would be an improbably early fraction of a vast total, shifting posterior weight toward bounded totals.[1] Empirical demographic data, with annual births of roughly 140 million in 2015–2020 projected to peak near 141 million around 2040–2045 before declining, render assumptions of indefinite exponential growth or infinite N empirically unmotivated and inconsistent with observed trends toward population stabilization.[18] Counterarguments positing ad hoc expansions, such as interstellar colonization yielding unbounded numbers of humans, lack causal mechanisms grounded in current technological or biological constraints and fail to override the update from the sampled n.[1]
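The arithmetic behind these figures is a direct application of the 95% bound. The sketch below is illustrative; the assumed flat rate of 130 million births per year is a round figure for the example, not a demographic forecast.
```python
# Illustrative arithmetic for the bound quoted above (assumed values noted in comments).
n = 1.17e11                        # cumulative births to date (~117 billion)

N_95 = 20 * n                      # 95% posterior bound: P(N <= 20n | n) = 0.95
future_births_95 = N_95 - n        # births still to come under that bound

births_per_year = 1.3e8            # assumed roughly stable rate (illustrative, not a forecast)
years_remaining_95 = future_births_95 / births_per_year
print(f"{N_95:.2e} total, {future_births_95:.2e} future births, "
      f"~{years_remaining_95:,.0f} years at the assumed rate")
# ≈ 2.34e+12 total, 2.22e+12 future births, ~17,100 years
```
Lower assumed birth rates stretch the implied horizon proportionally, which is why the argument constrains total future births far more directly than it constrains calendar dates.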
Key Assumptions: Random Sampling and Observer Selection
The Doomsday argument hinges on the self-sampling assumption (SSA), which holds that a given observer should reason as if they were a randomly selected member of the aggregate set of all observers within the pertinent reference class.[19][20] In its canonical formulation, this entails viewing one's birth order, estimated at approximately the 100 billionth human, as drawn uniformly at random from the sequence running from the first to the Nth human, where N denotes the ultimate total human population.[21][22] This random sampling premise presupposes equiprobability across individuals (or, in some variants, observer-moments) without bias toward temporal position, thereby enabling Bayesian updating on the evidence of one's ordinal rank to constrain plausible values of N.[23] Critics of alternative anthropic principles, such as the self-indication assumption, argue that SSA aligns more closely with causal realism by conditioning solely on realized observers rather than potential ones, avoiding inflation of probabilities for unobserved worlds.[20] Complementing this is the observer selection effect, whereby the very act of self-observation restricts the evidential scenarios to those permitting the observer's existence and capacity to deliberate on such matters.[24] In the Doomsday context, this effect underscores that empirical data, such as the observed human population to date, condition probabilistic inferences, privileging hypotheses under which an early-to-mid sequence observer like oneself arises with high likelihood, as opposed to those positing vast totals under which such a position would be anomalously improbable.[21] This selection mechanism counters dismissals invoking unverified multiplicities (e.g., simulated realities or infinite multiverses), which might dilute the sampling uniformity by positing countless non-actual duplicates; instead, it enforces a parsimonious focus on the concrete causal chain yielding detectable evidence.[25] The empirical grounding is elementary Bayesian updating: the likelihood P(n|N) is 1/N under uniform sampling, and it updates a prior distribution over N without presupposing extended futures or exotic physics.[1] Thus, the argument's validity pivots on how well these assumptions align with probabilistic realism, in which observer-centric evidence rigorously narrows existential timelines absent ad hoc expansions of the sample space.[24]
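Both features of the SSA update, the outright exclusion of totals smaller than one's rank and the 1/N penalty on very large totals, can be seen in a discrete update over a handful of candidate values of N. The candidate totals, equal priors, and variable names below are illustrative assumptions.
```python
# Sketch of an SSA-style update over a few hypothetical total populations N.
# Likelihood of observing one's rank n under SSA: 1/N if n <= N, else 0.
n = 1.0e11                                     # observed birth rank (illustrative)
candidates = [5.0e10, 2.0e11, 2.0e12, 2.0e14]  # hypothetical totals N (illustrative)
priors = [0.25, 0.25, 0.25, 0.25]              # equal prior weight on each hypothesis

likelihoods = [(1.0 / N if n <= N else 0.0) for N in candidates]
unnormalized = [p * L for p, L in zip(priors, likelihoods)]
posteriors = [u / sum(unnormalized) for u in unnormalized]

for N, post in zip(candidates, posteriors):
    print(f"N = {N:.0e}: posterior ≈ {post:.4f}")
# N = 5e+10 is ruled out (a rank of 1e11 cannot occur within it), and the
# remaining mass concentrates on the smallest totals compatible with the rank.
```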
Role of Reference Classes in the Argument
The reference class in the Doomsday argument is the total population of observers, ordinarily defined as all humans who will ever exist, from which one's own existence is treated as a random draw ordered by birth rank. This class forms the foundation of the probabilistic inference, as the observer's position n within it updates beliefs about the overall size N, yielding a posterior distribution concentrated around values of N comparable to n rather than vastly exceeding it. Brandon Carter's 1983 formulation specified the class in terms of human observers capable of self-referential temporal awareness, rooted in demographic patterns of births rather than speculative extensions to non-human or hypothetical entities.[1] John Leslie reinforced this by insisting on a reference class aligned with causal and empirical continuity, such as the sequence of all Homo sapiens births, to preserve the argument's predictive power regarding doomsday; he cautioned against classes either too narrow (e.g., limited to modern eras) or excessively broad (e.g., encompassing undefined posthumans), which could arbitrarily weaken the sampling assumption. Cumulative human births, estimated at 117 billion as of 2022, place contemporary individuals around the 95th percentile under uniform priors, empirically favoring classes at the human scale over more abstract ones that ignore observed population dynamics.[26][17] A key debate concerns the granularity of the reference class, pitting discrete units such as individual human lives (tied to birth events) against continuous observer-moments (each instance of subjective experience). The birth-based class, central to Carter's and Leslie's versions, implies a finite total N on the order of 10–20 times current cumulative births if one's rank is to be typical, consistent with historical growth data showing exponential but decelerating rates since the Industrial Revolution. Observer-moment classes, by contrast, could permit longer futures if future observers accrue more moments per life (e.g., through longevity or enhanced cognition), yet this hinges on unverified assumptions about experiential rates, which empirical neuroscience pegs as roughly constant for humans, about 3 billion seconds of consciousness per lifetime, without causal evidence for drastic future increases that would dilute the doomsday signal.[27][17]
Formal Variants and Extensions
Self-Sampling Assumption (SSA) Approach
The Self-Sampling Assumption (SSA) posits that a given observer should reason as if they are a randomly selected member from the actual set of all observers in the relevant reference class, such as all humans who will ever exist.[21] This approach treats the observer's position within the sequence of births as uniformly distributed across the total number, conditional on the total N being fixed.[20] Applied to the Doomsday Argument, SSA implies that discovering one's birth rank n—estimated at approximately 117 billion for a typical human born around 2023—provides evidence favoring smaller values of N, as early ranks are more probable under small-N hypotheses.[21] Formally, SSA yields a likelihood function where the probability of observing birth rank n given total humans N is P(n \mid N) = \frac{1}{N} for n \leq N and 0 otherwise, reflecting uniform random sampling from the realized population.[20] To compute the posterior P(N \mid n), a prior on N is required; a scale-invariant prior P(N) \propto \frac{1}{N} (the Jeffreys prior for positive scale parameters) is often employed to reflect ignorance about the order of magnitude of N.[21] The posterior then becomes P(N \mid n) = \frac{n}{N^2} for N \geq n, derived via Bayes' theorem:
P(N \mid n) = \frac{P(n \mid N) P(N)}{P(n)} \propto \frac{1}{N} \cdot \frac{1}{N} = \frac{1}{N^2},
normalized over N \geq n where the integral \int_n^\infty \frac{n}{N^2} \, dN = 1.[20] The cumulative distribution under this posterior is P(N \leq k n \mid n) = 1 - \frac{1}{k} for k \geq 1, obtained by integrating:
P(N \leq x \mid n) = \int_n^x \frac{n}{N^2} \, dN = n \left[ -\frac{1}{N} \right]_n^x = 1 - \frac{n}{x}.
Setting x = k n yields the result.[20] Thus, the posterior median is N \approx 2n (where P(N \leq 2n \mid n) = 0.5), and there is a 95% probability that N < 20n.[21] For n ≈ 10^{11}, this predicts a median total human population of roughly 2 × 10^{11}, implying a substantial chance of extinction within centuries and a roughly even chance within about a millennium, assuming birth rates of order 10^8 per year.[20] In variants incorporating successive sampling—such as the Strong Self-Sampling Assumption (SSSA), which applies the sampling to observer-moments rather than to whole observers—the same reasoning reinforces doom-leaning posteriors by modeling births as a sequential process, in which early positions in a growing population still favor total durations not vastly exceeding the time already elapsed.[22] This contrasts with priors expecting indefinitely long civilizational survival, as the observed early rank updates strongly against such expansive scenarios under random sampling from the realized total.[21]
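Because the cumulative distribution has the closed form P(N \leq x \mid n) = 1 - n/x, its quantiles follow by inversion. The sketch below uses illustrative function and variable names to read off the median and the 95% bound.
```python
# Quantile function implied by the posterior CDF P(N <= x | n) = 1 - n/x:
# solving 1 - n/x = q for x gives x = n / (1 - q).
def posterior_quantile(n, q):
    """q-quantile of the SSA posterior P(N | n) = n / N^2 for N >= n."""
    return n / (1.0 - q)

n = 1.0e11
print(posterior_quantile(n, 0.50))   # median: 2e11, i.e. N ≈ 2n
print(posterior_quantile(n, 0.95))   # 95% bound: 2e12, i.e. N < 20n with probability 0.95
```
The same inversion yields any other credibility level, for example an 80% bound at N = 5n.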