
Metaculus

Metaculus is an online platform for crowd-sourced probabilistic forecasting and prediction aggregation, enabling users to anticipate outcomes of future events across domains such as science, technology, geopolitics, and global risks. Founded in 2015 by physicist Anthony Aguirre and astronomer Greg Laughlin, it functions as a public benefit corporation dedicated to advancing epistemic infrastructure for modeling and navigating complex challenges. The platform operates by posing precisely defined, resolvable questions and soliciting probability estimates from a global community of forecasters, whose inputs are combined via recency-weighted medians to yield community predictions. Users compete on leaderboards tracking baseline accuracy against naive benchmarks and relative performance against peers, with tournaments offering cash prizes to incentivize skill development. Metaculus also provides private forecasting services for organizations and hosts specialized tracks, amassing over 2.9 million predictions and resolving more than 9,000 questions to date. Community forecasts on Metaculus have demonstrated calibration superior to naive baselines and, in select domains, to individual domain experts, as measured by proper scoring rules such as the Brier score, on which platform predictions achieved 0.107 for questions resolved through 2021. This track record underscores its utility in eliciting collective intelligence, though the platform has faced scrutiny over question resolution criteria in politically sensitive cases, such as early COVID-19 origin debates.

Platform Overview

Core Purpose and Operations

Metaculus operates as an online crowd-sourced forecasting platform and aggregation engine designed to enhance collective reasoning and coordination on matters of global significance by soliciting and synthesizing probabilistic predictions about future events. Users submit forecasts on time-bound questions, encompassing binary outcomes—such as whether a specific event will occur by a given date—and quantitative estimates, like the timing or magnitude of developments in technology or policy. The platform's core mechanism aggregates these individual predictions into community-level estimates: the headline Community Prediction is a recency-weighted median of forecasts, while a separate Metaculus Prediction weights contributors by their empirical track records, with both aggregates intended to surpass the accuracy of isolated expert judgments or unadjusted crowd opinions. Launched in 2015, Metaculus emphasizes reputation-based participation, where forecasters build scored track records against resolved outcomes, fostering a system that rewards accuracy and penalizes overconfidence. This approach counters common cognitive biases, such as overreliance on intuition or groupthink, by prioritizing data-driven calibration derived from historical resolution data across thousands of questions. The platform concentrates on high-stakes topics, including advancements in artificial intelligence, geopolitical tensions, and global catastrophic risks, where precise foresight can inform decision-making amid uncertainty. By maintaining a repository of over 21,000 questions, with more than 9,000 resolved as of recent counts, Metaculus serves as epistemic infrastructure for modeling complex challenges, enabling users and external observers to track prediction accuracy and refine probabilistic assessments through iterative community input. This operational model emphasizes empirical validation: aggregated forecasts are evaluated against real-world resolutions to iteratively improve forecasting fidelity, in contrast to the narrative-driven analyses prevalent in traditional media or academic consensus.

Question Types and Categories

Metaculus hosts a range of question formats designed to elicit probabilistic forecasts on resolvable future events. Binary questions require yes-or-no predictions based on predefined criteria—for example, "Will the World Health Organization declare a pandemic emergency due to avian influenza by December 31, 2030?"—and resolve affirmatively if official declarations match conditions verifiable through public records. Quantitative questions, in contrast, solicit numerical estimates within bounded ranges, such as forecasting the percentage of new U.S. light-duty vehicle production that will be electric by 2027, resolved using data from authoritative sources like the U.S. Department of Energy. Multiple-choice questions, introduced in December 2023, allow selection among discrete, mutually exclusive options, useful for scenarios with several plausible paths, such as outcomes of geopolitical contests or technological milestones. Date-specific variants, often treated as quantitative, estimate timelines for events like the announcement of weakly general artificial intelligence. Questions span thematic categories emphasizing high-stakes, empirically trackable domains, including artificial intelligence milestones (e.g., achievement of transformative AI capabilities), biosecurity risks (e.g., engineered pandemics or pathogen outbreaks), economic indicators (e.g., global GDP growth rates or stock index returns), and political events like election results or leadership transitions. Core focus areas encompass science and technology, effective altruism priorities, health threats, and geopolitics, with resolutions anchored to objective public datasets such as government reports, scientific publications, or international organization announcements to minimize ambiguity. For instance, AI questions might forecast compute thresholds for model training, while economic ones track metrics like S&P 500 annual returns using verifiable financial data. Any logged-in user may propose questions, though many originate from Metaculus staff, and all submissions undergo review by volunteer community moderators to enforce standards for clarity and falsifiability. Guidelines prioritize precise resolution criteria—explicit definitions of terms, reliable data sources, and avoidance of subjective interpretations—to ensure questions test predictions against observable reality rather than interpretive disputes. This process filters out vague or ideologically skewed queries, favoring those resolvable via empirical evidence over opinion-based assessment.
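
To make the format distinctions concrete, the sketch below models the question types as a small Python type hierarchy. It is illustrative only—the class and field names are not Metaculus's actual data schema—but it captures the structural differences between the formats described above.

```python
# Illustrative model of Metaculus question formats (not the real schema).
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Question:
    title: str
    resolution_criteria: str   # predefined, objective source of truth
    close_date: date           # forecasts lock at this point


@dataclass
class BinaryQuestion(Question):
    """Resolves Yes/No; forecasts are probabilities in (0, 1)."""


@dataclass
class QuantitativeQuestion(Question):
    """Resolves to a number in a bounded range; forecasts are distributions."""
    lower_bound: float = 0.0
    upper_bound: float = 1.0


@dataclass
class MultipleChoiceQuestion(Question):
    """Resolves to exactly one of several mutually exclusive options."""
    options: list[str] = field(default_factory=list)


who_question = BinaryQuestion(
    title="Will the WHO declare a pandemic emergency due to avian "
          "influenza by December 31, 2030?",
    resolution_criteria="Official WHO declaration matching stated conditions",
    close_date=date(2030, 12, 31),
)
```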

Technical and Forecasting Mechanics

Prediction Aggregation Methods

Metaculus computes its Community Prediction as a recency-weighted median of the latest predictions submitted by individual forecasters for each question, excluding bots by default. This method takes only the most recent forecast per user, assigns weights increasing with recency—such that the oldest prediction receives weight 1 and newer ones progressively higher up to n for the newest among n predictions—and then derives the median under these weights. The approach requires roughly half the forecasters to update their predictions to substantially shift the aggregate, balancing responsiveness to evolving information against resistance to transient outliers or low-effort inputs. By emphasizing recency, the formula incentivizes dynamic updates reflecting reasoned revisions over static initial guesses, without directly weighting by forecaster reputation or historical accuracy. This aggregation draws on empirical findings from forecasting research demonstrating that crowd medians often outperform individual predictions by harnessing collective information while mitigating extremes, achieving calibration comparable to select superforecasters despite lacking financial incentives. Studies of Metaculus data show logarithmic gains in accuracy as the number of contributors grows, with the Community Prediction typically surpassing 90% of participating forecasters in log-score performance. The median's robustness helps counter potential herding, as it does not amplify popular but erroneous views, though analyses indicate minimal overconfidence in resolved outcomes relative to base rates. The formula's transparency enables external verification, with Metaculus publishing resolved question track records—including Brier scores averaging 0.126 across thousands of outcomes—for testing calibration and bias. Researchers can replicate aggregates via public data exports or the platform's aggregation explorer, facilitating scrutiny of effects like recency bias or participant selection on predictive power. This openness supports causal assessments of aggregation efficacy, confirming the method's edge over unweighted averages or individual baselines in diverse domains from geopolitics to science.
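
As a concrete illustration, the following sketch implements a recency-weighted median under the linear weighting described above (oldest latest-forecast gets weight 1, newest gets weight n); Metaculus's production weighting function may differ in detail.

```python
def community_prediction(latest_forecasts: list[float]) -> float:
    """Recency-weighted median: take each forecaster's latest probability,
    ordered oldest-first, weight the i-th by i+1 (oldest 1, newest n),
    and return the weighted median."""
    n = len(latest_forecasts)
    weights = [i + 1 for i in range(n)]             # linear recency weights
    pairs = sorted(zip(latest_forecasts, weights))  # sort by forecast value
    half = sum(weights) / 2
    cumulative = 0.0
    for value, w in pairs:
        cumulative += w
        if cumulative >= half:   # first value crossing half the total weight
            return value
    return pairs[-1][0]


# Oldest-to-newest latest forecasts from five users:
print(community_prediction([0.30, 0.40, 0.55, 0.60, 0.65]))  # -> 0.60
```

Note how the newest forecasts carry the most weight, so the aggregate shifts only when a substantial share of recent forecasters move—matching the text's observation that roughly half must update to move the median.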

Scoring and Resolution Processes

Metaculus resolves questions according to criteria specified in each question's fine print, which typically identifies objective sources such as official government reports, scientific publications, or verifiable data feeds to determine the outcome. This approach prioritizes disinterested factuality by relying on predefined, low-ambiguity references over interpretive media accounts, with resolution performed by platform administrators after the question's close date. For instance, geopolitical or economic questions often cite entities like the United Nations or central banks, while scientific queries reference peer-reviewed journals or institutional announcements. In cases of ambiguity—such as conflicting reports or discontinued data sources—questions may be marked as unresolved or ambiguous, voiding users' scores on the question without penalty, though this occurs infrequently to maintain scoring integrity. Administrators handle adjudication, occasionally consulting community input via comments or polls for edge cases, but final decisions rest with staff to ensure consistency. Post-resolution scoring employs a time-averaged logarithmic scoring rule (log score), a proper scoring rule that maximizes a forecaster's expected score only when reported probabilities match their true beliefs. Rather than evaluating only the forecast standing at resolution, Metaculus averages the log score over the question's open period, so that each forecast counts in proportion to how long it stood; entering early and updating promptly on new evidence therefore earns more credit than last-minute adjustments. This formulation penalizes overconfidence and under-updating, as deviations from the outcome accrue penalties logarithmically, promoting the calibration by which aggregate community predictions often achieve Brier scores near 0.1 on resolved questions, outperforming naive baselines.
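
A minimal sketch of time-averaged log scoring as just described: each forecast contributes its log score in proportion to how long it stood before the next update or the question's close. The function and the 0-to-1 timeline convention are illustrative, not Metaculus's exact implementation.

```python
import math


def time_averaged_log_score(updates: list[tuple[float, float]],
                            resolution: bool,
                            close_time: float = 1.0) -> float:
    """updates: sorted (time, probability) pairs with times in [0, close_time).
    Each forecast is weighted by how long it stood; time before the first
    forecast contributes nothing, so late entrants earn less coverage."""
    total = 0.0
    for i, (t, p) in enumerate(updates):
        t_next = updates[i + 1][0] if i + 1 < len(updates) else close_time
        held = t_next - t                        # duration this forecast stood
        prob_outcome = p if resolution else 1.0 - p
        total += held * math.log(prob_outcome)   # log score for that span
    return total / close_time                    # average over the open period


# Forecast 0.6 from t=0, updated to 0.9 at t=0.5; question resolves Yes:
print(time_averaged_log_score([(0.0, 0.6), (0.5, 0.9)], resolution=True))
# -> 0.5*ln(0.6) + 0.5*ln(0.9) ≈ -0.308
```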

User Engagement and Incentives

Community Participation Features

Metaculus enables user involvement through interactive tools designed to enhance forecasting skills and community interaction. Users access personal dashboards on their profiles, which display calibration curves plotting predicted probabilities against actual outcomes, allowing forecasters to assess and refine their accuracy across resolved questions. These dashboards also track relative scores and participation metrics, supporting self-improvement from novice to advanced levels without requiring specialized expertise. Discussion threads facilitate evidence-based debates, with each question featuring comment sections where participants share research, challenge assumptions, and collaborate on rationales for predictions. Dedicated forums, such as the Metaculus Hangout for casual exchanges and meta-discussion threads for platform feedback, further promote analytical discourse among users. Tournaments, including quarterly cups, provide structured competitive environments where forecasters engage on themed question sets, receiving rapid resolution feedback to hone skills in real-time. The platform differentiates participation tiers, with the public community open to all for basic forecasting, while Pro Forecasters—selected from the top 2% of users based on historical scoring—gain elevated access to private instances, organizational engagements, and influence on high-impact predictions. This merit-based system prioritizes demonstrated competence over broad access, enabling dedicated analysts to contribute to policy-relevant forecasts. Metaculus supports global engagement via region-specific questions on topics like geopolitical tensions, drawing participants from diverse locales to aggregate insights beyond Western perspectives.
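
The calibration curves shown on dashboards can be reproduced from any set of resolved binary forecasts by binning predicted probabilities and comparing each bin's mean prediction with its observed resolution rate; a minimal sketch:

```python
def calibration_curve(forecasts: list[float], outcomes: list[bool],
                      n_bins: int = 10) -> list[tuple[float, float, int]]:
    """Bin forecasts into equal-width probability bins and return, per
    non-empty bin, (mean predicted probability, observed frequency, count).
    Perfect calibration puts every point on the diagonal y = x."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(forecasts, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)   # clamp p == 1.0 into last bin
        bins[idx].append((p, y))
    curve = []
    for bucket in bins:
        if bucket:
            mean_p = sum(p for p, _ in bucket) / len(bucket)
            freq = sum(1 for _, y in bucket if y) / len(bucket)
            curve.append((mean_p, freq, len(bucket)))
    return curve


points = calibration_curve([0.1, 0.15, 0.8, 0.9, 0.85],
                           [False, False, True, True, False])
for mean_p, freq, n in points:
    print(f"predicted {mean_p:.2f} -> observed {freq:.2f} (n={n})")
```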

Reward Systems and Leaderboards

Metaculus employs a logarithmic scoring rule to allocate points for individual predictions, defined as the natural logarithm of the predicted probability assigned to the actual outcome, ln(P(outcome)), which incentivizes honest reporting of beliefs and penalizes overconfidence or underconfidence. This proper scoring rule forms the basis for both absolute (Baseline) scores, comparing predictions to a flat prior, and relative (Peer) scores, benchmarking against other forecasters' performance. Points are time-averaged across a question's lifetime to encourage ongoing updates, with log scores proving particularly sensitive to calibration on low-probability tail events. Leaderboards segment rankings by performance categories, including Baseline Accuracy (summing raw scores across questions to reward broad participation) and Peer Accuracy (weighted averages of relative scores, requiring at least 30% coverage of a question's duration to qualify). These segmentations highlight empirical top performers by distinguishing volume-driven contributions from skill-relative outperformance, while imputing zeros for low-activity periods to normalize comparisons. Tournament leaderboards further apply log-based scoring for competitive subsets, using natural logarithms scaled for comparability. Bronze, silver, and gold medals are granted quarterly or annually based on leaderboard percentiles—gold to the top 1% of ranked users, silver to the 1–2% range, and bronze to the 2–5% range—prioritizing sustained calibration and relative accuracy over sheer volume. These serve as non-monetary reputation signals, displayed on user profiles to foster community recognition without financial incentives. Empirical analysis of the system shows that medal-eligible scores correlate positively with forecast quality, as proper log scoring inherently rewards probabilistic accuracy and the peer-relative metric isolates skill from luck via aggregation. Coverage-weighting adjustments, implemented in 2024, mitigate biases against late entrants but introduce trade-offs, such as imputing low scores for users with sparse activity (e.g., fewer than 40 questions), potentially discouraging casual engagement while emphasizing dedicated forecasters.
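
Hedged sketches of the two score families follow, based on Metaculus's published conventions (a perfect binary forecast scores +100 against the flat prior); the exact scaling constants should be checked against the current Scores FAQ.

```python
import math


def baseline_score(p_outcome: float) -> float:
    """Binary Baseline score: log score measured against a flat 50% prior,
    scaled so a perfect forecast scores +100 and a 50% forecast scores 0."""
    return 100 * (math.log(p_outcome) - math.log(0.5)) / math.log(2)


def peer_score(p_outcome: float, others: list[float]) -> float:
    """Peer score: the forecaster's log score minus the average log score of
    the other forecasters on the question (scaling factor assumed here)."""
    own = math.log(p_outcome)
    mean_others = sum(math.log(q) for q in others) / len(others)
    return 100 * (own - mean_others)


print(baseline_score(0.8))          # ~67.8: better than the flat prior
print(peer_score(0.8, [0.6, 0.5]))  # positive: beat the crowd on this question
```

Both scores use the probability assigned to the realized outcome, so for a question resolving No the forecaster's probability would be 1 − p before applying these formulas.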

Historical Development

Founding and Initial Launch

Metaculus originated as a concept in 2014 within scientific circles seeking to harness collective intelligence for forecasting uncertain events, particularly in science and technology domains. The platform was founded by physicist Anthony Aguirre, astronomer Greg Laughlin, and data scientist Max Wainwright, drawing on the effective altruism and rationality communities' emphasis on probabilistic reasoning and decision-making under uncertainty. Unlike prediction markets that involve financial stakes—and thus potential risks from speculation or manipulation—Metaculus was designed as a reputation-based aggregation engine without monetary incentives, prioritizing empirical calibration over betting dynamics. Initial development focused on curating questions about scientific breakthroughs and technological timelines, with early experiments testing whether aggregated crowd predictions could outperform individual experts or naive baselines. The site launched quietly in 2015, initially limiting participation to invited users so the team could refine aggregation algorithms and build a core of skilled forecasters from rationalist-adjacent networks. By mid-2017, broader prediction aggregation features were introduced, enabling community medians to form on active questions. Early milestones included the resolution of meta-questions in late 2018 tracking quarterly outcomes, which showed the platform's baseline calibration exceeding simple statistical priors such as base rates. Further resolutions in early 2019 confirmed this edge, with community forecasts on binary events achieving log scores indicative of better probabilistic accuracy than unweighted averages. These outcomes supported the founders' hypothesis that aggregating and weighting forecaster track records could yield reliable forecasts without relying exclusively on credentialed experts.

Expansion and Key Milestones

Metaculus experienced accelerated growth during the 2020 COVID-19 pandemic, as forecasters turned to the platform for probabilistic assessments of outbreak trajectories and policy responses. Community median predictions of U.S. COVID-19 deaths by mid-2020 came within 12.2% of official tallies, demonstrating the efficacy of its aggregation in uncertain scenarios. This period marked a shift from niche academic use to broader engagement, with question volumes surging on health and geopolitical topics amid global uncertainty. By 2021, Metaculus introduced API access, enabling researchers and external tools to query and analyze its database of aggregate forecasts across thousands of questions. Concurrently, the platform saw rising prominence in artificial intelligence forecasting, with community predictions on timelines for general AI systems drawing sustained participation and debate. Questions probing AGI development dates, such as when the first general AI would be announced, amassed hundreds of predictions, reflecting growing interest in long-term technological risks and milestones. In 2022, Metaculus crossed the milestone of 1,000,000 total predictions submitted across over 7,000 questions, underscoring its maturation into a substantial forecasting repository. The year also brought partnerships to bolster methodological rigor, including an October collaboration with Good Judgment Inc. that paired Metaculus's crowd-sourced predictions with superforecaster inputs to produce hybrid forecasts. This alliance aimed to refine accuracy by combining diverse expertise pools, marking an evolution toward more structured, credibility-enhanced operations.
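
The API access introduced in 2021 can be exercised with an ordinary HTTP client. The sketch below assumes the public `api2/questions/` listing endpoint, and the query parameters and response field names are illustrative; consult the current API documentation for the authoritative schema.

```python
import requests

# Illustrative query against the public Metaculus API; path, parameters,
# and field names are assumptions based on the documented api2 interface.
BASE = "https://www.metaculus.com/api2/questions/"

resp = requests.get(BASE, params={"status": "resolved", "limit": 20}, timeout=30)
resp.raise_for_status()

for q in resp.json().get("results", []):
    # Inspect the payload for the real schema before relying on these keys.
    print(q.get("id"), q.get("title"))
```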

Recent Advancements (2023–2025)

In 2024, Metaculus launched the AI Forecasting Benchmark Series, a set of quarterly tournaments designed to evaluate AI models' forecasting performance against professional human forecasters on real-world questions, with a total prize pool of $120,000 across four events. The inaugural tournament in Q3 2024 featured over 300 binary questions resolving by early October, enabling direct comparisons that highlighted gaps between AI capabilities and human expertise, with AI entrants achieving a head-to-head score of -11.3 against the pros. Subsequent quarters showed incremental AI improvement, with top bots reaching -8.9 in Q4 2024, though still trailing the human benchmarks—a realistic tracking of model progress amid hype. The series continued into 2025, with Q1 results showing pro forecasters outperforming AI bots on diverse topics, and on July 22, 2025, Metaculus announced an expanded year-long renewal backed by $175,000 in prizes to further probe AI limitations in probabilistic reasoning. Concurrently, Metaculus introduced scoring refinements, including updates to time-averaged metrics documented in July 2024, which prioritize ongoing prediction adjustments to reward timely responses to new evidence and mitigate static herding tendencies observed in aggregated forecasts. By early 2025, the platform experienced record growth, exemplified by the Astral Codex Ten 2025 series attracting over 3,000 participants as its fastest-growing forecast collection to date, alongside initiatives such as a Spanish-language forecasting competition launched in mid-February 2025 to broaden global engagement. This surge coincided with high-profile contests such as the ACX 2025 Prediction Contest, which expanded its prize pool to $10,000—up from $2,500 the prior year—and focused on 2025 events to sharpen community predictions on emerging challenges. These developments underscored Metaculus's adaptation to heightened demand for calibrated foresight amid rapid technological and geopolitical shifts.

Empirical Performance and Validation

Track Record on Resolved Forecasts

Metaculus community predictions have exhibited strong calibration on thousands of resolved forecasts, with aggregate Brier scores averaging around 0.10 to 0.20 across domains, where lower scores indicate higher accuracy relative to probabilistic benchmarks. For questions resolved in 2021, the platform's predictions achieved a Brier score of 0.107, reflecting effective aggregation of forecaster inputs into outcomes that closely matched empirical resolutions. On AI-related questions resolved by mid-2021, scores were somewhat higher at 0.2027, with an overconfidence adjustment of -0.57%, suggesting mild underconfidence: predictions were often conservatively spread, which slightly widened scores while still discriminating reliably between likely and unlikely events. Specific resolutions underscore this track record. On the 2020 U.S. presidential election, Metaculus assigned low probability to Donald Trump being reelected by November 10, 2020 (the question resolved negatively on that date), matching the final electoral outcome without the inflated certainty that poll-focused media coverage implied for the challenger's margin. Early AI milestone questions, resolved using 2021 benchmark data, drew critiques of underestimation: forecasters tended to predict slower progress in capabilities like language model performance, with resolved outcomes exceeding community medians in several cases, prompting retrospective analyses of timelines as overly cautious. These instances illustrate how Metaculus resolutions provide hindsight validation against overconfident external narratives, grounded in probabilistic rather than deterministic expectations. Longitudinal trends indicate progressive improvement in accuracy as forecast volume grows, with Brier scores declining incrementally—an estimated 0.012-point reduction per doubling of forecasters—due to enhanced signal from diverse inputs and refined aggregation. Calibration analyses, plotting community-predicted probabilities against actual binary outcomes or quantiles against resolved dates, show outcomes clustering near the diagonal of perfect calibration, particularly for high-volume questions, evidencing cumulative refinement over time without systematic deviation. This empirical grounding supports Metaculus's claims of foresight on resolved events like elections and technological thresholds, derived from verifiable resolution data rather than anecdotal success.
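
The Brier figures cited throughout this section are simple to reproduce from resolved binary questions: the score is the mean squared error between the community probability and the 0/1 outcome, so always answering 50% yields 0.25 and perfect foresight yields 0.

```python
def brier_score(forecasts: list[float], outcomes: list[bool]) -> float:
    """Mean squared error between predicted probabilities and binary
    outcomes; 0.0 is perfect, 0.25 matches always guessing 50%."""
    return sum((p - float(y)) ** 2
               for p, y in zip(forecasts, outcomes)) / len(forecasts)


# A well-calibrated set of community forecasts lands well below 0.25:
print(brier_score([0.9, 0.2, 0.7, 0.1], [True, False, True, False]))  # 0.0375
```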

Comparative Accuracy Assessments

Metaculus community predictions have shown advantages over financial prediction markets like PredictIt for non-tradable events, such as long-term artificial intelligence risks, where market platforms face regulatory constraints on event types and trading volumes, leading to sparse liquidity and incomplete information aggregation. PredictIt performs adequately for short-term, politically liquid events like U.S. elections but underperforms polling aggregates and crowd forecasts on broader resolutions, with analyses of the 2020 and 2022 cycles indicating higher error rates under Brier scoring. However, Metaculus exhibits vulnerabilities in high-volatility domains; a March 2023 analysis of resolved forecasts found weaker Brier scores for certain AI subsets (e.g., model capability milestones) than the platform's overall average of approximately 0.126, linked to systematic forecaster optimism and slower resolution of ambiguous technical outcomes. This contrasts with stronger performance across diversified question sets, where community aggregation mitigates individual biases more effectively than in specialized, hype-driven fields. Relative to superforecasters—elite individuals selected through programs like the Good Judgment Project—Metaculus crowd predictions yield aggregate Brier scores around 0.12–0.13, trailing the lowest superforecaster marks of 0.023–0.081 on calibrated binary events but surpassing them in scalability for voluminous, ongoing forecasts. Top Metaculus forecasters achieve standardized Brier scores (around 0.36–0.37) comparable to initial Good Judgment superforecaster outputs, per cross-platform evaluations, though superforecasters maintain edges in domain-specific depth without relying on crowd volume. Third-party validations highlight Metaculus's strengths in update dynamics and calibration granularity; Rethink Priorities' examinations of resolved questions find faster adjustment to new evidence across time horizons, with aggregation yielding lower mean squared errors than individual expert baselines, particularly for multi-year predictions where iterative community input refines probabilities. Against play-money markets like Manifold, Metaculus records significantly better mean Brier scores (0.084 versus 0.107) on paired questions, underscoring the value of its recency-weighted aggregation over play-money trading incentives.
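
Paired comparisons like the Metaculus-versus-Manifold result rest on scoring both platforms on the same questions and testing the per-question differences. Below is a sketch using a Wilcoxon signed-rank test—one reasonable choice of paired test; the score data here are illustrative, not the study's actual numbers.

```python
from scipy.stats import wilcoxon

# Per-question Brier scores on the same set of questions, paired by
# question (values illustrative only).
metaculus = [0.05, 0.12, 0.08, 0.20, 0.03, 0.15, 0.07, 0.10]
manifold  = [0.09, 0.14, 0.11, 0.18, 0.06, 0.19, 0.10, 0.13]

# Paired test on per-question score differences; lower Brier is better.
stat, p_value = wilcoxon(metaculus, manifold)
print(f"W={stat}, p={p_value:.3f}")
```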

Societal Impact and Applications

Applications in Policy and Research

Metaculus community forecasts have informed research on long-term global trajectories by aggregating probabilistic predictions on demographic shifts, technological adoption, and economic indicators. The "Forecasting Our World in Data" tournament, initiated on October 12, 2022, with a $20,000 prize pool, directed forecasters to resolve uncertainties in trends such as GDP per capita growth and energy transitions through 2100, yielding empirical benchmarks for scenario planning in academic and think-tank analyses. In policy evaluation, Metaculus has facilitated conditional forecasting on intervention outcomes to quantify causal pathways beyond anecdotal evidence. A January 2023 tournament targeted climate policy impacts, prompting predictions on metrics like emission reductions under specific regulatory scenarios, which researchers used to assess tangible effects and refine advocacy strategies. Similarly, policy challenges launched in October 2025 have generated aggregates on security and geopolitical risks, providing quantitative inputs for institutional deliberations on resource allocation. Ahead of the 2022 Russian invasion of Ukraine, Metaculus community medians assigned high probability to the invasion roughly two weeks before its onset, and forecasters estimated the risk of full-scale nuclear war beginning that year at 0.35%, furnishing policymakers with calibrated probabilities on territorial control and aid efficacy as counters to media-driven narratives. Within effective altruism frameworks, Metaculus predictions on existential threats, including AI development timelines and biosecurity vulnerabilities, have been dissected in community analyses to rank intervention priorities, with median outcomes referenced in strategic evaluations of governance needs over narrative appeals.

Notable Collaborations and Tournaments

Metaculus has organized specialized tournaments to concentrate forecasting efforts on high-impact domains, often with substantial prize pools to incentivize participation and precision. The Forecasting AI Progress tournament, supported by Open Philanthropy with a $100,000 contract, focused on predicting advances in machine learning capabilities and benchmarks. Launched as a comprehensive effort to track AI timelines, it featured continuous and binary questions on metrics like model performance, yielding insights into forecaster calibration on technical progress even though its AI questions scored worse than Metaculus's platform-wide average. Similarly, the ACX 2025 Prediction Contest, in partnership with blogger Scott Alexander, offers a $10,000 prize pool for predictions on 2025 events spanning technology, politics, and culture, closing on December 31, 2025, to sharpen skills amid real-time developments. Key collaborations extend Metaculus's aggregation methods to external expertise. With Vox's Future Perfect team, Metaculus hosted forecasts for 2025 on political, economic, and technological questions, enabling public participation alongside the team's predictions published January 1, 2025, and incorporating a $2,500 prize pool to reward accurate contributions. This builds on prior annual series dating to 2020, emphasizing crowd wisdom to refine media-driven outlooks. In parallel, Metaculus partnered with Good Judgment Inc. starting in 2022 on initiatives like the Our World in Data project, where superforecasters from Good Judgment and Metaculus Pro Forecasters independently predicted the same questions, highlighting hybrid aggregation's potential to blend elite individual insights with platform crowds. These efforts have demonstrated enhanced calibration on niche topics through targeted incentives, with prize structures favoring depth in specialized areas over general breadth; for instance, AI Progress analyses found forecasters exceeding chance levels on complex benchmarks, informing philanthropic timelines despite challenges in volatile domains. Such tournaments and partnerships underscore Metaculus's role in structuring competitions to elicit granular, evidence-based probabilities, often outperforming isolated expert judgments via weighted community aggregation.

Criticisms and Limitations

Incentive Structures and Behavioral Biases

Metaculus's incentive system relies on non-monetary points derived from logarithmic scoring rules, applied to individual predictions and aggregated across questions via time-weighted averages to generate leaderboard rankings. This framework encourages participation through status rewards like medals and titles, but external analyses indicate it can foster herding, where forecasters cluster around the emerging community median to hedge against scoring penalties for outlier predictions, thereby diminishing the diversity of inputs that underpins crowd wisdom. A 2021 Effective Altruism Forum post documented platform data showing correlations between high-volume forecasting and convergence to medians, attributing this to the points system's emphasis on relative performance, which penalizes deviation more heavily than absolute inaccuracy in stable environments. Time-averaged scores, refined in updates through 2024, aim to incentivize ongoing engagement by rewarding sustained prediction maintenance over static snapshots, with daily relative log scores averaged to capture temporal dynamics. However, Metaculus's own 2023–2024 scoring evaluations reveal trade-offs: these metrics promote frequent minor tweaks to track shifting medians, potentially favoring low-effort adjustments that preserve relative standing over infrequent, evidence-driven overhauls that risk temporary point losses, since the averaging dilutes the impact of bold shifts unless they are well timed. To counteract quantity-over-quality distortions—points accumulation scales with prediction volume, incentivizing superficial coverage of many questions—Metaculus introduced volume-normalized metrics in 2023, notably Peer Accuracy medals based on coverage-weighted averages rather than sums, which reward calibration on fewer, deeply engaged questions rather than sheer output. These adjustments, evaluated in 2024 tournament redesigns, seek to realign rewards toward probabilistic rigor and independence, with preliminary leaderboard data showing reduced skew toward high-volume users while maintaining overall engagement.

Prediction Biases and Methodological Debates

A 2021 analysis by Rethink Priorities of 259 resolved AI-related questions on Metaculus found weak evidence of systematic optimism bias, with date-set questions tending to resolve earlier than the community median predicted and binary progress indicators showing fewer positive outcomes than forecasted (11 actual versus a median prediction of 17.48). This suggests forecasters may have slightly overestimated near-term AI advancements, though small sample sizes for resolved questions (e.g., only 7 of 41 date questions) and potential selection effects limit the strength of the conclusion. In contrast, Metaculus's 2023 internal review of over 150 resolved AI forecasts reported no clear systematic biases, with community predictions demonstrating proper calibration (approximately 50% resolution within 50% confidence intervals) and outperforming baselines like random guessing. Debates persist regarding extreme accelerationist views, such as Ray Kurzweil's predictions of rapid exponential progress leading to human-brain emulation by the mid-2020s; Metaculus community forecasts for related milestones, like whole brain emulation, place the median at 2071, effectively rejecting Kurzweil-style timelines in favor of more gradual advancement akin to economist Robert Gordon's skepticism of sustained hyper-growth. This stance aligns with resolved data debunking unsubstantiated accelerationism, as evidenced by the slower-than-predicted progress in early AI benchmarks, though pro forecasters' aggregates emphasize empirical trends over theoretical exponentials. Forecaster demographics contribute to methodological discussions, with Metaculus drawing heavily from effective altruism (EA) and rationalist communities, potentially skewing predictions toward tech-centric priorities that undervalue geopolitical realpolitik in favor of long-term utopian or risk-focused scenarios. Critics argue this composition outperforms mainstream media's tendency toward unsubstantiated alarmism on AI threats, as aggregate forecasts have shown better calibration on resolved events, but it may introduce overconfidence in abstract technological trajectories absent broader societal constraints. Resolution methodologies face scrutiny for source reliability, particularly in ambiguous domains where interpretive judgments could reflect institutional biases; advocates call for stricter, unambiguous criteria to minimize subjective selection of resolving evidence, as vague guidelines risk inconsistent outcomes or undue influence from potentially slanted data sources. Metaculus guidelines emphasize precise criteria to mitigate such issues, yet debates highlight the need for enhanced transparency in sourcing to ensure resolutions align with objective verification over narrative-driven interpretations.

Broader Critiques of Forecasting Platforms

Crowd prediction platforms, including non-financial ones like Metaculus, have been critiqued for lending a veneer of scientific precision to what remain educated guesses rather than robust causal models, particularly in domains prone to high uncertainty. Nassim Nicholas Taleb argues that such probabilistic forecasting often fails to incorporate genuine skin in the game, leading to overconfidence in predictions that ignore nonlinear dynamics and fat-tailed distributions. This approach emphasizes calibration on past resolutions but overlooks the epistemic limits of aggregating judgments without underlying mechanistic understanding, as evidenced by the fragility of crowd estimates in volatile scenarios where collective errors amplify rather than average out. Probabilistic framing on these platforms can foster overreliance on numerical probabilities for events inherently resistant to prediction, such as black swans—rare, high-impact occurrences that defy historical extrapolation. Taleb contends that standard forecasting techniques, including those used in crowd aggregation, break down out-of-sample due to their inability to model extreme tail risks, as seen in financial models like Value at Risk that underestimate catastrophic losses. Empirical tests of crowd wisdom reveal consistent underperformance in predicting irregular outcomes, such as economic indicators, where aggregated forecasts lag behind even simple benchmarks due to shared informational blind spots. Despite claims of improved accuracy through superforecasters or aggregation algorithms, these systems remain susceptible to systemic failures when causal realities—unobserved variables or structural shifts—diverge from probabilistic assumptions. In contrast to financial prediction markets, non-monetary platforms like Metaculus sidestep risks of manipulation through large bets or liquidity issues but forfeit the informational value of price signals derived from participants' willingness to risk capital, which better proxies belief strength and incentivizes error correction. Taleb highlights that absent financial exposure, forecasters lack accountability for errors, potentially inflating apparent calibration without true predictive power. While this model promotes broader participation unhindered by capital barriers, it may underweight contrarian views requiring substantial commitment, as monetary stakes reveal divergences in private information that point systems obscure. Forecasting communities underpinning platforms like Metaculus, often rooted in rationalist circles, risk ideological echo chambers that privilege contrarian skepticism—aligning with realism on topics like overblown climate projections or inequality dynamics—over mainstream academic consensus, potentially skewing aggregates toward subcultural priors. This homogeneity arises from self-selection among users favoring probabilistic tools and effective altruism-adjacent worldviews, fostering groupthink where dissenting progressive narratives face underrepresentation despite broader societal debates. Such dynamics mirror general echo chamber effects, where confirmation biases reinforce internal rationales at the expense of diverse causal inputs, though empirical validation of directional skew remains limited to community self-assessments.
