
Collective wisdom


Collective wisdom, commonly referred to as the wisdom of crowds, denotes the empirical observation that the aggregated judgments of a diverse group of independent individuals often yield more accurate estimates, predictions, or solutions than those produced by solitary experts or smaller homogeneous groups. This phenomenon hinges on statistical principles: errors in individual assessments tend to cancel out when opinions are uncorrelated and drawn from varied perspectives, provided no single viewpoint dominates through influence or conformity.
The concept gained prominence through James Surowiecki's 2004 book The Wisdom of Crowds, which delineates four essential conditions for its realization: diversity of opinion to incorporate multiple informational signals, independence to prevent informational cascades, decentralization to leverage specialized local knowledge, and an effective mechanism for aggregating inputs such as averaging or voting. Empirical investigations, including numerical estimation tasks and complex survival scenarios, substantiate these prerequisites, demonstrating enhanced group accuracy with increasing size and heterogeneity under controlled conditions, though expertise can sometimes amplify rather than diminish the effect when properly harnessed. Cultural and opinion diversity further bolsters outcomes by mitigating uniform biases, as evidenced in studies linking variance in group judgments to predictive power. Despite successes in domains like prediction markets and forecasting, collective wisdom falters without rigorous safeguards against social influence, which fosters herding and amplifies errors, as seen in network models where conformity erodes informational diversity. Defining characteristics include its vulnerability to correlated errors, whether from shared misinformation or imitation, leading to collective folly in real-world applications such as financial bubbles or polarized electorates, underscoring that mere aggregation without causal filtering of inputs yields no inherent superiority. Notable achievements encompass improved forecasting accuracy in decentralized systems, yet controversies persist over its overapplication in deliberative settings, where independence is routinely compromised, prompting calls for hybrid approaches integrating expert curation with broad input.

Definition and Core Principles

Conceptual Foundations

The concept of collective wisdom refers to the capacity of aggregated individual judgments within a group to produce outcomes superior to those of isolated experts or subgroups, grounded in probabilistic mechanisms where independent errors average toward truth. This relies on the law of large numbers, whereby diverse estimates converge on accurate values when biases are uncorrelated and individual competence exceeds random chance. A foundational result is the Condorcet Jury Theorem, proposed by Nicolas de Condorcet in 1785, which establishes that for binary decisions, if each independent voter holds a correct opinion with probability p > 0.5, the majority vote's probability of correctness approaches 1 as group size grows indefinitely, assuming simple majority rule. This theorem underpins collective wisdom by demonstrating how statistical independence amplifies marginal individual accuracy into near-certainty at scale, though it assumes no systematic correlations in errors. Early empirical validation appeared in Francis Galton's 1907 analysis of 787 guesses at a country fair for the dressed weight of an ox (actual: 1,198 pounds), where the median estimate of 1,207 pounds erred by less than 1%, surpassing most individual attempts despite participants' varying expertise. Building on such observations, James Surowiecki formalized key enabling conditions in 2004: diversity of private information to minimize shared blind spots, independence to prevent informational cascades, decentralized judgment to incorporate local knowledge, and an impartial aggregation method like averaging or voting. These principles highlight that collective wisdom emerges not from group deliberation, which risks conformity, but from non-interactive synthesis of dispersed insights.
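The theorem's mechanics can be checked numerically. The sketch below (with an illustrative accuracy of p = 0.6, not a figure from any cited study) sums the binomial probability that a majority of n independent voters is correct:

```python
from math import comb

def majority_correct(p, n):
    """Probability that a majority of n independent voters (n odd),
    each correct with probability p, reaches the right decision."""
    k = n // 2 + 1  # smallest winning majority
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With p = 0.6, collective accuracy climbs toward certainty as n grows.
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(0.6, n), 4))
```

The monotone improvement depends entirely on the independence assumption; correlated voter errors break the binomial model the sum relies on.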

Key Conditions for Effective Collective Wisdom

In The Wisdom of Crowds (2004), James Surowiecki outlined four essential conditions for groups to produce wise collective judgments superior to individual experts: diversity of opinion, independence among participants, decentralization of information processing, and an effective mechanism for aggregating individual inputs. These conditions ensure that errors from individuals cancel out statistically when opinions are averaged or otherwise combined, drawing on the principle that uncorrelated estimates converge toward the true value under proper aggregation. Diversity of opinion requires participants to possess varied perspectives, backgrounds, and private information, preventing homogeneity that amplifies shared biases. Without it, groups replicate individual errors, as seen in homogeneous expert panels that underperform diverse amateurs in estimation tasks. Empirical tests, such as aggregating judgments from heterogeneous groups in estimation experiments, confirm that diversity reduces variance and improves accuracy by incorporating complementary knowledge fragments. Independence ensures that each person's judgment forms without influence from others, avoiding informational cascades where early opinions sway subsequent ones and homogenize the group. Violations, such as exposure to others' estimates in sequential estimation tasks, have been shown experimentally to undermine collective accuracy, with even mild social influence reducing performance below individual levels in groups of 144 participants. Maintaining independence, as in anonymous or simultaneous submissions, preserves the statistical benefits of error cancellation. Decentralization leverages localized knowledge and specialized insights from distributed participants rather than centralized control, allowing bottom-up integration of details inaccessible to any single authority. This condition fosters efficiency in complex systems, such as markets where traders aggregate dispersed signals into prices more accurate than individual forecasts. Centralized hierarchies, by contrast, filter out peripheral information, leading to poorer outcomes in adaptive environments.
Finally, a robust aggregation mechanism, such as averaging numerical estimates or voting for binary choices, must translate individual inputs into a collective output without distortion. Simple statistical methods suffice for quantitative problems, as demonstrated in classic jury theorems where majority voting converges to truth under independence and competence assumptions. Poor aggregation, such as arbitrarily weighting or discarding opinions, fails to harness the group's dispersed knowledge, underscoring the need for impartial combination rules. These conditions interact: lacking one undermines the others, as evidenced by market crashes where interdependence overrides diversity despite sophisticated aggregation tools.
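A minimal simulation illustrates the error-cancellation principle behind these conditions; the target value and noise level below are arbitrary stand-ins (loosely echoing the ox-weighing figures), not data from the cited experiments:

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 1198.0  # illustrative target value

# Each independent judge reports the truth plus zero-mean noise.
guesses = [TRUE_VALUE + random.gauss(0, 150) for _ in range(787)]

crowd_error = abs(statistics.mean(guesses) - TRUE_VALUE)
typical_individual_error = statistics.mean(abs(g - TRUE_VALUE) for g in guesses)

print(f"crowd error: {crowd_error:.1f}")
print(f"typical individual error: {typical_individual_error:.1f}")
```

Because the noise terms are uncorrelated, the average's error shrinks roughly with the square root of the group size, while each individual's expected error stays fixed.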

Historical Development

Ancient and Pre-Modern Roots

The concept of collective wisdom, wherein aggregated judgments of a group surpass those of individuals under suitable conditions, finds early philosophical articulation in ancient Greece. Aristotle, in his Politics (circa 350 BCE), argued that the many, even if each lacks excellence, can collectively form better judgments than a single expert or elite few, analogous to a communal feast where diverse contributions yield a superior whole. He posited this in Book III, Chapter 11, emphasizing that ordinary citizens, when assembled, pool sensory perceptions and partial insights into a comprehensive judgment, provided they are not wholly corrupt. This view defended participatory elements in governance over pure rule by elites, highlighting epistemic benefits from diversity in perspective rather than mere numbers. In practice, ancient Athenian democracy exemplified such aggregation through the ekklesia, where up to 6,000 male citizens voted on policies after debate, often yielding decisions reflective of broad experiential input over singular authority, as seen in pivotal choices like the Sicilian Expedition (415 BCE), though outcomes varied. Roman institutions further embodied collective judgment via the Senate's advisory role and popular assemblies (comitia), where assembled citizens voted on magistracies and laws, integrating plebeian and patrician perspectives to mitigate individual biases, a system enduring from the Republic's founding in 509 BCE. Pre-modern Europe extended these roots through medieval guilds, where craft and merchant associations employed voting for internal governance, enabling collective oversight of apprenticeships, quality standards, and disputes without deferring to a sole master, thus harnessing dispersed practical knowledge for organizational decision-making. This paradigm contrasted with feudal hierarchies, fostering proto-democratic mechanisms in thirteenth-century urban centers, where charters formalized equal voices among members to counter arbitrary rule. 
Such structures prefigured modern aggregation by prioritizing verifiable group judgment over charismatic authority, though participation was limited to enfranchised artisans.

19th-20th Century Precursors

In the mid-19th century, Belgian astronomer and statistician Adolphe Quetelet pioneered the application of probabilistic and averaging techniques to human attributes, laying early groundwork for aggregating collective data to uncover underlying patterns. In his 1835 treatise Sur l'homme et le développement de ses facultés, ou Essai de physique sociale, Quetelet analyzed large datasets on physical traits like height and weight, as well as moral statistics such as crime rates, to define the "average man" (l'homme moyen) as a composite representing the central tendency of a population. He contended that this figure embodied the "normal" or ideal type, with individual deviations treated as errors akin to observational inaccuracies in astronomy, thereby suggesting that societal truths emerge more reliably from massed data than from singular observations. Quetelet's framework influenced the shift toward viewing statistical averages as tools for social prediction and law-like regularities, predating direct empirical tests of crowd estimation but establishing the conceptual basis for deriving wisdom from dispersed individual measurements. His emphasis on large-scale aggregation to mitigate individual variability resonated in emerging fields like sociology, though critics later noted risks of overgeneralizing averages to prescriptive norms. A pivotal empirical demonstration occurred in early 20th-century England through Francis Galton's inadvertent experiment on crowd judgment. At the 1906 West of England Fat Stock and Poultry Exhibition in Plymouth, 787 attendees submitted independent estimates of a fattened ox's dressed weight (after slaughter and organ removal), which was verified at 1,198 pounds. The mean of these guesses equaled 1,197 pounds, with a median of 1,207 pounds, revealing an accuracy far exceeding most individual entries despite the crowd's diversity in expertise, from farmers to amateurs. 
Galton, a statistician and proponent of eugenics, published the analysis as "Vox Populi" in Nature on March 7, 1907, interpreting the result as evidence that "many heads" could yield judgments rivaling expert opinion under conditions of independence and minimal systematic bias. He cautioned, however, that real-world applications, like democratic voting, often falter due to correlated errors, manipulation, or pressure from vocal minorities, thus qualifying his endorsement of collective mechanisms. This work spurred subsequent replications in psychological laboratories during the following decades, confirming aggregation's efficacy for quantitative estimates while highlighting prerequisites like cognitive diversity and non-interaction.

Post-2000 Popularization

The publication of James Surowiecki's The Wisdom of Crowds in June 2004 marked a pivotal moment in popularizing collective wisdom, synthesizing empirical examples to argue that diverse groups, when aggregating independent judgments, often outperform individual experts in estimation and prediction tasks. Surowiecki outlined four key conditions (diversity of opinion, independence of judgment, decentralized knowledge formation, and an effective aggregation mechanism) for crowds to yield superior results, drawing on cases like the 1906 Plymouth fair, where the average guess of roughly 800 attendees for an ox's dressed weight, 1,197 pounds, fell just one pound short of the actual 1,198 pounds. The book, reviewed positively for its accessible exploration of group rationality, influenced decision-making in business and policy by challenging reliance on elite expertise. This framework spurred practical applications in the internet era, particularly through prediction markets, which operationalize collective wisdom by pricing contracts on future events to reflect aggregated probabilities. Platforms like Intrade, which peaked during the 2008 U.S. presidential election with millions in trading volume, demonstrated crowd accuracy in forecasting outcomes, often surpassing traditional polls by incentivizing information revelation via financial stakes. Corporate adoption followed, with firms such as Google and Hewlett-Packard deploying internal prediction markets in the mid-2000s to forecast sales, project timelines, and technological trends, yielding accuracies 20-30% better than individual forecasts in controlled tests. Online scalability further amplified the concept's reach, enabling large-scale experiments that validated crowd judgments in digital environments. A 2014 study involving over 3,000 participants found aggregated predictions of market movements outperformed small expert panels by 15-25% in accuracy, attributing gains to the diversity of non-professional participants. 
By the 2010s, integrations in innovation platforms, such as Hewlett-Packard's use of crowd-sourced forecasts for R&D prioritization, embedded collective wisdom into innovation processes, though results varied with adherence to Surowiecki's criteria. These developments shifted perceptions from skepticism toward pragmatic endorsement, evidenced by academic citations exceeding 10,000 for Surowiecki's thesis by 2020 and the proliferation of tools like PredictIt, launched in 2014 for event betting.

Theoretical Models

Wisdom of Crowds Framework

The Wisdom of Crowds framework, popularized by journalist James Surowiecki in his 2004 book of the same name, asserts that diverse groups can generate judgments more accurate than those of individual experts, provided certain structural conditions enable the aggregation of independent information to cancel out errors through statistical averaging. This approach draws on probabilistic principles, such as the law of large numbers, where the variance of aggregated estimates decreases with group size under conditions of independence, yielding results closer to the true value as random biases offset each other. Surowiecki illustrates the framework with historical examples, including statistician Francis Galton's 1906 observation at a county fair, where 787 participants guessed the dressed weight of an ox; the crowd's median estimate of 1,207 pounds deviated by less than 1% from the actual 1,198 pounds, demonstrating emergent accuracy without coordination. Central to the framework are four interdependent conditions required for effective collective judgment. First, diversity of opinion ensures that group members contribute unique information or perspectives, reducing systematic blind spots; homogeneous groups, by contrast, amplify shared errors, as seen in cases where expert consensus fails due to uniform training. Second, independence mandates that individuals form opinions without influence from peers, mitigating social pressures like conformity or herding, which Surowiecki links to phenomena such as speculative bubbles where public sentiment overrides private signals. Third, decentralization allows participants to specialize in localized or domain-specific knowledge, fostering efficiency without top-down control; this mirrors market mechanisms where prices emerge from dispersed traders' inputs rather than centralized planning. The fourth condition, aggregation, requires a reliable mechanism, such as averaging, voting, or market pricing, to synthesize individual inputs into a coherent output, transforming disparate judgments into a probabilistic estimate superior to any single contributor. 
Surowiecki emphasizes that violations of these conditions, such as excessive interdependence in deliberative groups, can invert the effect, producing "foolish" crowds; empirical tests, including estimation studies, support this by showing accuracy gains when independence preserves diversity. The framework's causal logic rests on error cancellation: diverse, independent signals aggregate to approximate truth via error reduction, but it assumes unbiased individual estimators and scales poorly with correlated noise from social influence. Applications span forecasting (e.g., aggregators averaging polls) to problem-solving, though Surowiecki cautions that real-world implementations must engineer these conditions deliberately, as natural crowds often lack them.
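The sensitivity to correlated noise can be made precise with the standard variance formula for the average of equally correlated estimates (a textbook statistical result, not specific to Surowiecki):

```python
def var_of_mean(sigma2: float, n: int, rho: float) -> float:
    """Variance of the mean of n estimates, each with variance sigma2
    and pairwise error correlation rho: sigma2/n + rho*sigma2*(n-1)/n."""
    return sigma2 / n + rho * sigma2 * (n - 1) / n

# Independent errors (rho = 0) vanish as the crowd grows...
print(var_of_mean(1.0, 10_000, 0.0))
# ...but with rho = 0.3 the variance floors near 0.3*sigma2, however large n gets.
print(var_of_mean(1.0, 10_000, 0.3))
```

The irreducible floor rho*sigma2 is the quantitative sense in which shared bias defeats aggregation: adding more correlated judges stops helping.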

Distinctions from Collective Intelligence and Groupthink

Collective wisdom, as conceptualized in frameworks like Surowiecki's, emphasizes the statistical aggregation of independent, diverse individual judgments to yield estimates superior to those of solitary experts, relying on mechanisms such as averaging or majority voting under conditions of informational independence and cognitive diversity. In contrast, collective intelligence encompasses broader interactive processes where groups collaboratively generate, share, and refine knowledge through deliberation, networks, or distributed problem-solving, often producing emergent solutions beyond mere aggregation, as seen in open-source projects or Wikipedia's editorial dynamics. This distinction highlights that collective wisdom prioritizes non-interactive inputs to harness statistical error cancellation (rooted in the Condorcet jury theorem, which mathematically demonstrates that diverse, independent voters with better-than-chance accuracy converge on correct majority outcomes) while collective intelligence risks amplifying correlated errors through social influence unless structured to preserve independence. Groupthink, defined by Irving Janis in 1972 as a mode of thinking where group members prioritize consensus and cohesion over critical appraisal, leading to defective decision-making through symptoms like illusion of invulnerability and suppression of dissent, represents the antithesis of effective collective wisdom. Collective wisdom avoids groupthink by enforcing independence, preventing members from influencing each other prior to aggregation, and fostering decentralization, where local knowledge informs judgments without hierarchical override, as evidenced in experiments where interdependent groups underperform independent ones by up to 20-30% in estimation accuracy. 
James Surowiecki's 2004 analysis specifies four conditions to evade groupthink: diversity of information, independence from others' opinions, decentralized decision-making, and a reliable aggregation method, with failures in these, such as homogeneous expert panels, mirroring groupthink's pitfalls, as in the 1986 Challenger disaster, where NASA engineers' warnings were dismissed amid cohesion pressures. Empirical distinctions underscore causal mechanisms: collective wisdom thrives on uncorrelated errors canceling out in large samples, per Galton's 1907 ox-weighing demonstration, where 787 villagers' guesses averaged within 1% of the true 1,198-pound weight despite individual deviations. Collective intelligence, however, can devolve into groupthink-like echo chambers if social influence homogenizes views, as shown in studies where interactive groups exhibit biases reducing forecast accuracy by 15-25% compared to independent aggregation. Thus, while both concepts leverage group capabilities, collective wisdom's success hinges on minimizing social pressures that fuel groupthink, prioritizing empirical aggregation over collaborative harmony.

Empirical Evidence Supporting Collective Wisdom

Classic Experiments and Predictions

In 1907, Francis Galton examined data from a 1906 weight-judging contest at the West of England Fat Stock and Poultry Exhibition in Plymouth, England, where fairgoers estimated what a live bullock would weigh once slaughtered and dressed, minus its head, feet, and entrails. The true weight measured 1,198 pounds, while the median of 787 valid entries, after excluding duplicates and incomplete submissions, came to 1,207 pounds, an error of just 9 pounds or 0.8%. The mean estimate, at 1,197 pounds, approximated the actual value even more closely, with Galton noting in his article "Vox Populi" that individual guesses varied widely yet canceled out errors when averaged, revealing an emergent accuracy unattainable by most participants alone. This finding underscored the potential for collective estimation to outperform solitary expertise under conditions of independence and diversity, as Galton argued the crowd's judgments reflected a "democratic averaging" superior to selecting the single best guess. Subsequent analyses confirmed the mean's proximity, attributing it to statistical aggregation rather than coincidence, though Galton himself expressed reservations about applying it broadly to complex social judgments without safeguards against correlated bias. Replications of similar estimation tasks have consistently validated Galton's observation. For instance, in classroom experiments involving guesses of jelly beans in a sealed jar, group averages often achieve higher accuracy than nearly all individual estimates; one conducted by finance professor Jack Treynor with 56 students produced a collective guess of 871 beans for a jar holding exactly 850, surpassing 55 of the 56 solo predictions. Such trials, typically using opaque containers to ensure independent judgments, demonstrate that errors of over- and underestimation tend to balance, yielding predictions within 1-2% of reality across hundreds of informal and controlled settings. 
These experiments extend to predictive scenarios beyond static estimation, such as judging lengths or volumes, where aggregated responses from non-experts rival professional appraisals when participants lack shared biases or social influence. Early 20th-century analogs, including public guesses at agricultural fairs for crop yields or animal sizes, similarly showed crowd means deviating minimally from verified outcomes, supporting the causal mechanism of error cancellation in diverse groups. However, accuracy hinges on avoiding social influence or herding, as deviations arise when judgments correlate, a limitation Galton implicitly highlighted by discarding non-independent entries.
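A quick simulation of the jelly-bean setup (synthetic guesses, not Treynor's actual classroom data) shows how the aggregate typically outranks most of the individuals it is built from:

```python
import random
import statistics

random.seed(7)
TRUE_COUNT = 850  # beans in the jar, as in the Treynor example

# 56 synthetic guesses scattered on both sides of the truth.
guesses = [max(1, round(random.gauss(TRUE_COUNT, 200))) for _ in range(56)]

crowd_error = abs(statistics.mean(guesses) - TRUE_COUNT)
outperformed = sum(abs(g - TRUE_COUNT) > crowd_error for g in guesses)
print(f"crowd average beats {outperformed} of {len(guesses)} individual guesses")
```

With symmetric, independent noise the crowd mean routinely lands in the top few percent of the ranking, even though no single participant is especially accurate.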

Quantitative Studies on Aggregation Mechanisms

In 1907, Francis Galton analyzed 787 estimates of an ox's dressed weight submitted at a county fair, finding that the mean of 1,197 pounds deviated by just 1 pound from the true weight of 1,198 pounds, while the median of 1,207 pounds was accurate within approximately 1%. This early quantitative evidence illustrated the potential of simple averaging to aggregate dispersed judgments into a highly accurate estimate, provided guesses were independent and unbiased, with the crowd's accuracy aligning with statistical expectations for large samples. Subsequent lab experiments have compared aggregation mechanisms across estimation tasks with asymmetric information. In a 2019 study involving 144 participants predicting jar values over multiple rounds, continuous double auction (CDA) markets, using midpoint prices, yielded the lowest log deviations from true values compared to means, geometric means, medians, and call auctions, with statistically significant differences (p < 0.05 via t-tests) attributed to incentive-compatible trading revealing private information more effectively than static pooling. Among non-market methods, the median outperformed arithmetic and geometric means, particularly in early periods before learning effects diminished differences. Similarly, a 2014 analysis of 1,233 forecasters on nearly 200 events demonstrated that weighting judgments by a model's identification of positive contributors (expertise proxies) reduced errors beyond simple unweighted averaging or confidence weighting, by excluding negative influencers and leveraging consistent outperformers. For probabilistic judgments, opinion pooling methods differ in their handling of dependence and extremes. Linear pools, which arithmetically average probabilities, preserve marginals but can produce overconfident aggregates if individuals assign zero probabilities to true outcomes; logarithmic pools, multiplying densities (equivalent to geometric averaging of probabilities), mitigate this by avoiding zeros and emphasizing confident forecasts, with empirical reviews showing superior calibration in expert elicitation tasks where linear pools underperform on tail events. 
Confidence weighting in aggregation yields mixed quantitative results: a 2021 perceptual study found confidence-weighted averages of decision variables improved accuracy across difficulty levels via reduced variance, yet a 2020 survival task experiment with small groups (4-5 members) reported no significant error reduction (individual error 46.99 versus unweighted group error 36.88; confidence-weighted 41.12, p = 0.43), suggesting benefits emerge only in larger or simulated groups with tuned weighting sensitivity. These findings underscore that optimal mechanisms depend on task type, with markets excelling in incentive alignment and weighted or static pools varying by error structure.
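The contrast between pooling rules can be illustrated for a binary event. The forecast values below are invented for illustration, and the logarithmic pool is implemented as an equal-weight normalized geometric mean:

```python
import math

def linear_pool(probs):
    """Arithmetic average of probability forecasts."""
    return sum(probs) / len(probs)

def log_pool(probs):
    """Equal-weight logarithmic pool for a binary event:
    normalized geometric mean of the 'yes' and 'no' probabilities."""
    n = len(probs)
    yes = math.prod(probs) ** (1 / n)
    no = math.prod(1 - p for p in probs) ** (1 / n)
    return yes / (yes + no)

forecasts = [0.9, 0.8, 0.05]
print(round(linear_pool(forecasts), 3))  # 0.583
print(round(log_pool(forecasts), 3))

# A single zero forecast vetoes the event under log pooling, which is
# why inputs must be kept away from the extremes.
print(log_pool([0.9, 0.8, 0.0]))  # 0.0
```

The zero-veto behavior is the flip side of the log pool's emphasis on confident forecasts: it rewards agreement among extreme probabilities but is annihilated by a single categorical dissent.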

Failures and Limitations of Collective Wisdom

Structural Conditions Leading to Errors

Collective wisdom emerges from the statistical aggregation of independent, diverse judgments, but structural conditions that violate key prerequisites, such as independence, diversity, decentralization, and effective aggregation, can systematically amplify errors rather than cancel them out. When individual errors are correlated due to shared biases or influences, the crowd's estimate deviates further from truth, as the variance in judgments shrinks without reducing systematic bias. For instance, in experimental settings, even minimal social influence, like observing others' estimates before submitting one's own, reduces the range of responses and aligns them toward the perceived consensus, thereby undermining accuracy. Lack of diversity in participant backgrounds or perspectives constitutes a primary structural flaw, as homogeneous groups reinforce collective blind spots rather than providing corrective variance. Peer-reviewed analyses show that demographic diversity alone does not guarantee improved judgments; without underlying cognitive or informational variety, such groups perform no better than uniform ones and may entrench errors through conformity. In decentralized systems like prediction markets, centralization of influence, where a single prominent source dominates signals, creates herding cascades, propagating initial mistakes across the crowd. Decentralized structures mitigate errors by allowing local knowledge to surface without top-down distortion, yet over-centralization or poor incentive alignment leads to failures by suppressing dissent and fostering imitation. Empirical models demonstrate that networks with high clustering or echo chambers exacerbate this, as repeated interactions amplify correlated errors, turning potential wisdom into "madness of crowds." Inadequate aggregation mechanisms, such as simple averaging without weighting for expertise or confidence, further compound issues by overweighting noisy inputs from uninformed participants. These conditions interact causally: dependence erodes diversity, which in turn hampers effective aggregation, resulting in outcomes worse than individual averages in controlled trials. 
For example, deliberation in non-anonymous groups often fails to harness private information, leading to inefficient equilibria where accessible but flawed public information prevails.
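The interaction of dependence and diversity loss can be sketched in a toy sequential model (all parameters invented), in which each later participant blends a private signal with the running average of earlier public guesses:

```python
import random
import statistics

random.seed(3)
TRUE_VALUE, N, NOISE, W = 100.0, 200, 30.0, 0.8  # W = weight placed on the crowd

def signal():
    return TRUE_VALUE + random.gauss(0, NOISE)

# Independent condition: everyone reports only their private signal.
independent = [signal() for _ in range(N)]

# Herding condition: later participants mostly copy the running average.
herded = [signal()]
for _ in range(N - 1):
    herded.append(W * statistics.mean(herded) + (1 - W) * signal())

spread_indep = statistics.stdev(independent)
spread_herded = statistics.stdev(herded)
print(f"spread without influence: {spread_indep:.1f}")
print(f"spread with herding:      {spread_herded:.1f}")
```

The herded guesses cluster far more tightly, shrinking the diversity that aggregation needs, without any guarantee that they cluster around the truth: the cluster's location is dominated by the earliest, possibly erroneous, guesses.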

Empirical Examples of Crowd Misjudgments

In financial markets, the dot-com bubble from 1995 to 2000 demonstrated collective investor misjudgment, as crowds drove valuations of internet companies to unsustainable levels, often based on hype and lack of profitability, resulting in a NASDAQ Composite index peak of 5,048.62 on March 10, 2000, followed by a 78% decline by October 2002 that wiped out approximately $5 trillion in market value. Election polling aggregations provide another case, as in the 2016 U.S. presidential election, where surveys representing crowd opinions showed Hillary Clinton leading by margins of 3 to 5 percentage points nationally in the final weeks, yet Donald Trump won the Electoral College with victories in pivotal states like Wisconsin, Michigan, and Pennsylvania due to underestimation of non-college-educated white voters and shy Trump supporters. Economic forecasting by aggregated expert crowds has also faltered, exemplified by Bloomberg's consensus surveys of economists on U.S. payroll growth from 2010 to 2016, which erred by an average of 16,000 jobs monthly, exceeded 50,000 jobs in error more than 50% of the time, and missed by over 100,000 jobs in about 25% of releases, including directional failures like the July 2016 estimate; these inaccuracies often stemmed from anchoring on prior data and overcorrections, such as biasing subsequent predictions downward after upward misses. Experimental evidence further illustrates failures under correlated biases, as in a study where participants estimated the value of a $60 investment after 30 years at 10% annual compound growth (actual: $1,047), yielding a median of $360 due to widespread underappreciation of compounding effects, with insufficient high estimates to offset the skew.
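The compounding arithmetic behind that last task is easy to verify; the figures come directly from the study described above:

```python
# $60 invested for 30 years at 10% annual compound growth.
principal, rate, years = 60.0, 0.10, 30
future_value = principal * (1 + rate) ** years
print(round(future_value, 2))  # 1046.96, about $1,047 (median guess in the study: $360)
```

The median guess of $360 corresponds roughly to linear rather than exponential extrapolation, which is why the individual errors all fall on the same side of the truth and cannot cancel.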

Criticisms and Controversial Aspects

Cognitive and Social Biases Undermining Groups

Groups exhibit confirmation bias when members selectively seek or interpret information that aligns with shared preconceptions, thereby amplifying errors rather than correcting them through diverse inputs. This bias undermines the independence required for effective aggregation in collective wisdom, as individuals suppress contradictory evidence to maintain internal harmony. Empirical studies demonstrate that confirmation bias persists in group settings, leading to overvaluation of supportive arguments and undervaluation of alternatives, which erodes decision quality. Groupthink, characterized by excessive cohesion that prioritizes consensus over critical evaluation, further impairs collective judgments by fostering illusions of unanimity and self-censorship among dissenters. Originating from Irving Janis's analysis of historical policy failures like the Bay of Pigs invasion in 1961, groupthink manifests when cohesive groups discount external risks and alternative viewpoints, resulting in suboptimal outcomes. Experimental evidence confirms that such dynamics reduce the exploration of unshared information, contrasting with the decentralized evaluation needed for wisdom-of-crowds effects. Social pressures exacerbate these issues, as individuals adjust estimates to match perceived group norms, diminishing viewpoint diversity without enhancing accuracy. A 2011 study found that even minimal social influence in estimation tasks caused herding toward incorrect averages, directly countering the statistical benefits of independent judgments. This effect is pronounced in sequential decision environments, where early opinions disproportionately sway later ones. Information cascades occur when participants abandon private information in favor of inferred public signals from prior choices, propagating errors across the group. 
Theoretical models show that once a cascade begins, often from a few initial errors, subsequent members ignore their own data, leading to uniform but inaccurate collective outputs, as observed in behavioral experiments on sequential judgments. Such cascades explain real-world misjudgments, like market bubbles, where rational individual signals fail to aggregate due to imitative behavior. Overconfidence bias compounds these social dynamics, with groups exhibiting inflated certainty in aggregated estimates that masks underlying errors. Professionals in high-stakes domains display overconfidence that correlates with reduced information processing, per reviews of decision-making literature. Mitigation requires mechanisms preserving independence and diversity, as biased interactions otherwise dominate, per analyses of group decision frameworks.

Ideological Polarization and Media Influence

Ideological polarization undermines collective wisdom by homogenizing opinions within subgroups, thereby diminishing the cognitive diversity required for accurate aggregation of judgments. When individuals cluster into ideologically aligned groups, their estimates or beliefs exhibit high correlation in errors, which prevents the statistical cancellation of biases inherent in the wisdom-of-crowds mechanism. A 2019 study on partisan crowds demonstrated that in homogeneous networks resembling echo chambers, biased adjustments during information exchange drive group beliefs toward greater extremism rather than truth, as correlations between individual biases amplify deviations from reality. This effect is particularly pronounced in predictive tasks, where polarized groups fail to outperform independent individuals due to the absence of viewpoint diversity. Echo chambers, facilitated by social sorting and algorithmic recommendations, further exacerbate these failures by limiting exposure to contradictory evidence, fostering through mechanisms like persuasive argumentation and social comparison. Experimental research shows that among like-minded participants intensifies initial leanings, leading to riskier or more extreme collective decisions that diverge from empirical accuracy. In online contexts, such as debates, reduces the substantive quality of discourse, as participants prioritize in-group validation over evidence-based refinement, resulting in poorer group judgments on factual matters. Concerns extend to real-world institutions like electorates and juries, where rising correlates with diminished accuracy in aggregate , as diverse inputs necessary for are supplanted by shared misconceptions. Media influence compounds these issues by disseminating correlated or skewed narratives that align with ideologies, eroding the of judgments across the . 
Selective exposure drives consumers to outlets reinforcing their priors, while biased framing, often systematically left-leaning in mainstream journalism, introduces uniform errors that propagate through social networks, undermining aggregation benefits. For instance, reliance on shared news sources within groups leads to lower accuracy in veracity assessments, as correlated coverage amplifies blind spots rather than providing diverse signals; independent local sources, by contrast, enhance performance by introducing variability. Overconfidence in media-derived judgments further entrenches errors, particularly for low-quality news, as audiences fail to calibrate amid polarized coverage. Interventions like structured evidence review can mitigate bias in crowd judgments, but without them, media-driven echo effects systematically impair collective accuracy in polarized environments.
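The statistical point running through this section, that correlated errors do not cancel while independent ones largely do, can be demonstrated numerically. In the toy model below (all noise scales and the `shared` knob are invented for illustration), each crowd member's error is a shared group bias plus private noise:

```python
import random
import statistics

def crowd_mae(n=100, shared=0.0, trials=2000, seed=7):
    """Mean absolute error of an n-person crowd average when each
    member's error is a shared group bias (the correlated component,
    scaled by `shared`) plus independent private noise."""
    rng = random.Random(seed)
    truth = 50.0
    errors = []
    for _ in range(trials):
        bias = shared * rng.gauss(0, 10)        # error common to all members
        crowd = [truth + bias + rng.gauss(0, 10) for _ in range(n)]
        errors.append(abs(statistics.fmean(crowd) - truth))
    return statistics.fmean(errors)

independent = crowd_mae(shared=0.0)   # private errors largely cancel
correlated = crowd_mae(shared=1.0)    # the shared bias never cancels
```

With a hundred members, the independent crowd's average error is an order of magnitude smaller than an individual's, while the correlated crowd's error is dominated by the shared bias no matter how large the group grows.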

Practical Applications

Economic and Predictive Tools

Prediction markets represent a key predictive tool that operationalizes collective wisdom by enabling participants to trade contracts resolving to a fixed payout based on verifiable future outcomes, such as election results or economic indicators, with market prices serving as aggregated probability estimates. These markets incentivize truthful revelation of information through financial stakes, drawing on diverse participant knowledge to outperform isolated judgments. Empirical analyses confirm their efficacy; for instance, the Iowa Electronic Markets, launched in 1988 as real-money futures platforms for political events, generated predictions closer to actual outcomes than 964 contemporaneous polls in 74% of cases across five presidential election cycles since 1988. In economic forecasting, prediction markets have been applied to anticipate indicators like GDP growth, inflation rates, and corporate earnings, often integrating real-time data adjustments. Market odds on geopolitical events, such as the likelihood of finding weapons of mass destruction in Iraq, have aligned closely with resolved probabilities, demonstrating aggregation of dispersed information superior to isolated expert judgment in select cases. Corporate implementations, including internal markets at firms like Google, have tracked information flows for decisions on product viability and sales projections, with trading volumes correlating to predictive precision. Field experiments comparing prediction markets to methods like the Delphi technique found equivalent long-term accuracy for economic variables, underscoring their robustness without requiring expert panels. Beyond markets, statistical aggregation of forecaster inputs provides economic tools for predictive enhancement, particularly when weighting schemes mitigate biases. In surveys of economists forecasting indicators like inflation or output gaps, the median forecast outperforms the mean, yielding higher odds of surpassing any single participant's accuracy by reducing sensitivity to extreme errors.
This approach leverages diversity among forecasters to distill collective signal from noise, as validated in analyses of macroeconomic survey forecasts where larger respondent pools improve average precision asymptotically. Such techniques have informed policy simulations and risk assessments, though they assume minimal herding, which empirical reviews of earnings forecasts identify as a potential distortion when participants overweight consensus views.
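The median's advantage over the mean noted above is easy to see in a toy example; the figures below are invented for illustration, not drawn from any actual survey:

```python
import statistics

# Hypothetical inflation forecasts (%) from five economists, one far off.
forecasts = [2.1, 2.3, 2.2, 2.4, 9.0]
mean_forecast = statistics.fmean(forecasts)     # pulled toward the outlier
median_forecast = statistics.median(forecasts)  # unaffected by the outlier
realized = 2.25                                 # hypothetical realized value
```

A single extreme forecast drags the mean well away from the consensus, while the median stays with the central cluster, which is why combining survey responses by median is more robust to outliers.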

Technological and Organizational Implementations

Prediction markets serve as a key technological implementation of collective wisdom, enabling participants to trade contracts tied to future event outcomes, with market prices reflecting aggregated probabilistic judgments. These platforms incentivize the revelation of private information through financial stakes, often yielding forecasts more accurate than those from experts or polls. For instance, Hewlett-Packard employed internal prediction markets to forecast quarterly printer sales, demonstrating improved accuracy over conventional methods. Major corporations have integrated such markets organizationally to support strategic decisions. Google operated internal prediction markets from around 2005, allowing employees to bet on metrics like product launch dates and demand forecasts, though participation waned over time due to low trading volumes and cultural shifts. Microsoft developed the Prediction Lab platform to scale user predictions on current events, engaging millions in forecasting tasks to refine collective insights for research and product development. These implementations highlight how markets can harness employee diversity for internal forecasting, provided incentives align with truthful reporting and participation is encouraged. Crowdsourcing platforms extend collective wisdom by distributing complex problems to distributed solvers, rewarding successful contributions. InnoCentive, launched in 2001, connects organizations with a network exceeding 375,000 global experts as of recent operations, hosting R&D challenges that have solved issues in areas like battery cooling and control. This model reduces internal R&D costs by leveraging external collective expertise, with solvers often from unrelated fields providing novel solutions unattainable through siloed teams. Online forecasting communities aggregate crowd predictions on geopolitical, scientific, and technological questions using statistical models to weight forecaster accuracy, producing community medians that capture emergent consensus.
One such community, established in 2018, has tracked thousands of questions, with its aggregated forecasts outperforming superforecasters in certain domains by fostering iterative updates and debate among participants. Organizations adopt similar tools for private instances, transforming raw predictions into decision-support probabilities on risks and opportunities.
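Platforms of this kind typically weight forecasters by past accuracy before aggregating. The sketch below illustrates one simple such scheme, inverse-Brier-score weighting, with invented track records; it is an illustrative choice, not any named platform's actual method:

```python
def brier(history):
    """Mean Brier score over (forecast probability, outcome 0/1) pairs;
    lower is better calibrated."""
    return sum((p - o) ** 2 for p, o in history) / len(history)

def weighted_crowd_prob(current_probs, histories):
    """Aggregate current probability forecasts, weighting each
    forecaster by the inverse of their historical Brier score,
    so a better track record earns a larger say."""
    weights = [1.0 / (brier(h) + 1e-6) for h in histories]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, current_probs)) / total

# Hypothetical track records for two forecasters.
good = [(0.9, 1), (0.1, 0), (0.8, 1)]   # well calibrated
bad = [(0.2, 1), (0.9, 0), (0.4, 1)]    # poorly calibrated
agg = weighted_crowd_prob([0.8, 0.3], [good, bad])
```

Here the aggregate lands near the well-calibrated forecaster's 0.8 rather than splitting the difference, showing how track-record weighting filters noise from the crowd signal.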

Recent Developments and Ongoing Research

Studies from 2020-2025

A 2020 study using the moon survival task with undergraduate groups demonstrated that both interactive group decision-making and non-interactive aggregation methods, such as averaging and a confidence-weighted Borda count, outperformed individual judgments in ranking survival items, with no significant differences among the aggregation approaches. The findings highlighted the role of group size and weighting sensitivity in achieving accurate collective rankings, suggesting aggregation's robustness in complex information-integration scenarios. Research in 2021 on perceptual decision-making tasks revealed that aggregation, via averaging individual estimates, enhances accuracy across varying difficulty levels and diverse populations, including neurotypical adults, those with autism spectrum traits, and children as young as six years old. This effect persisted even when individual performance was near chance, underscoring aggregation's value in low-signal environments without requiring discussion. The same year, an investigation into expertise's impact found that while aggregating across individuals (the outer crowd) consistently improved numerosity accuracy, training individuals to become experts reduced variance in their repeated judgments, thereby diminishing benefits from within-person averaging (the inner crowd). This implies that expertise homogenizes internal variability, favoring reliance on diverse external crowds over solo deliberation for optimal outcomes. Analytic modeling in 2021 showed social influence's ambiguous effects on collective accuracy: it can enhance the wisdom of crowds only when initial group error is sufficiently high, but typically reduces diversity and worsens predictions by promoting convergence on suboptimal opinions, drawbacks that weaken when individuals weight their own information more heavily. In forecasting, a 2023 analysis of multi-year competitions indicated that small teams of elite forecasters produce more accurate aggregate forecasts than large non-elite crowds or prediction markets, attributing the superiority to selective expertise rather than sheer volume.
Similarly, 2024 evaluations of crowd-prediction platforms confirmed that their skill surpasses random-walk benchmarks for exchange rates, though aggregation mechanisms must account for forecaster heterogeneity to maximize utility. A study on large language model (LLM) ensembles found that simple averaging of predictions from multiple LLMs rivals or exceeds human crowd accuracy on diverse tasks, including forecasts of future events, suggesting AI-augmented aggregation as a scalable alternative to human-only collectives. These results point to hybrid human-machine systems potentially amplifying collective wisdom while addressing human limitations like bias and fatigue.
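The inner-versus-outer-crowd contrast from the 2021 expertise study can be mimicked in a toy simulation. The assumption, made solely for illustration, is that each person carries a fixed idiosyncratic bias plus trial-to-trial noise (all scales here are arbitrary):

```python
import random
import statistics

def mae_of_average(sample_k, trials=3000, k=8, seed=11):
    """MAE of the average of k estimates produced by sample_k
    (the true value is 0, so an estimate's error is the estimate)."""
    rng = random.Random(seed)
    errs = []
    for _ in range(trials):
        person_bias = rng.gauss(0, 5)           # one person's fixed bias
        errs.append(abs(statistics.fmean(sample_k(rng, person_bias, k))))
    return statistics.fmean(errs)

# Inner crowd: k repeated guesses from one person share that person's bias.
inner = mae_of_average(lambda rng, b, k: [b + rng.gauss(0, 5) for _ in range(k)])
# Outer crowd: k different people, each with an independent bias.
outer = mae_of_average(lambda rng, b, k: [rng.gauss(0, 5) + rng.gauss(0, 5)
                                          for _ in range(k)])
```

Averaging within one person cancels only the trial-to-trial noise, leaving the fixed bias intact, while averaging across people also cancels biases, which is why the outer crowd reliably beats the crowd within.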

Emerging Insights on Polarization and Machine Learning Interfaces

Recent research indicates that machine learning-driven recommendation systems on social platforms amplify political polarization, thereby distorting collective wisdom by prioritizing content that maximizes engagement over informational diversity. A 2023 algorithmic audit of Twitter's (now X) feed algorithm revealed that it disproportionately boosts divisive material, with right-leaning content receiving 5.87 times more amplification than left-leaning equivalents when interactions are controlled, leading to echo chambers that skew group-level perceptions and reduce the accuracy of crowd-sourced judgments on factual matters. Similarly, simulations of large language model agents in networked environments, published in January 2025, demonstrated emergent polarization akin to human societies, where initially neutral interactions evolve into clustered opinions, undermining aggregated intelligence unless diversity-enforcing mechanisms are imposed. In polarized settings, traditional wisdom-of-crowds methods like averaging opinions fail when groups stratify into ideological silos, as partisan biases cause divergent error patterns that do not cancel out. A 2022 study on polarized groups found that collective accuracy persists only in contexts where informational incentives align across divides, such as incentivized estimation tasks, but collapses in zero-sum debates; machine learning techniques, including models trained on historical judgments, can predict and mitigate these failures by weighting contributions based on past calibration rather than affiliation. Experiments on the "wisdom of partisan crowds" further showed that exposing individuals to aggregated partisan estimates reinforces biases, with Democrats and Republicans diverging further on politically charged facts, though debiasing algorithms that filter for evidential strength improved group accuracy by up to 20% in controlled trials. Emerging applications leverage machine learning interfaces to counteract polarization's erosive effects on collective intelligence.
For instance, 2023 projects on algorithmic amplification explore diversity-promoting feeds that reduce distortion in collective signals, such as by counterfactually simulating balanced exposures to enhance prediction markets' accuracy amid ideological fragmentation. A 2025 analysis argues that regulatory tweaks to recommendation engines, favoring cross-ideological links over engagement metrics, could diminish polarizing feedback loops, preserving crowd wisdom in democratic deliberations; empirical tests on synthetic populations confirmed that such interventions halved opinion clustering without sacrificing utility. These insights underscore causal pathways where unchecked ML interfaces exacerbate divides through selective exposure, yet targeted designs enable resilient aggregation, as validated in agent-based models from 2022 onward.
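The opinion-clustering dynamics these agent-based models describe can be sketched with a bounded-confidence model in the Hegselmann-Krause style, where widening each agent's exposure window plays the role of a diversity-promoting feed. The parameter values below are illustrative, not calibrated to any cited study:

```python
import statistics

def hk_step(opinions, eps):
    """One synchronous Hegselmann-Krause update: each agent adopts the
    mean of all opinions within distance eps of its own (self included)."""
    return [statistics.fmean([o for o in opinions if abs(o - x) <= eps])
            for x in opinions]

def count_clusters(opinions, tol=0.01):
    """Count groups of opinions separated by more than tol."""
    n, prev = 1, None
    for o in sorted(opinions):
        if prev is not None and o - prev > tol:
            n += 1
        prev = o
    return n

start = [i / 19 for i in range(20)]      # opinions spread evenly over [0, 1]
narrow, wide = start, start
for _ in range(60):
    narrow = hk_step(narrow, eps=0.10)   # narrow exposure window: echo chambers
    wide = hk_step(wide, eps=0.30)       # wider exposure: opinions merge
```

With the narrow window, opinions freeze into several separated clusters; with the wider window, the same starting population converges toward a single consensus, mirroring the claim that cross-ideological exposure reduces clustering.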

References

  1. [1]
    Wisdom of crowds and collective decision-making in a survival ... - NIH
    Oct 15, 2020 · The study's findings suggest that the wisdom of crowds could be applicable to complex problem-solving tasks, and interaction between group size ...
  2. [2]
    The wisdom of crowds: Why the many are smarter than the few and ...
    Surowiecki, J. (2004). The wisdom of crowds: Why the many are smarter than the few and how collective wisdom shapes business, economies, societies, and nations ...
  3. [3]
    How the wisdom of crowds, and of the crowd within, are affected by ...
    Feb 5, 2021 · We investigated the effect of expertise on the wisdom of crowds. Participants completed 60 trials of a numerical estimation task, ...
  4. [4]
    Wisdom of crowds and collective decision-making in a survival ...
    Oct 15, 2020 · This study investigates the performance and process of the wisdom of crowds through a survival situation task involving complex information integration.
  5. [5]
    Cultural diversity and wisdom of crowds are mutually beneficial and ...
    Aug 16, 2021 · These two phenomena are actually intertwined. The wisdom of crowds effect is contingent on some opinion diversity in the group, otherwise the ...
  6. [6]
    Network dynamics of social influence in the wisdom of crowds
    As a result, social influence is expected to undermine the wisdom of crowds. We present theoretical predictions and experimental findings demonstrating that, in ...
  7. [7]
    [2204.13610] How social influence affects the wisdom of crowds in ...
    Apr 28, 2022 · Abstract page for arXiv paper 2204.13610: How social influence affects the wisdom of crowds in influence networks. ... empirical evidence.
  8. [8]
    The Limits of Crowd Wisdom - Farnam Street
    Collectives can be just as stupid as any individual, and in important cases, stupider. The interesting question is whether it's possible to map out where the ...
  9. [9]
    The impact of incorrect social information on collective wisdom in ...
    Sep 9, 2020 · Our results suggest that incorrect information does not necessarily impair the collective wisdom of groups, and can even be used to dampen the negative effects ...
  10. [10]
    Some Microfoundations of Collective Wisdom (Chapter 3)
    Aug 5, 2012 · The logical statistical foundations for collective wisdom are well known. First, a straightforward mathematical calculation demonstrates ...
  11. [11]
    (PDF) Collective Wisdom: Old and New - ResearchGate
    Collective wisdom refers to the idea that large groups collectively make wiser decisions, problem-solving, innovation implementation, and predictions (Hamada, ...
  12. [12]
    Collective Wisdom - Cambridge University Press & Assessment
    As mentioned earlier, collective wisdom can be understood as an amplification of individuals' cognitive abilities, performing for individuals' wisdom what a ...
  13. [13]
    The Wisdom of (Little) Crowds | National Geographic
    Apr 22, 2014 · Many scientists have used Condorcet's idea (known as the jury theorem) as a launching pad for exploring collective decision-making. They've ...
  14. [14]
    The Real Wisdom of the Crowds | National Geographic
    Jan 31, 2013 · In 1907, Sir Francis Galton asked 787 villagers to guess the weight of an ox. None of them got the right answer, but when Galton averaged ...
  15. [15]
    Wisdom of Crowds: Definition, Theory, and Examples - Investopedia
    According to Surowiecki, wise crowds have several key characteristics: The crowd should be able to have a diversity of opinions. One person's opinion should ...
  16. [16]
    The Wisdom of Crowds | Summary, Quotes, FAQ, Audio - SoBrief
    Jan 22, 2025 · Conditions for success. For collective wisdom to emerge, certain conditions must be met: Diversity of opinion. Independence of thought.
  17. [17]
    [PDF] Summary of "The Wisdom of Crowds" by James Surowiecki
    For a crowd to be smart, it must satisfy four key conditions: 1. There must be diversity of opinion amongst the people. 2. Each person must come to their ...
  18. [18]
    The Wisdom of Crowds: Why the Many are Smarter than the Few
    Capturing the 'collective' wisdom best solves cognitive problems. Four conditions apply. There must be: (a) true diversity of opinions; (b) independence of ...
  19. [19]
    The Wisdom of Crowds: An Exploration into Collective Intelligence
    Oct 19, 2023 · For the crowd's collective judgment to be superior, he identified four necessary conditions: diversity of opinion, independence of members from ...
  20. [20]
    How social influence can undermine the wisdom of crowd effect
    May 16, 2011 · We demonstrate by experimental evidence (N = 144) that even mild social influence can undermine the wisdom of crowd effect in simple estimation tasks.
  21. [21]
    [PDF] Harnessing the Wisdom of Crowds - University of Notre Dame
    A randomized experiment offers clean evidence that the wisdom of crowds can be better harnessed by encouraging independent voices among participants. Ironically ...
  22. [22]
    The Wisdom of Crowds - Surowiecki
    ” There are four conditions that characterize wise crowds: diversity of opinion, independence, decentralization and aggregation. In the Wisdom of Crowds ...
  23. [23]
    [PDF] The wisdom of crowds - RMU.edu
    The wisdom of crowds: why the many are smarter than the few and how collective wisdom shapes business, economies, societies, and nations / James Surowiecki.
  24. [24]
    [Book Report] The Wisdom of Crowds: A Deep Dive into Collective ...
    Jun 15, 2024 · Surowiecki emphasises that for a group to be wise, it must satisfy three key conditions: diversity of opinion, independence of members from ...
  25. [25]
    [PDF] Wisdom of Crowds: Tests of the Theory of Collective Accuracy
    Jun 4, 2014 · theory of collective accuracy describes the conditions under which collective judgment is accurate. Empirical evidence for collective accuracy.
  26. [26]
    Wisdom of crowds benefits perceptual decision making across ...
    Jan 12, 2021 · We demonstrate that the weighted average of individual decision confidence/neural decision variables produces significantly better performance.
  27. [27]
    The ambigous role of social influence on the wisdom of crowds
    We derive conditions in which social influence can improve the wisdom of crowds. We show that most often social influence deteriorates the outcome.
  28. [28]
    Politics by Aristotle - The Internet Classics Archive
    Practical wisdom only is characteristic of the ruler: it would seem that all other virtues must equally belong to ruler and subject. The virtue of the ...
  29. [29]
    Democracy's Wisdom: An Aristotelian Middle Way for Collective ...
    Jan 30, 2013 · Aristotle's account of collective judgment suggests how diverse expertise might be aggregated by a group of democratic decision-makers ...
  30. [30]
    Aristotle: Politics | Internet Encyclopedia of Philosophy
    In his Politics, he describes the role that politics and the political community must play in bringing about the virtuous life in the citizenry.
  31. [31]
    [PDF] The Origins of Collective Decision Making - Ethical Politics
    The ethics of collective decision-making have deep cultural and historical roots. These ethical principles may even survive revolutions, and yet they do ...
  33. [33]
    Andy Blunden, The Origins of Collective Decision Making. Leiden ...
    Blunden proposes that the practice of majority decision making within organizations stems from medieval or early modern guilds. There were, to be sure, ancient ...
  34. [34]
    Adolphe Quetelet (1796-1874)--the average man and indices of ...
    Adolphe Quetelet (1796-1874) was a Belgian mathematician, astronomer and statistician, who developed a passionate interest in probability calculus that he ...
  35. [35]
    Adolphe Quetelet and the legacy of the “average man” in psychology.
    Mar 21, 2021 · Adolphe Quetelet was a Belgian polymath who aimed to advance aggregate-level statistical tools as a unifying framework for all scientific ...
  36. [36]
    Adolphe Quetelet and the Legacy of the “Average Man” in Psychology
    Adolphe Quetelet was a Belgian polymath who aimed to advance aggregate-level statistical tools as a unifying framework for all scientific disciplines.
  37. [37]
    New study improves 'crowd wisdom' estimates | Santa Fe Institute
    Apr 18, 2018 · In 1907, a statistician named Francis Galton recorded the entries from a weight-judging competition as people guessed the weight of an ox.
  38. [38]
    Forget the Wisdom of Crowds - MIT Technology Review
    Jul 14, 2014 · Way back in 1906, the English polymath Francis Galton visited a country fair in which 800 people took part in a contest to guess the weight ...
  39. [39]
    The Wisdom of Crowds - James Surowiecki - Google Books
    The Wisdom of Crowds ... In this fascinating book, New Yorker business columnist James Surowiecki explores a deceptively simple idea: Large groups of people are ...
  40. [40]
    'Wisdom of the crowd': The myths and realities - BBC
    Jul 7, 2014 · Yet there is some truth underpinning the idea that the masses can make more accurate collective judgements than expert individuals. So why ...
  41. [41]
    'The Wisdom of Crowds': Problem Solving Is a Team Sport
    May 22, 2004 · James Surowiecki says that people in large groups are better informed and more rational than any single member might be.
  42. [42]
    Harnessing the wisdom of crowds: Decision spaces for prediction ...
    This article examines the benefits of prediction markets and develops a framework that can be used to identify in which situations prediction markets can be ...
  43. [43]
    [PDF] Prediction Markets for Economic Forecasting
    winnowing of prediction market companies in the 2000s ... Figure 7: Prediction markets are generally more accurate than polls, even after removing known biases.
  44. [44]
    [PDF] Are crowds on the internet wiser than experts? The case of a stock ...
    Feb 19, 2014 · Abstract According to the ''Wisdom of Crowds'' phenomenon, a large crowd can perform better than smaller groups or few individuals.
  45. [45]
    The Wisdom of Crowds - ResearchGate
    To a large extent, the success of crowdsourcing can be attributed to the "wisdom of the crowds. " According to this principle, the judgment errors of different ...
  46. [46]
    The Wisdom of Crowds - by James Surowiecki - Derek Sivers
    Apr 16, 2008 · More overconfident when facing big problems than easy ones. The more important a decison, the less likely a cascade (fad/craze) is to take hold.
  47. [47]
    Did 800 people estimate the weight of an ox within 1% in 1906?
    Mar 4, 2016 · Statistician Francis Galton observed that the median guess, 1207 pounds, was accurate within 1% of the true weight of 1198 pounds.
  48. [48]
    Information aggregation and collective intelligence beyond the ...
    Apr 25, 2022 · When information is randomly distributed, simple averages tend to be quite accurate, as in the wisdom of crowds. Using smaller groups composed ...
  49. [49]
    Collective Intelligence vs. The Wisdom of Crowds - Pop Junctions
    Nov 26, 2006 · The Wisdom of Crowds model focuses on isolated inputs: the Collective Intelligence model focuses on the process of knowledge production. The ...
  50. [50]
    Democratic Reason (Chapter 11) - Collective Wisdom
    ... collective intelligence of the people and even, occasionally, their collective wisdom. ... The Wisdom of Crowds: Why the Many Are Smarter Than the Few. New ...
  51. [51]
    Groupthink versus The Wisdom of Crowds: The Social Epistemology ...
    Aug 7, 2025 · 2004. The wisdom of crowds: Why the many are. smarter than the few and how collective wisdom shapes business,. economies, societies and ...
  52. [52]
    Vox Populi | Nature
    Published: 07 March 1907. Vox Populi. FRANCIS GALTON. Nature volume 75, pages 450–451 (1907)Cite this article. 33k Accesses. 1057 Citations. 182 Altmetric.
  53. [53]
    [PDF] 450 NATURE - MIT
    According to the democratic principle of " one vote one value," the middlemost estimate expresses the vox populi, every other estimate being condemned as too ...
  54. [54]
    The Jelly Bean Experiment - The Wisdom of Crowds
    When finance professor Jack Treynor ran the experiment in his class with a jar that held 850 beans, the group estimate was 871. Only one of the fifty-six ...
  55. [55]
    Rescuing Collective Wisdom when the Average Group Opinion Is ...
    The total knowledge contained within a collective supersedes the knowledge of even its most intelligent member. Yet the collective knowledge will remain ...
  56. [56]
    [PDF] Studying the ``Wisdom of Crowds'' at Scale - Stanford University
    Over the past century, there have been dozens of studies that document this “wisdom of crowds” effect (Surowiecki. 2005). Simple aggregation—as in the case of ...
  57. [57]
    Revisiting Francis Galton's Forecasting Competition - Project Euclid
    This note reexamines the data from a weight-judging competition described in an article by Francis Galton published in 1907. Following the correction of ...
  58. [58]
    Aggregation mechanisms for crowd predictions
    Nov 9, 2019 · We find that prices from continuous double auction markets clearly outperform all alternative approaches for aggregating dispersed information.
  59. [59]
    Identifying Expertise to Extract the Wisdom of Crowds - PubsOnLine
    May 23, 2014 · Statistical aggregation is often used to combine multiple opinions within a group. Such aggregates outperform individuals, including experts ...
  60. [60]
    [PDF] The Case of Logarithmic Pooling - arXiv
    Oct 10, 2023 · Logarithmic pooling is particularly sensible in the context of calibrated experts because it takes confident forecasts “more seriously" as ...
  61. [61]
    Group decisions based on confidence weighted majority voting
    Mar 15, 2021 · We found that real groups can aggregate individual confidences in a way that matches statistical aggregations given by CWMV to some extent.
  62. [62]
    How social influence can undermine the wisdom of crowd effect - PMC
    May 16, 2011 · In this article, we will demonstrate that social influence has three effects that can undermine the wisdom of crowds. Two of the effects are ...
  63. [63]
    Demographically diverse crowds are typically not much wiser than ...
    Feb 9, 2018 · Reported studies examine the effects of demographic diversity on the accuracy of “crowd” judgment—statistically aggregated individual judgments.
  64. [64]
    [PDF] Deliberation and the wisdom of crowds - LSE Research Online
    Jun 22, 2023 · Failures 1 and 2 can each lead to inefficiency, and do so without the other failure: Proposition 2 Given a simple opinion structure with non- ...
  65. [65]
    Understanding the Dotcom Bubble: Causes, Impact, and Lessons
    Many startups went public during this period, raising significant capital despite lacking viable business models, which ultimately led to the market collapse ...
  66. [66]
    Why 2016 election polls missed their mark | Pew Research Center
    Nov 9, 2016 · Supporters of presidential candidate Hillary Clinton watch televised coverage of the U.S. presidential election at Comet Tavern in the ...
  67. [67]
    How Pollsters Got the 2016 Election So Wrong, And What They ...
    Jul 27, 2022 · According to exit polls, voters who decided who to vote for in the final week of the campaign picked Trump by a 29-point margin. The late- ...
  68. [68]
    The 'Wisdom of the Crowd' Has a Pretty Bad Track Record at ...
    Jul 8, 2016 · People make mistakes. They have bad days. But we put a great deal of trust in the “wisdom of the crowd,” the idea that aggregating predictions ...
  69. [69]
    How the wisdom of the crowd works - and why it can fail
    Apr 7, 2016 · However, when crowds are faced with less intuitive questions, they can fail as dramatically as any individual. ClearerThinking founder Spencer ...
  70. [70]
    The Wisdom of the Small Crowd: Myside Bias and Group Discussion
    The my-side bias is a well-documented cognitive bias in the evaluation of arguments, in which reasoners in a discussion tend to overvalue arguments that confirm ...
  71. [71]
    Moderate confirmation bias enhances decision-making in groups of ...
    Our study shows that a moderate confirmation bias can actually improve decision-making when multiple reinforcement learning agents learn together in a social ...
  72. [72]
    (PDF) Cognitive Biases and Their Influence on Critical Thinking and ...
    Researchers have discovered 200 cognitive biases that result in inaccurate or irrational judgments and decisions, ranging from actor-observer to zero risk bias.
  73. [73]
    Counteracting estimation bias and social influence to improve the ...
    Apr 18, 2018 · Other aggregation measures have been proposed to improve the accuracy of the collective estimate, such as the geometric mean [26], the ...
  74. [74]
    The Impact of Cognitive Biases on Professionals' Decision-Making
    First, the literature reviewed shows that a dozen of cognitive biases has an impact on professionals' decisions in these four areas, overconfidence being the ...
  75. [75]
    Mitigating Biases in Collective Decision-Making - arXiv
    Mar 11, 2024 · Our analysis reveals recurring individual biases and their permeation into collective decisions. We show that demographic factors, headline ...
  76. [76]
    The wisdom of partisan crowds - PNAS
    May 13, 2019 · However, we find that the wisdom of crowds is robust to partisan bias. Social influence not only increases accuracy but also decreases ...
  77. [77]
    The impact of group polarization on the quality of online debate in ...
    In this paper, we focus on group polarization in the context of social media-enabled interaction, a dysfunctional group dynamic by which participants become ...
  78. [78]
    Collective wisdom in polarized groups - ACM Digital Library
    Sep 13, 2022 · In recent years, increasing polarization has led to concern about its effects on the accuracy of electorates, juries, courts, and congress.
  79. [79]
    Collective incentives improve group accuracy by reducing reliance ...
    Sep 16, 2025 · This study examined how collective incentives and independent news sources might improve collective accuracy in judgments based on news content.
  80. [80]
    Overconfidence in news judgments is associated with false ... - NIH
    Our results suggest that overconfidence may be a crucial factor for explaining how false and low-quality information spreads via social media. Keywords: ...
  81. [81]
    Searching for or reviewing evidence improves crowdworkers ...
    May 26, 2023 · Searching for or reviewing evidence improves crowdworkers' misinformation judgments and reduces partisan bias. Paul Resnick https://orcid.org ...
  82. [82]
    Prediction Market: Overview, Types, Examples - Investopedia
    A prediction market is where individuals trade contracts based on the outcomes of unknown future events such as election results or sports competitions.
  83. [83]
    Prediction Markets (Chapter 1) - Collective Wisdom
    Empirical evidence that such predictions correlate well with observed event ... Using prediction markets to track information flows: Evidence from Google.
  84. [84]
    Prediction market accuracy in the long run - ScienceDirect.com
    We compare market predictions to 964 polls over the five Presidential elections since 1988. The market is closer to the eventual outcome 74% of the time.
  85. [85]
    Prediction Markets for Economic Forecasting - ScienceDirect.com
    Prediction markets – markets used to forecast future events – have been used to accurately forecast the outcome of political contests, sporting events, and, ...
  86. [86]
    [PDF] Prediction Markets versus Alternative Methods. Empirical Tests of ...
    In addition, trading behavior in both markets was analyzed in terms of whether there is support for the assumption that experts possess superior knowledge.
  87. [87]
    On the efficacy of the wisdom of crowds to forecast economic ...
    Jan 19, 2023 · We find that the median has advantages over the mean as a method to combine the experts' estimates: the odds that the crowd beats all participants of a ...
  88. [88]
    On the Wisdom of Crowds (of Economists) | Department of Economics
    We study the properties of macroeconomic survey forecast response averages as the number of survey respondents grows.
  89. [89]
    Prediction markets as a vital part of collective intelligence
    Prediction markets are the real life implementation of collective intelligence. The fact that prediction markets outperform experts makes it a great tool for ...
  90. [90]
    Prediction markets - everything you need to know - a16z crypto
    Sep 25, 2025 · So, one famous example of this is Hewlett Packard: They were interested in forecasting, how many printers are gonna be sold in the next quarter, ...
  92. [92]
    [PDF] Corporate Prediction Markets: Evidence from Google, Ford, and Firm ...
    The three companies whose prediction markets we examine, Google, Ford, and Firm X, are in different industries, have distinct corporate cultures, and took ...
  93. [93]
    Microsoft Enters the Prediction Market to Engage Users | Insight
    Microsoft Research set out to join the prediction market by creating a website that could scale to allow millions of users to make predictions at once.
  94. [94]
    Internal corporate prediction markets: “From each according to his bet”
    Corporate prediction markets allow companies to use external market concepts to facilitate and support corporate decision making. Recently, Google, Microsoft, ...
  95. [95]
    2017 in Review: 8 Top InnoCentive Challenges - LinkedIn
    Feb 21, 2018 · Focuses included reducing the spread of invasive fish, cooling batteries better, improving how blood glucose levels are monitored, and ...
  96. [96]
    2.3 InnoCentive | Online Resources
    First, InnoCentive allows seeker organizations to reduce their R&D budget by tapping into the wisdom and innovative capacity of a network of more than ...
  97. [97]
    If You Have a Problem, Use Innocentive to Ask Everyone
    Jul 22, 2008 · InnoCentive, a company that links organizations (seekers) with problems (challenges) to people all over the world (solvers) who win cash prizes for resolving ...
  98. [98]
    Create Your Private Forecasting Platform - Metaculus
    Transform raw predictions into collective intelligence. Use visual summaries and aggregation tools to support decisions with clear, up-to-date probability ...
  99. [99]
    Why I Reject the Comparison of Metaculus to Prediction Markets
    Feb 24, 2023 · Metaculus is creating an entire forecasting ecosystem that enables collective intelligence to be harnessed in the service of accurately ...
  103. [103]
    Crowd prediction systems: Markets, polls, and elite forecasters
    Our main finding is that small, elite crowds tend to produce consistently more accurate aggregate forecasts than non-elite crowds, whereas prediction markets ...
  104. [104]
    [PDF] Forecasting skill of a crowd-prediction platform - arXiv
    May 23, 2025 · This study assesses the accuracy of crowd-predictions from the Metaculus platform on questions related to exchange rates, where the random-walk ...
  105. [105]
    Wisdom of the silicon crowd: LLM ensemble prediction capabilities ...
    Nov 8, 2024 · Our findings suggest that LLM predictions can rival the human crowd's forecasting accuracy through simple aggregation.
  106. [106]
    Engagement, user satisfaction, and the amplification of divisive ...
    We conduct an algorithmic audit of Twitter's algorithm, which optimizes for what users will engage with, and find that it amplifies divisive content much more.
  107. [107]
    Emergence of human-like polarization among large language model ...
    Jan 9, 2025 · In this paper, we simulate a networked system involving thousands of large language model agents, discovering their social interactions, guided through LLM ...
  108. [108]
    [PDF] The wisdom of partisan crowds - Semantic Scholar
    This work conducted two web-based experiments in which individuals answered factual questions known to elicit partisan bias before and after observing the ...
  109. [109]
    Algorithmic Amplification for Collective Intelligence
    Sep 21, 2023 · A project studying algorithmic amplification and distortion, and exploring ways to minimize harmful amplifying or distorting effects.
  110. [110]
    [PDF] AI and Social Media: A Political Economy Perspective*
    May 29, 2025 · We discuss the impact regulations can have on the polarizing effects of AI-powered online platforms. Keywords: political economy, artificial ...
  111. [111]
    An evolutionary model of bias, polarization, and other challenges to ...
    Sep 9, 2022 · The wisdom of crowds versus the madness of mobs: An evolutionary model of bias, polarization, and other challenges to collective intelligence.