
Analysis of competing hypotheses

Analysis of competing hypotheses (ACH) is a structured analytic technique designed to evaluate multiple alternative explanations for observed data by systematically assessing their consistency and inconsistency with available evidence, thereby reducing common cognitive biases such as confirmation bias. Developed by Richards J. Heuer Jr., a veteran CIA analyst, ACH emphasizes disproving hypotheses rather than seeking confirmatory evidence, inverting the intuitive tendency to prematurely affirm favored explanations. Introduced in Heuer's seminal 1999 monograph Psychology of Intelligence Analysis, the method employs a matrix format where hypotheses are arrayed against pieces of evidence, with cells marked to denote support, refutation, or neutrality, followed by iterative refinement to identify the most plausible interpretations. The technique's core steps include listing all relevant hypotheses without initial elimination, compiling pertinent evidence from multiple sources, filling the evaluation matrix starting with inconsistencies to avoid premature closure, and analyzing results to adjust confidence levels or generate new hypotheses as needed. ACH has been adopted across intelligence agencies and analytic training programs for its empirical basis in countering perceptual and memory distortions documented in cognitive psychology, promoting more rigorous, evidence-driven judgments over intuitive leaps. While praised for enhancing diagnostic accuracy in high-stakes environments with ambiguous information, it requires significant time and discipline, limiting its use in fast-paced scenarios, and assumes evidence quality that may not always hold in real-world contexts.

Historical Development

Origins in CIA Intelligence Practices

The Analysis of Competing Hypotheses (ACH) emerged within the U.S. Central Intelligence Agency (CIA) during the Cold War, a period marked by persistent challenges in evaluating ambiguous intelligence amid incomplete data and high-stakes geopolitical threats. In this period, CIA assessments frequently suffered from cognitive pitfalls such as mirror-imaging—projecting U.S. assumptions onto foreign actors—and confirmation bias, which reinforced preconceived notions rather than rigorously testing alternatives, contributing to analytic errors in forecasting Soviet intentions and capabilities. These failures underscored the empirical need for structured methods to mitigate mental shortcuts, as traditional intuitive analysis often faltered under conditions of uncertainty inherent to intelligence work. Richards J. Heuer Jr., a CIA analyst, developed ACH between 1978 and 1986 as a direct response to these documented shortcomings, integrating insights from cognitive psychology on biases and mental traps. Heuer's approach was grounded in empirical observations of intelligence pitfalls, emphasizing the reversal of conventional analytic sequences—starting with evidence evaluation against multiple hypotheses rather than hypothesis-driven filtering—to counteract biases empirically linked to flawed judgments. This period's internal CIA writings by Heuer laid the groundwork, drawing on studies of perceptual distortions and probabilistic reasoning to address real-world analytic demands without relying on abstract theory. Initially, ACH saw informal application in CIA training programs to equip analysts with tools for handling sparse, contradictory information in threat assessments, such as discerning deception or unintended signals in adversary behavior. By focusing on falsification over confirmation, the method aimed to foster diagnostic rigor in environments where data gaps and ambiguity prevailed, marking an early shift toward evidence-centered practices within the agency. This practical orientation reflected the CIA's post-1970s push for bias-resistant techniques, informed by debriefs of operational failures rather than external academic mandates.

Key Contributions by Richards Heuer

Richards J. Heuer Jr. (July 15, 1927–August 21, 2018) was a longtime CIA analyst whose career spanned over four decades, including roles in the directorates of operations and intelligence, followed by contracting work until the late 1990s. During this period, Heuer identified recurrent cognitive pitfalls in intelligence assessments, such as confirmation bias and premature hypothesis closure, through examinations of declassified failures including the CIA's underestimation of Egyptian and Syrian attack intentions prior to the October 6, 1973, Yom Kippur War. This misestimation stemmed from analysts' overreliance on consistent evidence while discounting inconsistencies, a pattern Heuer documented in internal analyses to advocate for structured countermeasures. Heuer's primary innovation was the formalization of the Analysis of Competing Hypotheses (ACH) in the late 1970s and 1980s as a matrix-based procedure to mitigate these biases by systematically refuting rather than corroborating explanations. In ACH, analysts begin by enumerating all plausible hypotheses exhaustively—drawn from Heuer's review of CIA case studies showing how narrow initial framing led to overlooked alternatives—then map evidence against them to prioritize disconfirming data. This approach inverted traditional confirmatory reasoning, aligning with falsification principles to eliminate weaker hypotheses through inconsistency checks, as Heuer outlined in agency memos and later elaborated in his 1999 monograph Psychology of Intelligence Analysis. By requiring evidence evaluation independent of favored hypotheses, ACH aimed to reduce anchoring effects observed in post-mortems like the 1973 surprise, where cultural mirror-imaging dismissed Arab capabilities despite contrary indicators. Heuer's internal CIA writings, including structured analytic primers, emphasized ACH's quasi-probabilistic updating via evidence matrices, where hypotheses are iteratively pruned based on refutatory weight rather than probabilistic weighting alone. This method, tested in simulated exercises during his tenure, countered the "proving" mindset Heuer critiqued as amplifying errors in high-uncertainty environments, such as the 1973 war's overlooked mobilizations. His contributions earned recognition, including a 2013 CIA Trailblazer Award for advancing analytic tradecraft against ingrained psychological traps.

Formalization and Publication

The Analysis of Competing Hypotheses underwent refinements in the 1980s and 1990s through training courses and workshops organized under the CIA's Center for the Study of Intelligence, where Richards J. Heuer Jr. integrated psychological insights to address analytic biases. These sessions, part of broader efforts to enhance analytic tradecraft during the late Cold War era, shifted ACH from informal application to a repeatable methodology emphasizing hypothesis testing against evidence. Heuer formalized the method in Chapter 8 of his Psychology of Intelligence Analysis, published on October 1, 1999, by the CIA's Center for the Study of Intelligence. The document, drawing on declassified research from Heuer's 45-year CIA career, detailed ACH as a matrix-based approach to evaluate multiple explanations systematically, marking its transition to a documented technique available for institutional use. Post-Cold War declassification enabled wider dissemination of the monograph, with public release via the CIA's website and academic channels by the early 2000s, extending its reach to non-government analysts. This publication preceded and informed intelligence reforms, including the Intelligence Reform and Terrorism Prevention Act of 2004 (Public Law 108-458), which indirectly promoted structured analytic methods like ACH by directing enhancements to analytic standards and bias reduction across U.S. intelligence agencies.

Methodological Framework

Core Steps of the ACH Process

The Analysis of Competing Hypotheses (ACH) process consists of a structured sequence of eight steps designed to systematically evaluate multiple explanatory hypotheses against available evidence, emphasizing falsification through inconsistency mapping rather than confirmation. Developed by Richards J. Heuer Jr., this method prioritizes generating a broad initial set of hypotheses to avoid premature fixation and uses a matrix-based evaluation to assess causal compatibility between evidence and each hypothesis. The method requires analysts to work iteratively, refining assessments to highlight evidence that refutes hypotheses, thereby enhancing the reliability of conclusions drawn from complex data sets.
  1. Identify possible hypotheses: Assemble a diverse group of analysts to brainstorm and compile a comprehensive list of all plausible hypotheses that could explain the observed evidence, explicitly avoiding dismissal of seemingly unlikely hypotheses at this stage to ensure completeness. This step counters confirmation bias by forcing consideration of alternatives from first principles of causal explanation.
  2. List significant evidence and arguments: Gather all relevant evidence, facts, and arguments—both supporting and refuting each hypothesis—without initial evaluation, including assumptions, logical deductions, and potential indicators; sources may include reports, historical data, or expert inputs dated to specific collection periods for verifiability.
  3. Create a matrix: Construct a two-dimensional matrix with hypotheses arrayed across the columns (top row) and evidence items listed down the rows (left column); for each cell, rate the evidence's diagnostic value as consistent (C), inconsistent (I), or neutral/not applicable (N), focusing on causal linkages where inconsistency signals potential refutation. This tabular format, often implemented in spreadsheets for quantitative tallying, enables visual identification of refutation patterns.
  4. Refine the matrix: Reassess and adjust ratings by seeking overlooked evidence or alternative interpretations; eliminate redundant or implausible hypotheses only after verifying no consistent evidence remains, and add new hypotheses if emerging data suggest gaps, ensuring the matrix reflects updated causal evaluations.
  5. Focus on refuting hypotheses: Prioritize evidence marked as inconsistent (I) to identify the most vulnerable hypotheses, tallying the number of refutations per hypothesis rather than confirmations; a hypothesis lacking any inconsistencies may warrant tentative acceptance, but only after exhaustive search for disconfirming data, inverting traditional probabilistic weighting to emphasize causal disproof.
  6. Analyze sensitivity to critical evidence: Test the robustness of rankings by varying assumptions about key evidence items (e.g., removing or reclassifying high-impact entries dated to specific events), assessing how alternative causal interpretations or uncertainties alter outcomes; this step reveals dependencies on sparse or contested data.
  7. Report conclusions: Derive final judgments on relative likelihoods based on refutation tallies, explicitly noting surviving hypotheses and indicators for monitoring; avoid absolute probabilities, instead qualifying assessments with evidential gaps or risks, such as unexplained absences of expected causal precursors.
  8. Identify milestones for future observation: Post-analysis, define testable predictions or diagnostic paths for each remaining hypothesis, including timelines tied to verifiable events, to enable ongoing causal validation or falsification in dynamic scenarios.
In practice, the process is iterative, often requiring multiple matrix revisions, and is particularly suited to scenarios with voluminous, ambiguous evidence where causal chains must be dissected for inconsistencies, as illustrated in the sketch below.
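
The matrix logic described in these steps can be illustrated with a minimal sketch in Python; the hypotheses, evidence labels, and ratings below are hypothetical and serve only to show how inconsistency tallies, rather than confirmations, drive the ranking.

  # Minimal illustrative ACH matrix (hypothetical data).
  # Ratings use the standard C (consistent), I (inconsistent), N (neutral).
  from collections import Counter

  hypotheses = ["H1: deliberate attack", "H2: accident", "H3: deception"]

  # Each evidence item maps to one rating per hypothesis, in listed order.
  matrix = {
      "E1: intercepted mobilization order": ["C", "I", "C"],
      "E2: routine training schedule published": ["I", "C", "C"],
      "E3: denial in official statement": ["N", "N", "C"],
  }

  def rank_by_inconsistency(matrix, hypotheses):
      """Count 'I' ratings per hypothesis; ACH ranks by fewest refutations."""
      tallies = Counter()
      for ratings in matrix.values():
          for hyp, rating in zip(hypotheses, ratings):
              if rating == "I":
                  tallies[hyp] += 1
      return sorted(hypotheses, key=lambda h: tallies[h]), tallies

  ranking, tallies = rank_by_inconsistency(matrix, hypotheses)
  for hyp in ranking:
      print(f"{tallies[hyp]} inconsistencies -> {hyp}")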

Essential Tools and Matrices

The core tool in Analysis of Competing Hypotheses (ACH) is the evidence-hypothesis matrix, which systematically maps pieces of evidence against multiple hypotheses to enforce disciplined evaluation. In this matrix, rows list individual items of evidence, while columns represent the competing hypotheses; each cell is marked to indicate whether the evidence is consistent (typically denoted by + or "Supports"), inconsistent (- or "Refutes"), or neutral/not applicable (0, ?, or "Irrelevant") with respect to the hypothesis. This structure, derived from Richards Heuer's methodology, prevents premature commitment to any single explanation by requiring analysts to consider all evidence across hypotheses before drawing conclusions. A secondary diagnostic matrix or refinement process builds on the initial evidence-hypothesis matrix to assess argument strength, prioritizing hypotheses based on the absence or minimization of inconsistencies rather than assigning subjective weights or probabilities at the outset. Hypotheses lacking any inconsistent evidence are advanced first, followed by those with the fewest refutations, thereby emphasizing falsification over mere accumulation of supporting evidence. Probabilities or quantitative scoring are deferred until after this qualitative falsification phase to mitigate over-reliance on confirmatory data. To counter pseudo-diagnosticity—the cognitive error of seeking evidence that confirms a favored hypothesis while ignoring disconfirming alternatives—ACH guidelines mandate listing all relevant evidence upfront, independent of hypotheses, and initially focusing on potential inconsistencies for each cell entry. This approach, rooted in Bayesian-inspired reasoning but operationalized qualitatively, ensures that evidence diagnostic of one hypothesis (e.g., inconsistent with rivals but consistent with it) receives explicit scrutiny, reducing bias in high-stakes intelligence assessments.
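
The refinement step described above can be made concrete with a small illustrative helper (hypothetical data): an evidence item rated identically against every hypothesis discriminates among none of them and can be set aside as non-diagnostic before any ranking or weighting.

  # Flag non-diagnostic evidence: items whose rating is the same for every
  # hypothesis carry no power to discriminate between the alternatives.
  def non_diagnostic_items(matrix):
      return [item for item, ratings in matrix.items() if len(set(ratings)) == 1]

  matrix = {
      "E1: capability exists":      ["C", "C", "C"],  # fits everything
      "E2: unusual troop movement": ["C", "I", "N"],
      "E3: public denial":          ["N", "C", "I"],
  }

  print(non_diagnostic_items(matrix))  # ['E1: capability exists']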

Underlying Principles for Bias Mitigation

The foundational principles of bias mitigation in the Analysis of Competing Hypotheses (ACH) draw from empirical findings in cognitive psychology, emphasizing logical structures that counteract common mental shortcuts identified in human judgment under uncertainty. Central to ACH is the inversion of typical verification tendencies, where analysts are instructed to prioritize evidence that could falsify hypotheses rather than seeking confirmatory support. This approach directly addresses confirmation bias, a pervasive error where individuals disproportionately gather or interpret information aligning with preconceptions, as demonstrated in experimental tasks showing subjects' preference for verifying instances over potentially disconfirming ones. Richards Heuer, in outlining ACH, rooted this principle in observed analytical pitfalls, arguing that routine focus on consistency fosters premature closure on flawed explanations. A second core axiom mandates exhaustive generation of alternative hypotheses prior to evidence evaluation, serving as a bulwark against anchoring bias, wherein initial exposures to a particular framing unduly influence subsequent assessments. Experimental research has quantified this effect through adjustments from arbitrary anchors, with participants' estimates shifting predictably toward irrelevant starting points despite instructions to disregard them. In intelligence contexts, such anchoring contributed to historical misjudgments, as when U.S. analysts during the Cold War fixated on underestimating Soviet ICBM deployments by anchoring to optimistic compliance assumptions, discounting anomalous indicators that alternative hypotheses might have highlighted. Heuer's framework counters this by requiring deliberate brainstorming of multiple, mutually exclusive explanations, ensuring no single mindset dominates early. Underpinning these is an orientation toward causal mechanisms that robustly explain variances in the data, eschewing reliance on superficial correlations that may mislead without deeper linkages. ACH's evaluation matrix compels assessment of how each hypothesis accounts for specific evidentiary patterns through plausible causal pathways, aligning with logical demands for explanatory adequacy over mere predictive fit. This echoes critiques in scientific reasoning where correlational associations, absent mechanistic insight, have sustained erroneous inferences, as in epidemiological overinterpretations of spurious links without intervening processes. By systematizing such scrutiny, ACH fosters realism in attributing causes, reducing the sway of intuitive but unexamined narratives prevalent in biased institutional analyses.

Empirical Assessment

Key Studies on ACH Effectiveness

A 2019 experimental study involving 50 intelligence professionals tested the Analysis of Competing Hypotheses (ACH) technique against a control condition in evaluating ambiguous evidence related to potential terrorist threats. Participants using ACH demonstrated reduced confirmation bias, as measured by lower rates of selectively seeking hypothesis-confirming information (e.g., 25% fewer instances compared to controls), but the method yielded no significant improvements in overall analytical accuracy or hypothesis selection rates. In a 2024 cognitive psychology experiment published in Cognitive Research: Principles and Implications, researchers assessed ACH's impact on confirmation bias during hypothesis-testing tasks with structured versus unstructured evidence presentation. The task-structured variant significantly mitigated bias by encouraging systematic disconfirmation of alternatives, with participants showing a 15-20% increase in considering inconsistent evidence; however, effectiveness hinged on strict user adherence to the matrix-filling process, as deviations led to negligible benefits. A 2025 study on ACH application in simulated legal proceedings involved mock jurors evaluating case evidence under confirmation-prone conditions. ACH implementation reduced erroneous conviction rates by 18% in bias-vulnerable scenarios, attributed to formalized disproval steps that prompted reevaluation of inculpatory evidence; control groups without ACH exhibited higher anchoring to initial guilt hypotheses, though the study noted limitations in generalizing to high-stakes real trials due to simplified mock setups.

Evidence from Intelligence and Experimental Contexts

Internal evaluations within the CIA prior to the 1999 publication of Richards Heuer's Psychology of Intelligence Analysis described the Analysis of Competing Hypotheses (ACH) as contributing to reduced cognitive bias in analytic processes, based on practitioner feedback from its early adoption in the 1980s and 1990s; however, these assessments were anecdotal and lacked experimental controls or quantitative metrics to isolate ACH's effects from other factors. Heuer's framework emphasized ACH's role in countering confirmation bias through systematic evidence-hypothesis mapping, but contemporaneous CIA reports provided no rigorous testing, relying instead on qualitative endorsements from analysts handling complex geopolitical assessments. Post-9/11 field applications of ACH in U.S. intelligence community operations yielded mixed outcomes, with Office of the Director of National Intelligence (ODNI) guidance promoting structured techniques like ACH for enhancing analytic rigor amid high-stakes threats, yet noting increased time requirements that could delay outputs without guaranteed accuracy gains. For instance, a 2009 CIA primer highlighted ACH's utility in sifting large datasets during complex analyses, reporting anecdotal improvements in falsification, but subsequent reviews post-2004 emphasized the need for broader empirical validation, as real-world constraints like incomplete information often undermined consistent benefits. In controlled laboratory experiments, a 2024 critical review published in Intelligence and National Security synthesized seven articles encompassing six studies testing ACH against baseline analytic methods, finding null overall effects on judgment accuracy and bias reduction across diverse scenarios involving probabilistic reasoning and evidence evaluation. These experiments, primarily conducted with non-expert participants simulating decision tasks, revealed ACH performed marginally better only in high-ambiguity conditions with multiple viable hypotheses, but it increased judgment inconsistency and failed to mitigate confirmation bias reliably in low-ambiguity or data-rich setups. One such study with intelligence analysts mirrored these lab findings, showing mixed results for bias reduction alongside heightened error rates in hypothesis ranking.

Quantitative Outcomes and Meta-Analyses

A 2019 experiment with 50 intelligence analysts compared ACH to unaided analysis in a simulated intelligence scenario involving sequential evidence presentation. The ACH group exhibited no significant reduction in confirmation bias, defined as disproportionate weighting of confirming over disconfirming evidence, with both groups showing similar rates of favored hypothesis persistence (approximately 60-70% adherence to initial leanings post-evidence). However, ACH increased judgment inconsistency across repeated assessments (F(1,48)=4.2, p<0.05) and error rates in hypothesis ranking (higher misclassification by 10-15% relative to ground truth). In a 2024 study, 161 undergraduate students evaluated evidence under ACH-matrix (hypotheses as columns), alternative row-based, or text-only conditions. The ACH structure yielded null effects on confirmation bias mitigation (χ²(2)=1.2, p=0.55, φ≈0.10), with 45-50% of participants across conditions favoring confirmatory evidence. An alternative hypothesis-row format showed modest bias reduction (χ²(2)=9.43, p=0.009, φ=0.24, small effect size), but ACH did not differ significantly from unstructured baselines in sensitivity to evidence credibility (77% vs. 75%, χ²(2)=0.15, p=0.93). Prediction accuracy, measured as alignment with experimentally determined truths, showed no group differences (all ~65% correct). Among 62 professional military analysts in the same 2024 study, baseline confirmation bias was low (19.4% used confirmatory strategies), with 66.1% balanced approaches; ACH-style structuring did not yield incremental improvements in bias indices or accuracy, as most participants (83.3%) were already sensitive to evidence directionality. Effect sizes for any ACH-related bias mitigation across these studies remain small (Cohen's d equivalents <0.3 where reported), with null findings predominant for alternative hypothesis generation and overall predictive edge versus controls. No dedicated meta-analyses of ACH outcomes exist as of 2025, limiting pooled effect estimates; available syntheses from peer-reviewed trials indicate 0-10% variance reductions in bias metrics under controlled conditions, but without statistical superiority in accuracy. Key evidential gaps include overreliance on student proxies (e.g., >70% of participants in reviewed experiments) rather than experts, scarcity of longitudinal designs tracking real-world application over time, and under-examination of complex, high-stakes scenarios beyond lab simulations.

Theoretical Advantages

Alignment with First-Principles Reasoning

The Analysis of Competing Hypotheses (ACH) employs a deductive structure rooted in basic logical principles, wherein a set of mutually exclusive and collectively exhaustive hypotheses is generated and subjected to rigorous testing against evidentiary items. Evidence is evaluated for consistency or inconsistency with each hypothesis via a structured matrix, with priority given to disconfirmatory findings that eliminate untenable explanations. This process demands that hypotheses articulate causal pathways—specific mechanisms linking proposed causes to observed effects—rather than resting on probabilistic associations or statistical fit alone, ensuring explanations are mechanistically grounded rather than inferential leaps from correlations. ACH's emphasis on falsification over verification parallels core tenets of scientific methodology, as outlined by Karl Popper, who argued that hypotheses advance knowledge only when exposed to potential refutation through critical tests. In practice, ACH identifies diagnostic evidence capable of disproving multiple hypotheses simultaneously, retaining only those compatible with the full body of data; this mirrors empirical practices in hypothesis-driven sciences, where theories must withstand targeted scrutiny to persist. By focusing on refutation, ACH avoids the pitfalls of inductive overgeneralization, deriving conclusions from the logical incompatibility of evidence with alternatives. Central to ACH's logical integrity is its deliberate suspension of initial probabilistic judgments or priors, compelling analysts to commence evaluation from a baseline akin to a blank slate. Hypotheses are brainstormed comprehensively—drawing from theory, logic, and diverse perspectives—before any weighting, which enforces an evidence-driven progression untainted by anchoring biases. This orientation aligns with first-principles deduction, where validity emerges solely from evidentiary confrontation, as evidenced in ACH's application beyond intelligence analysis to structured scientific inquiry requiring exhaustive consideration of alternatives.

Potential for Reducing Cognitive Biases

The Analysis of Competing Hypotheses (ACH) employs a structured sequence of steps designed to counteract cognitive biases that distort analytical judgment, such as confirmation bias, by prioritizing the falsification of hypotheses over their corroboration. In the initial phases, analysts must enumerate all plausible hypotheses without premature evaluation, followed by an exhaustive compilation of pertinent evidence drawn from available data sources. This decoupling of hypothesis generation from evidence evaluation prevents selective filtering influenced by preconceived notions, fostering a more objective foundation for subsequent reasoning. To specifically counter the availability heuristic, which predisposes individuals to favor hypotheses supported by vivid or recently encountered information at the expense of comprehensive review, ACH requires the creation of a full evidence inventory prior to any scoring or ranking. Evidence is then systematically arrayed in a matrix against each hypothesis, with explicit attention to diagnostic value—particularly instances that contradict rather than support a given hypothesis. This process ensures that non-salient or contradictory data receives equal scrutiny, reducing the heuristic's sway over probability assignments. Hindsight bias, characterized by the post-hoc inflation of an event's foreseeability, is addressed through ACH's documentation protocols, including timestamped matrices that record initial evidence-hypothesis mappings and enable longitudinal tracking of revisions as new information emerges. By preserving the analytical trail from inception to conclusion, these records allow analysts to revisit the state of knowledge and uncertainty prevailing at each stage, thereby tempering retrospective distortions in self-evaluation and learning. These mechanisms align with debiasing strategies in forecasting research, where Philip Tetlock's studies of superforecasters reveal that systematic hypothesis testing and evidence-based updating enhance calibration, with trained participants achieving scores approximately 10-20% superior to untrained counterparts in probabilistic accuracy. ACH's focus on disconfirmation thus complements such training, promoting probabilistic discipline without guaranteeing bias elimination across all contexts.
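
The probabilistic accuracy referenced in this forecasting research is conventionally measured with the Brier score, the mean squared difference between forecast probabilities and actual outcomes; the sketch below uses illustrative numbers rather than data from the cited studies.

  # Brier score for binary forecasts: lower is better, 0 is perfect.
  def brier(forecasts, outcomes):
      return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

  calibrated    = [0.8, 0.2, 0.7]  # probabilities from a disciplined updater
  overconfident = [1.0, 0.0, 1.0]  # the same leanings stated as certainties
  outcomes      = [1, 0, 0]        # 1 if the event occurred, else 0

  print(brier(calibrated, outcomes))     # approximately 0.19
  print(brier(overconfident, outcomes))  # approximately 0.33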

Enhancements to Causal Analysis

The Analysis of Competing Hypotheses (ACH) strengthens causal analysis by mandating the explicit consideration of multiple alternative explanations, thereby enforcing a multi-causal framework that resists reduction to singular factors. In contrast to conventional explanatory narratives that often privilege one dominant cause, ACH requires analysts to generate a set of mutually exclusive and collectively exhaustive hypotheses, each representing a distinct causal configuration, and to evaluate them against the full body of evidence. This structured evaluation, typically via a matrix format, assesses whether evidence is consistent, inconsistent, or neutral with respect to each hypothesis, highlighting discrepancies that single-factor models might ignore. As outlined in foundational doctrine, this process originated in the 1970s at the CIA and systematically prioritizes falsification over confirmation, revealing causal complexities inherent in ambiguous data sets. A core enhancement lies in ACH's mechanism for integrating inconsistent evidence, which compels the identification of hidden variables or confounders that mediate observed outcomes. When evidence refutes elements of multiple hypotheses without fully eliminating any, the method prompts revision to incorporate overlooked causal intermediaries, such as environmental or intervening factors not initially hypothesized. U.S. military intelligence guidelines emphasize ACH's role in causal analysis for multifaceted scenarios, like threat assessments involving interdependent drivers (e.g., ideological, logistical, and opportunistic elements in terrorist plots), where failure to aggregate inconsistencies leads to incomplete causal chains. Unlike qualitative storytelling, which can accommodate contradictions through rationalizations, ACH quantifies refutations by tallying inconsistencies per hypothesis, enabling probabilistic downgrading of overly simplistic models and elevation of those accommodating empirical friction—thus aligning inferences with observable causal heterogeneity rather than idealized coherence. In operational contexts, ACH excels at surfacing low-probability, high-impact causal pathways by sustaining scrutiny of peripheral hypotheses, mirroring red-teaming protocols that stress-test dominant assumptions against outlier scenarios. This approach mitigates the underestimation of rare but consequential causes, as in diagnostic applications where initial evidence ambiguously supports several etiologies, prompting deeper probing for combinatorial effects. For example, doctrinal adaptations note its utility in dissecting events with layered causation, ensuring that no hypothesis is prematurely discarded without evidentiary warrant, which fosters more resilient causal models capable of anticipating nonlinear outcomes. The method's emphasis on exhaustive hypothesis testing thus elevates causal rigor, distinguishing verifiable multi-factor dynamics from illusory single-thread explanations, though its rigor hinges on comprehensive inputs to avoid artifactual gaps in causal coverage.

Criticisms and Empirical Limitations

Identified Methodological Flaws

ACH's foundational logic prioritizes the identification of disconfirming evidence to falsify hypotheses, assuming that adequate diagnostic evidence exists to distinguish true from false alternatives. This falsificationist orientation, while intended to counter confirmation bias, risks systematic retention of implausible hypotheses in data-sparse contexts, where insufficient evidence prevents definitive rejection, conflating absence of disproof with viability. The technique's evidence-hypothesis matrix enforces categorical judgments—typically classifying evidence as consistent, inconsistent, or neutral—which rigidifies analysis by excluding probabilistic nuances and dependencies between evidentiary items. Such categorization neglects continuous likelihood assessments, potentially distorting comparative hypothesis evaluation by aggregating disparate evidence without accounting for varying degrees of reliability or conditional probabilities. ACH omits explicit integration of prior probabilities and fails to prescribe upfront weighting of evidence by reliability or diagnosticity, diverging from Bayesian norms that quantify belief updates via likelihood ratios and posterior odds. This structural gap renders the method vulnerable to base-rate neglect and unweighted evidential handling, undermining inferential rigor in uncertain environments. The approach's theoretical underpinnings have accordingly been characterized as fundamentally misaligned with probabilistic standards, lacking normative rigor for bias mitigation.
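
The Bayesian updating that such critiques contrast with ACH's categorical ratings can be sketched as follows; the prior odds and likelihood ratios are hypothetical and illustrate only the mechanics that ACH leaves implicit.

  # Posterior odds = prior odds x product of likelihood ratios, where each
  # likelihood ratio is P(evidence | H1) / P(evidence | H2) for one item.
  def posterior_odds(prior_odds, likelihood_ratios):
      odds = prior_odds
      for lr in likelihood_ratios:
          odds *= lr
      return odds

  prior = 0.25           # H1 judged four times less likely than H2 a priori
  lrs = [3.0, 0.5, 4.0]  # one ratio per evidence item

  odds = posterior_odds(prior, lrs)
  print(f"posterior odds H1:H2 = {odds:.2f}")         # 1.50
  print(f"posterior P(H1) = {odds / (1 + odds):.2f}")  # 0.60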

Mixed Results from Recent Evaluations

A critical review of the Analysis of Competing Hypotheses (ACH) technique, published in Intelligence and National Security, synthesized seven peer-reviewed articles encompassing six experiments conducted primarily with intelligence analysts and students. The review concluded that ACH, when applied as a complete process, demonstrates little to no net benefit in improving judgmental accuracy or reducing errors compared to unstructured analysis. In scenarios involving time constraints, ACH appeared to exacerbate analytical shortcomings, as participants allocated insufficient time to hypothesis generation and evidence evaluation, leading to incomplete matrices and heightened vulnerability to initial biases. These findings contrast with a 2019 experimental study involving 50 intelligence professionals, which tested ACH's impact on confirmation bias during evidence evaluation. While ACH reduced the tendency to overweight confirming evidence—evidenced by lower inconsistency in evidence-hypothesis assessments—it yielded no measurable improvement in overall accuracy of hypothesis rankings. This pattern suggests ACH may foster a subjective sense of rigor akin to a placebo effect, enhancing perceived debiasing without translating to superior outcomes. Similar null results on accuracy have emerged in subsequent tests, such as a 2024 examination of ACH's task-structuring effects, which found it mitigated some confirmation bias but failed to enhance consideration of alternatives under structured prompts. Broader empirical scrutiny has challenged claims linking ACH to enhanced forecasting performance, as seen in large-scale probabilistic prediction environments like those in Philip Tetlock's forecasting tournaments (2011–2015). ACH is absent from analyses of top-performing "superforecasters," whose edge derived from practices like base-rate utilization, probabilistic updating, and team deliberation rather than systematic matrix construction. No causal evidence connects ACH adoption to superior tournament outcomes, underscoring its limited generalizability beyond controlled, low-stakes testing. These inconsistencies highlight ACH's variable efficacy, dependent on context, user expertise, and procedural fidelity, with post-2010 data privileging skepticism over unqualified endorsement.

Practical Constraints in Real-World Use

The Analysis of Competing Hypotheses (ACH) process demands significant time investment, particularly when evaluating multiple hypotheses against extensive evidence sets. Constructing the initial matrix, including listing alternatives and mapping evidence, can require from a few hours to several days, depending on data volume and complexity. For scenarios involving five or more hypotheses, the full evaluation phase—disproving rather than confirming—often extends into multiple hours, making ACH unsuitable for urgent, fast-paced operational contexts like responding to imminent threats where decisions must occur within minutes or hours. Scalability poses further operational hurdles, as ACH's manual, tabular workflow falters in high-volume environments without pre-processed inputs. The method relies on analysts systematically assessing each piece of evidence against every hypothesis, a labor-intensive step that becomes unwieldy with large, unstructured datasets common in modern intelligence feeds. Empirical reviews note limited applicability to complex cases with correlated or voluminous evidence, where the method's rigid structure fails to efficiently handle interdependencies, potentially overwhelming individual or small-team efforts. In group applications, ACH requires rigorous facilitation to mitigate group dynamics that could entrench minority or outlier views prematurely. Without an experienced moderator to enforce evidence-based challenges and prevent dominance by initial impressions, teams may inconsistently evaluate hypotheses, leading to reduced judgment coherence or reinforcement of unexamined preferences. Field-oriented critiques highlight that novice groups often struggle with this, as the technique's emphasis on disproval demands structured deliberation to avoid devolving into unproductive contention or overlooked alternatives.

Applications and Extensions

Primary Use in Intelligence Analysis

Analysis of Competing Hypotheses (ACH) was developed by CIA analyst Richards J. Heuer Jr. in the 1970s specifically for intelligence applications, aiming to systematically test evidence against multiple explanations to discern Soviet intentions amid ambiguous indicators, such as tactical warnings versus strategic deception. Declassified examples from Heuer's work include evaluations of Soviet actions during the 1969 Sino-Soviet border clashes and the 1979 invasion of Afghanistan, where ACH matrices revealed inconsistencies in favored hypotheses of imminent aggression, prompting reconsideration of alternative intents like internal consolidation or feints. These Cold War-era simulations demonstrated ACH's utility in reducing overreliance on mirror-imaging foreign leaders' motives, providing an auditable trail for contested assessments. Following the 1999 publication of Heuer's Psychology of Intelligence Analysis, ACH became a core element in CIA training programs by the early 2000s, incorporated into courses like Tradecraft 2000 to train analysts in bias mitigation through hypothesis falsification. The technique gained further prominence after the 2003 invasion of Iraq exposed flaws in WMD intelligence assessments, where failure to rigorously pursue disconfirming evidence contributed to erroneous conclusions about stockpiles and programs. In response, ACH was applied in post-invasion reassessments to explore deception hypotheses and alternative explanations for ambiguous reporting, aligning with recommendations from the 2005 Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction for structured methods to handle uncertainty. ACH's integration into broader Intelligence Community (IC) standards accelerated with Intelligence Community Directive (ICD) 203 on Analytic Standards in 2007, which mandated objectivity and alternative consideration in analytic products, implicitly endorsing techniques like ACH to meet these criteria. The CIA's 2009 Tradecraft Primer formalized ACH as a diagnostic tool for high-stakes evaluations, such as deception detection in WMD contexts, ensuring its routine use in DNI-directed training and analytic workflows thereafter.

In legal contexts, the Analysis of Competing Hypotheses (ACH) has been adapted to counteract confirmation bias during evidence evaluation, particularly in prosecutorial and judicial decision-making. A September 2025 study involving 222 law students tested ACH in simulated legal proceedings, where participants generated competing explanations for case evidence and systematically evaluated it for inconsistency with favored hypotheses; results indicated improved consideration of disconfirmatory evidence compared to standard linear reasoning, though effects varied by participant expertise. Similarly, a 2020 experimental study in criminal law scenarios found that ACH prompted mock decision-makers to actively falsify initial guilt hypotheses before full evidence review, reducing premature conclusions and enhancing diagnostic accuracy over intuitive methods. These adaptations emphasize ACH's matrix-based approach to prioritize causal inconsistencies, aiding prosecutors in assessing alternative narratives for observed facts like witness statements or forensic traces.

In business applications, ACH supports competitive intelligence by structuring causal predictions of rival actions amid uncertain signals. An October 2022 framework from Crayon, a competitive intelligence platform, illustrated ACH for forecasting competitor maneuvers, such as interpreting executive hires or patent filings as hypotheses for product launches or pricing shifts; analysts list evidence (e.g., market share data, job postings) in a matrix to refute implausible scenarios, enabling prioritized countermeasures like pricing adjustments. This method counters overconfidence in single interpretations, with practitioners reporting clearer identification of high-probability threats, as evidenced by its integration into sales strategy reviews where multiple hypotheses about partner behaviors are tested against contract and performance metrics.

Beyond legal and business fields, ACH adaptations appear in cybersecurity for threat attribution and incident response. In a May 2017 SANS Internet Storm Center analysis, ACH was applied to dissect potential attack vectors by hypothesizing actor identities (e.g., nation-state vs. insider) and scoring indicators like IP logs or malware signatures for evidential fit; this iterative refutation process helped isolate causal chains, distinguishing orchestrated campaigns from opportunistic hacks. SANS cyber threat intelligence training further incorporates ACH to cluster intrusions by hypothesis testing, reducing false attributions in high-volume alert environments. Such uses highlight ACH's utility in domains requiring rapid causal disentanglement under incomplete data.

Case Studies of Implementation

In a controlled experiment simulating intelligence analysis tasks, 50 experienced practitioners applied ACH to evaluate competing explanations for ambiguous scenarios, such as potential foreign agent activities. The method led to a significant reduction in confirmation bias, evidenced by fewer instances of evidence being interpreted inconsistently across hypotheses, though overall forecasting accuracy remained comparable to intuitive approaches. This outcome highlighted ACH's utility in promoting more balanced assessment but underscored limitations in enhancing predictive precision under uncertainty. A study published in September 2025 tested ACH in simulated legal proceedings involving mock jurors assessing guilt based on forensic and testimonial evidence. Participants using ACH demonstrated decreased susceptibility to confirmation bias, with structured hypothesis falsification yielding verdicts more aligned with objective evidence weightings compared to control groups relying on unstructured deliberation. Outcomes indicated improved consistency in rejecting initially favored hypotheses when contradicted by evidence, though the method required additional time, averaging 25% longer deliberation periods. In business competitive intelligence, ACH was employed by a firm to evaluate hypotheses about a rival's potential product launch strategy, drawing on market signals, patent filings, and executive statements. The matrix-driven approach disproved a leading hypothesis of aggressive market entry by highlighting inconsistencies with observed resource allocations, averting a sunk-cost commitment to counter-strategies that would have misallocated $2.5 million in projected R&D funds. This case illustrated ACH's role in mitigating anchoring to initial assumptions, resulting in a shift to defensive positioning that preserved competitive parity.

Software Implementations

Open-Source and Commercial Tools

CompetingHypotheses.org provides a free, open-source web application for implementing Analysis of Competing Hypotheses (ACH), enabling users to construct evidence-hypothesis matrices, score inconsistencies and consistencies, and export results in formats such as PDF. Developed as a software companion to the CIA-originated methodology, it originated from the PARC ACH tool under the MIT License and became publicly available around 2010, with local installation possible for offline use. The Pherson ACH software, offered as a free downloadable executable for Windows, automates the process by facilitating hypothesis listing, evidence scoring, and automated inconsistency ranking to prioritize hypotheses. Distributed by Pherson Associates, it has been tested in courses at agencies including the CIA and FBI, emphasizing its role in reducing cognitive biases through structured scoring. ACH0 serves as an academic, experimental tool designed for hypothesis prioritization in intelligence analysis, featuring a table-oriented workspace for scoring evidence-hypothesis relations and applying algorithms to assess hypothesis viability. Developed for research purposes, it focuses on algorithmic handling of evidence-hypothesis mappings, distinguishing it from more user-friendly implementations by prioritizing computational transparency over broad accessibility. Other open-source efforts include Open Synthesis, a GitHub-based platform supporting ACH frameworks for public intelligence analysis, and additional repositories replicating ACH matrices for evidentiary evaluation. While commercial ACH-specific software remains limited, variants like the PARC ACH 2.0—also free and downloadable—stem from collaborative development with ACH pioneer Richards J. Heuer, underscoring the predominance of no-cost tools in this domain.

Features and Automation Capabilities

Software implementations of Analysis of Competing Hypotheses (ACH) extend the manual process by automating matrix construction and evaluation, where evidence is systematically assessed against competing hypotheses to identify inconsistencies and support refutation. Tools such as the open-source ACH platform facilitate automatic evaluation of data points against hypotheses, flagging points of contention that highlight inconsistencies across hypotheses rather than mere confirmations. This automation aids in pinpointing refutation paths, where a single strong inconsistency can eliminate a hypothesis, thereby reducing cognitive biases toward confirmation-seeking. Visualization features in these tools include dynamic evidence-hypothesis matrices, allowing users to adjust assessments and observe real-time updates to hypothesis rankings based on inconsistency counts or weighted scores. For instance, downloadable ACH software from Pherson Associates provides a visual trail of evidence evaluations, enabling iterative refinement without manual recalculation. Sensitivity analysis is incorporated to test how results depend on critical evidence items; users can simulate changes to key assessments—such as altering an inconsistent rating to neutral—and observe shifts in hypothesis viability, ensuring robustness against potential errors or deception. Evidence import capabilities vary but often support structured entry or file uploads to populate matrices efficiently, though full automation of sourcing remains limited. Despite these advances, core tools as of 2025 require human judgment for initial hypothesis generation and evidence selection, lacking integrated capabilities for automated hypothesis creation or probabilistic Bayesian updates beyond basic scoring. This human dependency preserves analytical rigor but constrains scalability for high-volume or real-time applications.
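
A sensitivity check of the kind described above can be sketched as follows, using hypothetical ratings: each cell is perturbed in turn, and the routine reports whenever the change would alter the top-ranked hypothesis.

  from collections import Counter

  hypotheses = ["H1", "H2", "H3"]
  matrix = {
      "E1": ["I", "C", "C"],
      "E2": ["C", "I", "I"],
      "E3": ["C", "I", "N"],
  }

  def top_hypothesis(m):
      # Rank by fewest inconsistent ("I") ratings, as in the basic ACH tally.
      tallies = Counter({h: 0 for h in hypotheses})
      for ratings in m.values():
          for h, r in zip(hypotheses, ratings):
              if r == "I":
                  tallies[h] += 1
      return min(hypotheses, key=lambda h: tallies[h])

  baseline = top_hypothesis(matrix)
  for item, ratings in matrix.items():
      for i, h in enumerate(hypotheses):
          trial = {k: list(v) for k, v in matrix.items()}
          trial[item][i] = "N" if ratings[i] == "I" else "I"  # flip one cell
          if top_hypothesis(trial) != baseline:
              print(f"Ranking is sensitive to the {item} rating for {h}")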

Comparisons with Alternative Techniques

Contrasts with Informal Analysis Methods

ACH imposes a rigorous, matrix-based framework that requires analysts to generate a comprehensive set of plausible hypotheses upfront and systematically evaluate all available evidence against each, prioritizing disconfirmation over mere consistency. This contrasts sharply with informal analysis methods, which often proceed intuitively by anchoring on an initial favored explanation and selectively gathering or interpreting evidence to support it, a process vulnerable to confirmation bias. Informal approaches mirror Kahneman's characterization of System 1 thinking—fast, associative, and heuristic-driven—but lack mechanisms to enforce evidence exhaustiveness, leading to premature hypothesis closure and overlooked alternatives. In scenarios of high ambiguity or voluminous data, informal methods falter due to cognitive limitations, such as availability heuristics that overweight salient but unrepresentative information. ACH counters this by mandating a tabular matrix where evidence is plotted against hypotheses to identify inconsistencies, fostering causal realism through explicit falsification criteria rather than probabilistic hunches. Empirical assessments of structured techniques, including ACH, demonstrate potential advantages over unstructured analysis in complex judgment tasks; however, a 2019 experimental study involving intelligence-like scenarios found mixed results, with ACH showing limited mitigation of confirmation bias and, in some cases, heightened judgment inconsistency compared to baseline intuitive analysis. Similarly, a 2023 evaluation of analytic aids reported that ACH did not significantly boost accuracy in analysts' judgments, underscoring that its benefits may depend on analyst expertise and task structure rather than inherent superiority. While ACH's formalism enhances traceability and debiasing in high-uncertainty environments—such as intelligence assessments where causal chains are opaque—it incurs time costs that informal methods avoid, rendering the latter more suitable for low-stakes, time-sensitive decisions with clear signals. Proponents argue ACH's emphasis on refutation aligns with scientific principles of hypothesis testing, providing a benchmark for analytic rigor absent in ad hoc deliberation, though real-world adoption reveals informal methods persist due to their cognitive fluency and adaptability in fluid contexts.

Relations to Other Structured Analytic Techniques

Analysis of Competing Hypotheses (ACH) integrates with other structured analytic techniques (SATs) by emphasizing systematic refutation across multiple explanations, often serving as a core method for hypothesis testing after initial brainstorming or assumption validation. Unlike techniques that isolate a single viewpoint for challenge, ACH employs a matrix to juxtapose evidence against all hypotheses, fostering balanced evaluation and reducing selective confirmation. This positions it as a complement to preparatory steps like brainstorming or assumption checks, and as a precursor to deeper explorations of refined leads. In contrast to Devil's Advocacy, which assigns a team or individual to construct the strongest possible case against a dominant hypothesis or judgment, ACH distributes scrutiny evenly across competitors, starting with inconsistency checks to eliminate options rather than building affirmative arguments for alternatives. Devil's Advocacy excels in probing high-confidence analytic lines for overlooked flaws, particularly when resources limit broad hypothesis sets, whereas ACH's tabular approach handles voluminous data more scalably for multi-option scenarios, avoiding the adversarial dynamics inherent in advocacy roles. ACH complements the Key Assumptions Check by extending early-stage assumption identification into rigorous evidence-based testing; the latter lists and evaluates pivotal unstated premises supporting a primary judgment, often at project outset, while ACH applies similar scrutiny after hypothesis listing to refute alternatives via contradictory indicators. This sequencing—assumptions first, then matrix refutation—enhances causal robustness, as unchecked assumptions can skew hypothesis viability in the matrix. Relative to Alternative Scenarios, which derive branching narratives from key drivers and uncertainties to explore future outcomes, ACH prioritizes explanatory falsification over probabilistic storytelling, mapping diagnostic evidence to disprove rather than elaborate hypotheses. Scenario methods suit forecasting with sparse data by constructing coherent paths, but risk confirmation through selective fit; ACH's evidence-hypothesis grid enforces disconfirmation priority, making it preferable for diagnostic puzzles with extant observables, though combinable with scenarios to validate projected indicators. ACH's distinguishing matrix enables parallel multi-hypothesis adjudication absent in single-focus methods like Team A/Team B debates, which simulate adversarial refinement of judgments but demand more interpersonal effort; ACH thus offers efficiency for solo or data-heavy analysis, while alternatives like High-Impact/Low-Probability Analysis pair well for outlier vetting after initial culling. Empirical applications in intelligence contexts highlight these synergies, with ACH often chained after assumption checks or before scenario development to mitigate mind-set biases systematically.

References

  1. Psychology of Intelligence Analysis - CIA [PDF]
  2. Improving Intelligence Analysis with ACH - Pherson [PDF]
  3. The Psychology of Intelligence Analysis [PDF]
  4. How Does Analysis of Competing Hypotheses (ACH) ... - Richards J. Heuer Jr., Pherson [PDF]
  5. Structured Analytic Techniques for Improving Intelligence Analysis ... [PDF]
  6. Why Bad Things Happen to Good Analysts - CIA [PDF]
  7. Analytic Culture in the U.S. Intelligence Community - CIA [PDF]
  8. Psychology of Intelligence Analysis - DTIC
  9. Richards J. Heuer Jr. (1927–2018) - CIA [PDF]
  10. Turning Intelligence Analysis on Its Head - Pherson (Dec 14, 2018)
  11. Psychology of Intelligence Analysis - All.Net
  12. Psychology of Intelligence Analysis - CSI, CIA (1999)
  13. Structured Analytic Techniques for Improving Intelligence Analysis ... [PDF]
  14. The "analysis of competing hypotheses" in intelligence analysis (Mar 21, 2019)
  15. Effects of task structure and confirmation bias in alternative ... (Jun 13, 2024)
  16. The Analysis of Competing Hypotheses in Legal Proceedings (Sep 7, 2025)
  17. ATA-2024-Unclassified-Report.pdf - DNI.gov (Mar 11, 2024) [PDF]
  18. Critical review of the Analysis of Competing Hypotheses technique
  19. Critical review of the Analysis of Competing Hypotheses technique
  20. Karl Popper: Falsification Theory - Simply Psychology (Jul 31, 2023)
  21. ATP 2-33.4 Intelligence Analysis (Aug 18, 2014) [PDF]
  22. Extending Heuer's Analysis of Competing Hypotheses Method to ... [PDF]
  23. Cognitive Bias in Intelligence Analysis: Testing ... - Oxford Academic (Sep 15, 2020)
  24. Critical review of the Analysis of Competing Hypotheses technique
  25. The "analysis of competing hypotheses" in intelligence analysis (Mar 21, 2019)
  26. Effects of task structure and confirmation bias in alternative ... (Jun 13, 2024)
  27. Forecasting Tournaments: Tools for Increasing Transparency ... (PDF)
  28. Evidence on good forecasting practices from the Good Judgment ...
  29. Beyond Bias Minimization: Improving Intelligence with Optimization ... (Sep 13, 2023)
  30. Mastering the Analysis of Competing Hypotheses (ACH): A Practical ... (Jun 20, 2025)
  31. Trapped by a Mindset: The Iraq WMD Intelligence Failure [PDF]
  32. Analytic Standards - DNI.gov (Jan 2, 2015) [PDF]
  33. Test of the analysis of competing hypotheses in legal decision-making (Aug 24, 2020)
  34. Analysis of Competing Hypotheses: An Overview + Example - Crayon (Oct 24, 2022)
  35. Analysis of Competing Hypotheses (ACH part 1) - SANS ISC (May 28, 2017)
  36. FOR578: Cyber Threat Intelligence - SANS Institute
  37. The "analysis of competing hypotheses" in intelligence analysis (PDF, May 29, 2025)
  38. REvil: Analysis of Competing Hypotheses - ReliaQuest (Jul 28, 2021)
  39. The Open Source Analysis of Competing Hypotheses Project
  40. Licensing - The Open Source Analysis of Competing Hypotheses ...
  41. Competing Hypotheses - Made for the CIA. Now free to the public. (Aug 25, 2010)
  42. Analysis of Competing Hypotheses - Discover Your Solutions LLC
  43. ACH0: A Tool for Analyzing Competing Hypotheses [PDF]
  44. twschiller/open-synthesis: Open platform for CIA-style ... - GitHub
  45. Burton/Analysis-of-Competing-Hypotheses - GitHub
  46. Pherson
  47. Boosting intelligence analysts' judgment accuracy: What works, what ... (Jan 1, 2023)
  48. CHOOSING THE RIGHT TECHNIQUE - Sage Publishing