
Insensitivity to sample size

Insensitivity to sample size is a cognitive bias in which people assess the probability of obtaining a particular result from a sample drawn from a specified population without adequately considering the size of the sample, leading to systematic errors in judgment. This bias arises primarily from reliance on the representativeness heuristic, where individuals evaluate the likelihood of an event based on how closely a sample resembles the population stereotype, ignoring the statistical principle that larger samples provide more stable and reliable estimates due to reduced variability. Psychologists Amos Tversky and Daniel Kahneman first demonstrated this phenomenon in their seminal 1974 study, highlighting how people overestimate the precision of small samples and underestimate variability in probability assessments. A classic illustration involves two hospitals in a town: the larger one delivers about 45 babies per day, while the smaller delivers about 15, with the overall birth rate of boys being 50% but fluctuating daily. When asked which hospital would record more days on which over 60% of newborns were boys during a year, a majority of participants (56% in the original study) incorrectly judged it about the same for both, despite statistical theory predicting higher variability—and thus more extreme proportions such as over 60% boys—in the smaller hospital because of its limited sample size. This error reflects a broader "belief in the law of small numbers," in which small samples are treated as highly representative of the population, akin to the flawed belief that the law of large numbers applies equally to tiny datasets. The bias has significant implications across fields such as statistics, medicine, and policy-making, often resulting in overconfidence in preliminary data from underpowered studies or polls. For instance, in medical research, ignoring sample size can lead to premature conclusions about treatment efficacy from small trials, potentially misleading clinical practices. Empirical studies continue to replicate this effect, showing it persists even among statistically trained individuals, underscoring its robustness as a fundamental deviation from normative Bayesian reasoning.

Definition and Background

Definition

Insensitivity to sample size is a cognitive bias characterized by the tendency to evaluate the probability of obtaining a particular sample statistic, or the reliability of an estimate, without properly accounting for the size of the sample from which the data are derived, often resulting in overgeneralization from limited data. This error stems from reliance on the representativeness heuristic, where judgments focus on how closely a sample resembles a population parameter while disregarding the statistical principle that larger samples yield more stable and accurate representations because of reduced variability. A key feature of this bias is the failure to appreciate the law of large numbers, which implies that as sample size increases, the sample statistic converges more closely to the population value, making extreme outcomes less probable in larger datasets. Individuals exhibiting this insensitivity often treat small and large samples as equivalently informative, leading to assumptions such as "a few cases prove the rule," in which evidence from minimal observations is given undue weight over broader evidence. In research and decision contexts, this manifests as selecting inadequately sized samples or overinterpreting preliminary findings as definitive. The bias is distinct from base rate neglect, another representativeness-based error, in that it specifically involves overlooking the role of sample size in modulating the precision of estimates, rather than failing to integrate prior probabilities into probabilistic judgments.

Historical Development

The roots of insensitivity to sample size trace back to early statistical theory, particularly Jacob Bernoulli's formulation of the law of large numbers in his 1713 treatise Ars Conjectandi, which established that empirical proportions converge to theoretical probabilities as the number of trials increases, laying the groundwork for understanding sampling variability. However, the psychological phenomenon of insensitivity to sample size—where individuals fail to appreciate this convergence and treat small samples as highly representative—did not emerge as a distinct concept until the late 20th century, marking a shift from purely mathematical principles to cognitive explanations. The formal introduction of insensitivity to sample size in psychology occurred in 1971 through the seminal work of Amos Tversky and Daniel Kahneman, who coined the term "belief in the law of small numbers" to describe the erroneous intuition that small samples should mirror population characteristics as closely as large ones. In their paper published in Psychological Bulletin, Tversky and Kahneman demonstrated this bias through scenarios involving statistical judgments, highlighting how it stems from intuitive misconceptions about chance. As the primary developers of the concept, they integrated it into the broader heuristics and biases framework, which gained prominence in the 1980s, as evidenced by their influential 1982 edited volume Judgment Under Uncertainty: Heuristics and Biases, where insensitivity to sample size was framed as a manifestation of the representativeness heuristic. In the following decades, empirical research expanded on these foundations, with studies exploring the bias's robustness across contexts, such as judgments of proportion variability, confirming persistent insensitivity even among statistically trained individuals. Post-2000, the concept found significant applications in behavioral economics, bolstered by Kahneman's 2002 Nobel Memorial Prize in Economic Sciences for integrating psychological insights like this into economic decision-making models. Later work, such as that of Gerd Gigerenzer, critiqued the bias within the heuristics and biases program through the lens of ecological rationality, proposing that apparent insensitivities might reflect adaptive strategies in uncertain environments rather than mere errors.

Cognitive and Statistical Foundations

Underlying Statistical Principles

The law of large numbers (LLN) is a foundational theorem in probability theory stating that, under mild conditions, the sample mean \bar{X}_n = \frac{1}{n} \sum_{i=1}^n X_i of independent and identically distributed random variables X_i with finite mean \mu converges to \mu as the sample size n approaches infinity. This convergence occurs in probability (weak LLN) or almost surely (strong LLN), implying that larger samples provide increasingly reliable estimates of the population mean. The rate of this convergence is quantified by the standard error of the sample mean, given by \text{SE}(\bar{X}_n) = \frac{\sigma}{\sqrt{n}}, where \sigma is the population standard deviation; as n increases, the standard error diminishes, reducing the expected deviation between the sample mean and the true mean. Sampling variability arises from the randomness inherent in drawing samples from a population, manifesting as differences in sample statistics across repeated samplings. In small samples, this variability is pronounced, resulting in high variance of estimators like the sample mean and correspondingly wide confidence intervals that reflect substantial uncertainty in the estimate. Larger sample sizes mitigate this variability by averaging out random fluctuations, leading to narrower intervals and greater precision in inferring population parameters. For instance, the width of a confidence interval is inversely proportional to \sqrt{n}, underscoring how increased sampling reduces error. The central limit theorem (CLT) complements the LLN by addressing the distributional properties of sample means for large n. It asserts that, for independent and identically distributed random variables with finite mean \mu and variance \sigma^2 > 0, the standardized sample mean Z_n = \frac{\bar{X}_n - \mu}{\sigma / \sqrt{n}} converges in distribution to a standard normal distribution N(0, 1), irrespective of the underlying population distribution. This approximation facilitates probabilistic inference, such as constructing confidence intervals or performing hypothesis tests, using the standard error \sigma / \sqrt{n} to scale the variability. The CLT typically requires n \geq 30 for reasonable accuracy in many practical settings. Normative statistical inference prescribes weighting evidence or estimates according to their precision, which is inversely related to variance and thus increases with sample size. In frameworks like Bayesian updating, for example, the posterior mean is a precision-weighted average of the prior and sample means, where the weight for the sample is proportional to n (reflecting precision n / \sigma^2). This approach ensures that larger samples exert greater influence on inferences, optimizing accuracy under uncertainty. In contrast, descriptive judgments often deviate from this norm by treating samples of varying sizes as equally informative, leading to suboptimal probabilistic assessments.
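To make the \sigma / \sqrt{n} relationship concrete, the short simulation below is a minimal sketch in Python; the exponential population, its parameters, and the replication count are illustrative assumptions rather than part of any cited study. It draws repeated samples of several sizes and compares the observed spread of the sample means with the theoretical standard error.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Illustrative population: exponential with mean mu = 2.0, so sigma = 2.0 as well.
mu, sigma = 2.0, 2.0
replications = 10_000  # independent samples drawn at each sample size

for n in (10, 100, 1_000):
    # Draw `replications` samples of size n and compute each sample mean.
    samples = rng.exponential(scale=mu, size=(replications, n))
    sample_means = samples.mean(axis=1)

    empirical_se = sample_means.std(ddof=1)   # observed spread of the sample means
    theoretical_se = sigma / np.sqrt(n)       # standard error formula: sigma / sqrt(n)

    print(f"n = {n:5d}: empirical SE = {empirical_se:.3f}, "
          f"sigma/sqrt(n) = {theoretical_se:.3f}")
```

For this population the printed values fall close to 0.63, 0.20, and 0.06, illustrating both the law of large numbers (sample means cluster ever more tightly around \mu) and the 1/\sqrt{n} shrinkage of the standard error.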

Psychological Mechanisms

Insensitivity to sample size arises primarily from use of the representativeness heuristic, whereby individuals assess the probability of a sample outcome based on its similarity to a prototypical population, disregarding the sample's size as an indicator of reliability. This heuristic leads people to treat small and large samples as equally representative, failing to account for the greater stability of larger samples in reflecting true population characteristics. The availability heuristic contributes by prompting overreliance on easily retrievable, vivid instances from small samples, which overshadow the more abstract, aggregated evidence from larger datasets. Vivid anecdotes or memorable small-sample events become disproportionately influential in probability judgments, as their mental accessibility biases perceptions away from considering sample size and variability. Cognitive limitations rooted in bounded rationality further exacerbate this insensitivity, as humans possess finite computational resources that hinder an intuitive grasp of how sample size affects probabilistic reliability. Under these constraints, individuals default to simplified rules of thumb rather than engaging in the complex calculations needed to weigh varying sample sizes, leading to systematic errors in probability judgment. Within dual-process theory, this bias manifests through the dominance of System 1 thinking—fast, intuitive, and heuristic-driven—which routinely overlooks sample size in favor of superficial resemblance, while System 2 thinking—slow and analytical—can intervene to incorporate size effects but is often underutilized because of its effortful nature. System 1's automatic reliance on representativeness thus perpetuates the error unless deliberate reflection activates System 2 to apply principles such as the law of large numbers.

Examples and Demonstrations

Illustrative Experiments

One classic experimental demonstration of insensitivity to sample size was conducted by Tversky and Kahneman, who presented participants with a hypothetical scenario contrasting small and large samples in a probability judgment task. In their study, 95 undergraduate students evaluated the following problem: A certain town is served by two hospitals. In the larger hospital, approximately 45 babies are born each day; in the smaller hospital, about 15 babies are born each day. Roughly 50% of babies born in these hospitals are boys, though the exact percentage varies from day to day. Over one year, each hospital recorded the number of days on which more than 60% of the babies born were boys. Participants were asked which hospital recorded more such days: the larger one, the smaller one, or about the same number. The correct response is the smaller hospital, as smaller samples exhibit greater variability due to sampling error, making extreme proportions like over 60% boys more likely. However, responses were distributed as 22% for the larger hospital, 22% for the smaller, and 56% for about the same, revealing that over half the participants failed to adjust their judgments for the difference in sample sizes. Tversky and Kahneman further illustrated this in a related task involving judgments of sample statistics from samples of varying sizes. Participants, again undergraduates, were asked to estimate the probability that the average height of adult males in a sample exceeds 5 feet 10 inches, for samples of 10, 100, or 1,000 men drawn from a population with a mean height of 5 feet 9 inches and a standard deviation of 2.5 inches. Statistically, this probability decreases as sample size increases because larger samples converge more closely to the population mean. Yet the probability estimates were nearly identical across sample sizes (around 0.30 for all), indicating equal weighting of evidence regardless of whether it came from small or large samples. This pattern, observed in approximately 60-70% of responses across similar tasks with undergraduate samples, underscores how individuals often treat small samples as equally informative as large ones, neglecting the stabilizing effect of increased n.
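The maternity-ward intuition can be checked directly with the binomial distribution. The sketch below, in plain Python, uses a standard exact-binomial calculation; the 365-day scaling simply follows the problem statement, and the daily birth counts of 15 and 45 come from the scenario above.

```python
from math import comb

def prob_more_than_60_percent_boys(n_births: int, p_boy: float = 0.5) -> float:
    """Exact binomial probability that more than 60% of n_births babies are boys."""
    return sum(
        comb(n_births, k) * p_boy**k * (1 - p_boy) ** (n_births - k)
        for k in range(n_births + 1)
        if k / n_births > 0.6  # strictly more than 60% boys on that day
    )

for n in (15, 45):
    p = prob_more_than_60_percent_boys(n)
    print(f"{n} births/day: P(>60% boys) ≈ {p:.3f}, "
          f"expected such days per year ≈ {365 * p:.0f}")
```

Under these assumptions the smaller hospital sees such days roughly 15% of the time (about 55 days a year) versus roughly 7% (about 25 days) for the larger one, which is why "about the same" is the normatively incorrect answer.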

Real-World Scenarios

In medical diagnostics, practitioners and patients often overreact to outcomes from small patient samples, such as attributing a rare side effect observed in just a few cases to a broader risk, while discounting evidence from large-scale clinical trials demonstrating its rarity. For instance, in placebo-controlled trials of second-line antirheumatic drugs, small sample sizes led to exaggerated estimates of efficacy, which diminished as sample sizes increased to hundreds of patients per arm, highlighting how initial small-sample enthusiasm can mislead diagnostic decisions. Similarly, studies on brain imaging frequently rely on tiny samples of under 25 participants, resulting in low statistical power and inflated false positive rates for diagnostic correlations, prompting calls for larger cohorts to validate findings. In marketing and product development, decision-makers commonly judge product success based on initial feedback from small user groups, overlooking how variability decreases with larger audiences and leading to premature scaling or abandonment. A classic illustration is claims like "4 out of 5 dentists recommend," which imply reliability without disclosing the tiny sample—often fewer than 20 respondents—thus misleading consumers and inflating perceived endorsement rates. This insensitivity contributes to flawed market analyses, where early tests with dozens of users are extrapolated to millions, ignoring that random fluctuations in small groups do not predict population-level trends. Media coverage and social media frequently amplify anecdotal stories from minuscule samples, fostering widespread misconceptions, as seen in vaccine hesitancy driven by reports of isolated adverse events. For example, during vaccination campaigns, single anecdotes of side effects shared on social media outweighed statistical data from millions of doses showing low incidence rates, triggering biases that elevated perceived risks and reduced uptake. Such amplification occurs because vivid, small-sample narratives are more memorable than aggregated trial results, leading public discourse to undervalue the stability of large-scale evidence. In policy-making, educational reforms are often enacted based on promising results from pilot programs with limited participants, only to falter when scaled nationally because of unaccounted variability in larger populations. Systematic reviews in education research reveal that studies with small samples—typically under 100 students—report effect sizes up to twice as large as those from larger trials, influencing policies such as class-size reductions that underperform when expanded. This pattern has contributed to inefficient resource allocation, as initial pilots in under 50 schools suggest dramatic improvements that evaporate in district-wide implementations lacking sample-size adjustments.
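To illustrate why a "4 out of 5" style endorsement conveys much less than it seems, the sketch below is a hypothetical Python example; the Wilson score interval and the comparison sample sizes are assumptions chosen for illustration, not taken from any cited survey. It contrasts the uncertainty around an 80% rate observed in 5 respondents with the same rate observed in larger samples.

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half_width = (z / denom) * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half_width, centre + half_width

for successes, n in [(4, 5), (80, 100), (800, 1000)]:
    lo, hi = wilson_interval(successes, n)
    print(f"{successes}/{n} recommend (80%): 95% CI ≈ [{lo:.2f}, {hi:.2f}]")
```

With 5 respondents the interval stretches roughly from 0.38 to 0.96—consistent with anything from a minority to near-unanimity—whereas with 1,000 respondents it narrows to about 0.77–0.82.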

Implications and Applications

Effects on Decision-Making

Insensitivity to sample size often results in judgment errors in which individuals underestimate the variability inherent in small samples, leading to overconfident decisions in high-stakes contexts such as investments or public policies. For instance, decision-makers may interpret a short sequence of favorable outcomes from limited data as indicative of a stable trend, ignoring the higher variability possible in smaller datasets, which can precipitate misguided investments or policy implementations. This exacerbates overconfidence by treating small-sample statistics as reliable predictors, potentially resulting in financial losses or ineffective strategies when the underlying variability is not accounted for. In group dynamics, insensitivity to sample size contributes to shared-information biases, particularly when teams rely on limited shared data, amplifying errors through mechanisms like echo chambers in small social or professional groups. Members may collectively overlook sample limitations, reinforcing flawed judgments as the group converges on unrepresentative evidence, which hinders diverse perspectives and robust decision-making. Such dynamics are evident in collaborative settings where preliminary small-scale findings dominate discussions, leading to premature agreements that propagate inaccuracies across the group. Economically, this manifests in behavioral finance, where investors base stock trading decisions on short-term small-sample trends, mistaking transient fluctuations for persistent patterns and incurring avoidable risks. To mitigate these effects, simple interventions such as prompting awareness of sample size can reduce the bias, encouraging more calibrated judgments without requiring extensive statistical training. Techniques such as analogical encoding, which highlight structural similarities across statistical scenarios, have demonstrated effectiveness in improving decision accuracy by fostering recognition of sample variability.
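As a toy illustration of how short favorable runs arise by chance alone, the snippet below is a hypothetical Python example; the number of "funds," the streak length, and the absence of any modeled skill are assumptions made purely for illustration. It counts how many of 100 random funds beat a benchmark three years running.

```python
import random

random.seed(7)

def count_lucky_streaks(n_funds: int = 100, n_years: int = 3, trials: int = 10_000) -> float:
    """Average number of funds (out of n_funds) that beat a benchmark every one of
    n_years, when each fund has an independent 50% chance of beating it each year."""
    total = 0
    for _ in range(trials):
        total += sum(
            all(random.random() < 0.5 for _ in range(n_years))
            for _ in range(n_funds)
        )
    return total / trials

print(f"Expected 'streak' funds per 100: {count_lucky_streaks():.1f}  "
      f"(theory: 100 * 0.5**3 = {100 * 0.5**3:.1f})")
```

Roughly a dozen of 100 skill-free funds will show a three-year winning streak, so a short favorable run drawn from a small sample is weak evidence of a stable trend.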

Role in Education and Training

In introductory statistics courses, insensitivity to sample size is addressed through targeted curriculum integration that emphasizes the law of large numbers, often using interactive simulations to illustrate how estimates converge to population parameters as sample size increases. For instance, students engage in activities in which they generate multiple samples from known distributions, such as coin flips or dice rolls, and observe the reduction in variability of sample means or proportions with larger n; this hands-on approach helps counteract the intuitive belief that small samples are as reliable as large ones. Debiasing techniques in these educational settings include presenting information in frequency formats rather than abstract probabilities to highlight sample size effects, as well as employing visual aids such as plots of confidence intervals that shrink with increasing n to demonstrate precision gains. These methods encourage learners to consider sample adequacy explicitly, reducing reliance on representativeness-based judgments. For example, tools such as animated simulations or software-generated graphs allow students to manipulate sample sizes and visualize outcomes, fostering a more normative understanding of sampling variability. Despite such instruction, challenges persist in education, with students often retaining errors like overgeneralizing from small samples due to entrenched intuitive heuristics that resist formal teaching. Research shows that even after instruction, misconceptions about sampling variability endure, particularly among novices who struggle to integrate sample size with variability in probabilistic reasoning. Adult learners may exhibit greater resistance, as prior experience reinforces the bias, complicating efforts to promote statistical thinking. In professional training programs, such as modules on evidence-based medicine, insensitivity to sample size is tackled through scenario-based exercises that apply statistical principles to clinical trials and diagnostic studies. Curricula incorporate the Inventory of Cognitive Bias in Medicine to assess and mitigate this bias, using real-world cases to train practitioners in evaluating study reliability based on sample adequacy, thereby enhancing critical appraisal skills in healthcare decision-making.
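A classroom exercise of the kind described above can be scripted in a few lines. The sketch below is illustrative Python; the sample sizes, repetition count, and fair-coin population are assumptions rather than elements of a particular curriculum. It repeatedly samples coin flips and reports how the spread of sample proportions, and the share of "extreme" samples, shrink as n grows.

```python
import random

random.seed(1)

def sample_proportions(n_flips: int, n_samples: int = 2_000) -> list[float]:
    """Proportion of heads in each of n_samples samples of n_flips fair-coin tosses."""
    return [
        sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips
        for _ in range(n_samples)
    ]

for n in (10, 100, 1_000):
    props = sorted(sample_proportions(n))
    # Central 95% band of the simulated sample proportions (an empirical interval).
    lo = props[int(0.025 * len(props))]
    hi = props[int(0.975 * len(props)) - 1]
    extreme = sum(p > 0.6 for p in props) / len(props)
    print(f"n = {n:4d}: middle 95% of proportions ≈ [{lo:.2f}, {hi:.2f}], "
          f"samples with >60% heads = {extreme:.1%}")
```

Students typically see the central band contract from roughly [0.2, 0.8] at n = 10 to roughly [0.47, 0.53] at n = 1,000, while the share of samples exceeding 60% heads falls from about 17% to essentially zero.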

Research and Criticisms

Key Studies

One of the seminal compilations of research on insensitivity to sample size is found in Peter Sedlmeier's 1999 book, Improving Statistical Reasoning: Theoretical Models and Practical Implications, which reviews over 20 studies from the 1970s onward. These investigations, primarily using experimental tasks such as estimating probabilities from described samples or judging the likelihood of outcomes in hypothetical scenarios, consistently demonstrated that participants failed to adjust their judgments for sample size, treating small and large samples as equally reliable indicators of population parameters. The review highlighted the robustness of this bias in diverse experimental contexts, including sampling problems and frequency estimation tasks, with participants often overgeneralizing from small samples because of reliance on the representativeness heuristic. Cross-cultural research in the 2000s revealed variations in the degree of insensitivity to sample size, often linked to differences in formal statistical education between Western and non-Western samples. Comparisons of Asian and Western cohorts during the same decade underscored how educational backgrounds moderate the bias, with non-Western samples displaying elevated insensitivity in tasks involving sample-based predictions. Longitudinal studies have tracked the impact of statistical education on reducing insensitivity to sample size, revealing moderate effect sizes over time. In a 2015 experiment by Aczel et al., 154 university students underwent pre- and post-training assessments four weeks apart, with an analogical intervention group showing significant improvement in recognizing sample size effects (p = 0.03) compared to awareness and control groups; this debiasing persisted in follow-up reports of real-life application, indicating that targeted education can diminish the bias. Such findings suggest that repeated exposure to statistical principles gradually enhances sensitivity, though full elimination remains challenging without ongoing practice.

Debates and Limitations

One prominent debate surrounding insensitivity to sample size concerns its potential ecological rationality, particularly in environments characterized by uncertainty and naturally occurring small samples. Gigerenzer and colleagues argue that what appears as a bias in laboratory settings may actually reflect adaptive heuristics suited to real-world frequency estimation tasks, where human intuition aligns with the empirical law of large numbers—observing that larger samples tend to yield more stable proportions closer to population values. For instance, in the classic maternity ward problem, people intuitively favor larger hospitals for more reliable estimates of birth proportions, which proves effective for frequency distributions but less so for sampling variability in controlled experiments; this insensitivity thus serves as a "fast and frugal" tool in ecologically valid contexts with limited data. Methodological critiques highlight challenges in accurately measuring the bias, including difficulties in distinguishing it from related heuristics like representativeness and confounding factors such as numeracy levels. Studies attempting to quantify individual differences in cognitive biases often rely on scenario-based tasks, such as estimating probabilities from varying sample sizes, but these instruments show inconsistent factorization across questionnaires, complicating isolation of insensitivity from overlapping errors like the gambler's fallacy. Moreover, lower numeracy skills can exacerbate apparent insensitivity by impairing overall statistical comprehension, leading researchers to question whether observed effects truly capture the bias or merely reflect educational disparities. Debates also persist regarding the universality of insensitivity to sample size versus variability across cultural and individual factors, such as expertise levels. While the bias is often portrayed as near-universal, emerging evidence suggests moderation by expertise, with novices more prone to overlooking sample size in probabilistic judgments compared to domain experts, who integrate it more readily owing to training. Cross-cultural studies remain sparse, but methodological reviews indicate that sample incomparability and cultural variations in statistical exposure may inflate perceived universality, calling for more diverse participant pools to assess true generalizability. Research gaps further underscore limitations in the bias's conceptualization, particularly its under-exploration in human-AI interactions and field studies. For instance, post-2020 research includes Zhan and Savani (2022), who found relative insensitivity to sample sizes differing by orders of magnitude in frequency judgments across six experiments. In explainable AI (XAI) systems, users often exhibit insensitivity when explanations provide both confidence scores and support metrics, fixating on the former and ignoring sample-derived support, which can lead to misapplied decisions in tasks like risk prediction. This highlights a need for targeted interventions, such as frequency-based formats, yet empirical investigations in real-world AI-assisted scenarios remain limited, with calls for longitudinal field studies to evaluate the bias beyond lab vignettes. Overall, while controlled experiments predominate, recent work as of 2022 prompts advocacy for ecologically diverse, real-time data collection to refine the bias's scope.
