
Sensitivity index

The sensitivity index, commonly denoted as d' (d-prime), is a dimensionless statistic in signal detection theory that quantifies an observer's perceptual ability to distinguish a target signal from background noise, independent of any response bias. It is defined as the standardized distance between the means of the signal-plus-noise and noise-only distributions, typically assuming equal variance, and is calculated using the formula d' = z(HR) - z(FAR), where z(HR) is the z-score of the hit rate (correctly identifying the signal) and z(FAR) is the z-score of the false alarm rate (incorrectly identifying noise as signal). Signal detection theory, which introduced the sensitivity index, originated from World War II applications in radar signal detection and was adapted to psychophysics in the mid-20th century to address limitations in classical threshold models that conflated sensory sensitivity with decision-making criteria. The seminal work by David M. Green and John A. Swets formalized these concepts in their book Signal Detection Theory and Psychophysics, establishing d' as a core metric for evaluating detection performance across varying stimulus intensities and observer states. This separation of sensitivity from response bias, where bias is measured by the criterion c = -[z(HR) + z(FAR)] / 2, enabled more rigorous experimental designs in sensory research. In practice, d' values range from 0 (indicating no discriminability, equivalent to chance performance) to higher magnitudes (e.g., d' ≈ 4 for near-perfect detection), and it is often derived from receiver operating characteristic (ROC) curves, whose area under the curve (approximated nonparametrically by A') provides a related bias-free summary of sensitivity. The index has broad applications beyond basic psychophysics, including auditory and visual perception studies, medical diagnostics (such as imaging interpretation), recognition memory tasks, and even behavioral analysis in non-human animals under operant conditioning paradigms. Its robustness to bias makes it invaluable for assessing real-world detection scenarios, like hazard identification in safety-critical environments.

Introduction

Purpose and Overview

The sensitivity index, commonly denoted as d', serves as a dimensionless measure in signal detection theory to assess the discriminability between two probability distributions: one for the signal-absent (noise-only) condition and the other for the signal-present condition. It quantifies the separation between the means of these distributions, normalized by the standard deviation, providing a standardized measure of how distinguishable the signal is from noise. This normalization enables direct comparisons across diverse experimental contexts, such as auditory thresholds or visual detection tasks, without dependence on absolute scale. Higher values of d' reflect greater separability between the distributions, indicating enhanced detectability of the signal while excluding considerations of response bias, which is addressed separately by the decision criterion. For instance, a d' of 2 indicates that the mean of the signal distribution lies two standard deviations from the mean of the noise distribution, resulting in limited overlap and high detection potential. The index thus emphasizes pure sensitivity, allowing researchers to isolate perceptual acuity from response tendencies. Although adapted from radar engineering to psychophysics to model human sensory discrimination, the sensitivity index primarily functions as a tool for statistical discriminability applicable beyond perception, including in machine learning classifiers and medical diagnostics. In the univariate case, this is illustrated by two overlapping distributions along a single decision axis, where increasing mean separation reduces overlap (e.g., noise centered at zero and the signal distribution shifted rightward). For the bivariate extension, consider two-dimensional distributions with means separated along a direction in the plane, highlighting how multidimensional evidence enhances overall discriminability when covariances are equal.

Relation to Signal Detection Theory

Signal detection theory (SDT) provides a framework for analyzing an observer's ability to detect a signal embedded in noise, where the task involves distinguishing trials with signal-plus-noise from those with noise alone. In this context, performance is quantified using hit rates, which represent the proportion of signal-present trials correctly identified, and false alarm rates, which indicate the proportion of noise-alone trials incorrectly classified as signal-present. Receiver operating characteristic (ROC) curves plot hit rates against false alarm rates across varying decision criteria, offering a visual summary of detection performance that remains invariant to changes in response criterion. SDT has roots in radar engineering applications during World War II, emerged more fully in the 1950s, and was further developed in the 1960s to improve signal detection amid electronic noise; it was subsequently adapted to psychophysics to model perceptual thresholds more robustly than classical methods. Key contributions came from researchers like David M. Green and John A. Swets, whose 1966 work formalized the theory's application to human sensory processes, emphasizing the role of internal noise and decision-making. Within SDT, discriminability refers to the observer's capacity to separate the underlying signal-plus-noise and noise-alone distributions, measured independently of response bias, which reflects tendencies to favor one response category over another. Under the assumption of equal variances for these distributions, discriminability can be estimated using the z-transformation, in which the inverse cumulative normal distribution function is applied to the hit rate and the false alarm rate, and their difference yields a bias-free index. This quantity, d', serves as a summary measure of ROC curve performance in the equal-variance case.

Basic Assumptions and Equal Variance Case

Univariate Equal Variance Definition

In signal detection theory, the univariate equal variance case assumes that the internal responses to signal-present (A) and signal-absent (B) stimuli follow univariate normal distributions with equal standard deviations, providing the foundational model for measuring sensitivity. The index, denoted as d', quantifies the observer's ability to discriminate between these two distributions and is defined as the difference in their means divided by the common standard deviation: d' = \frac{|\mu_A - \mu_B|}{\sigma}, where \mu_A and \mu_B are the means of the signal-plus-noise and noise-alone distributions, respectively, and \sigma is the shared standard deviation. This formulation arises from the optimal decision process in which the observer compares the likelihood of a response arising from either distribution, assuming Gaussian noise. The value of d' inversely reflects the overlap between the two distributions: d' = 0 indicates complete overlap and no discriminable difference, while higher values signify greater separation and better discriminability, with d' > 3 typically denoting excellent performance in psychophysical tasks. For instance, consider two normal distributions where \mu_A = 1, \mu_B = 0, and \sigma = 0.5; here, d' = |1 - 0| / 0.5 = 2, representing substantial but not maximal separation suitable for many detection scenarios. Empirically, d' is often estimated from observed response rates without direct knowledge of the underlying parameters, using the approximation d' ≈ Z(H) - Z(F), where H is the hit rate (correctly identifying signals), F is the false alarm rate (incorrectly identifying noise as signals), and Z denotes the inverse cumulative distribution function of the standard normal distribution. This estimation assumes the equal variance Gaussian model and allows sensitivity to be separated from response bias in experimental data.
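
As a minimal sketch, the estimates d' ≈ Z(H) - Z(F) and c = -[Z(H) + Z(F)] / 2 can be computed with SciPy's inverse normal CDF; the hit and false alarm rates below are illustrative rather than empirical, and rates of exactly 0 or 1 would require a standard correction before applying the z-transform.

```python
from scipy.stats import norm

def dprime(hit_rate, fa_rate):
    """Equal-variance Gaussian estimate: d' = Z(H) - Z(F)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def criterion(hit_rate, fa_rate):
    """Decision criterion (response bias): c = -[Z(H) + Z(F)] / 2."""
    return -(norm.ppf(hit_rate) + norm.ppf(fa_rate)) / 2

# Illustrative rates: 84% hits, 31% false alarms
H, F = 0.84, 0.31
print(round(dprime(H, F), 2), round(criterion(H, F), 2))  # ~1.49, ~-0.25
```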

Multivariate Equal Variance Extension

The multivariate equal variance extension of the sensitivity index d' generalizes the univariate case to scenarios where perceptual representations are modeled as multivariate normal distributions with equal covariance matrices across stimulus classes. In this framework, known as General Recognition Theory (GRT), the sensitivity index measures the discriminability between two stimuli, such as signal-plus-noise and noise-only, by accounting for the full covariance structure of the perceptual space. The multivariate d' is given by the square root of the quadratic form involving the difference in mean vectors and the inverse of the common covariance matrix S: d' = \sqrt{(\boldsymbol{\mu}_a - \boldsymbol{\mu}_b)^T S^{-1} (\boldsymbol{\mu}_a - \boldsymbol{\mu}_b)}, where \boldsymbol{\mu}_a and \boldsymbol{\mu}_b are the mean vectors of the two distributions; this expression is the Mahalanobis distance between the means, standardized to reflect perceptual separability under equal covariances. This measure reduces to the univariate d' when the dimensionality is 1, as the covariance matrix simplifies to a scalar variance. In the bivariate case, assuming unit variances along each dimension, the squared sensitivity index expands to {d'}^2 = \frac{{d'_x}^2 + {d'_y}^2 - 2\rho d'_x d'_y}{1 - \rho^2}, where \rho is the correlation between the dimensions, and d'_x and d'_y are the univariate sensitivity indices along each dimension. This formula illustrates how correlations influence overall discriminability: positive \rho introduces redundancy, reducing d' compared to the uncorrelated case (\rho = 0), while negative \rho can enhance it by providing complementary information across dimensions. Computationally, evaluating multivariate d' requires inverting the covariance matrix S, which is straightforward and efficient for low-dimensional spaces (e.g., 2–5 dimensions) using standard linear algebra routines, though it becomes more demanding as dimensionality increases due to the O(n^3) cost of inversion.
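
A short NumPy sketch of this computation evaluates the Mahalanobis form for a shared covariance matrix and checks it against the closed-form bivariate expression above; the means, variances, and correlation in the example are arbitrary.

```python
import numpy as np

def dprime_shared_cov(mu_a, mu_b, S):
    """Multivariate d': Mahalanobis distance between means under shared covariance S."""
    diff = np.asarray(mu_a, float) - np.asarray(mu_b, float)
    return float(np.sqrt(diff @ np.linalg.solve(S, diff)))

# Bivariate example with unit variances and correlation rho
dx, dy, rho = 1.0, 1.0, 0.5
S = np.array([[1.0, rho],
              [rho, 1.0]])
d_matrix = dprime_shared_cov([dx, dy], [0.0, 0.0], S)
d_closed = np.sqrt((dx**2 + dy**2 - 2 * rho * dx * dy) / (1 - rho**2))
print(d_matrix, d_closed)  # both ~1.155, below the uncorrelated value sqrt(2) ~ 1.414
```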

Unequal Variance Cases

Overview of Challenges

In real-world applications of signal detection theory (SDT), the assumption of equal variances between signal and noise distributions often fails to hold, as empirical data frequently exhibit heteroscedasticity (unequal variances) or differing covariances across conditions. This discrepancy arises in domains such as psychophysics and recognition memory, where factors like stimulus variability and encoding processes lead to greater spread in signal responses compared to noise. Forcing the equal-variance formula for the sensitivity index d' onto such data results in biased estimates, as the metric no longer accurately reflects the true separation between distributions. The consequences of this assumption are particularly pronounced in classification tasks, where it can lead to systematic underestimation of discriminability. For instance, in psychophysical experiments involving detection of faint visual or auditory signals amid noise, the signal distribution's variance often exceeds that of the noise, causing the equal-variance d' to underestimate sensitivity relative to more accurate measures. Similarly, in recognition memory tasks, unequal variances between "old" and "new" item strengths distort sensitivity estimates, skewing interpretations of perceptual or cognitive efficiency and potentially misleading conclusions about response criteria. To address these limitations, alternative sensitivity indices have been developed that accommodate unequal variances through distribution-specific or pooled statistics: the Bayes discriminability index, the RMS standard deviation discriminability index, and the average standard deviation discriminability index. These approaches extend beyond the equal-variance baseline by directly incorporating variance differences, providing more robust metrics for scenarios where the equal-variance d' proves inadequate. At its core, optimal discriminability in SDT is defined by the Bayes error rate, the minimal achievable error under ideal decision rules, which offers a theoretical benchmark for any pair of distributions. However, computing it usually requires numerical approximation, especially in multivariate or unequal-variance cases, motivating the use of specialized indices as practical proxies.

Key Indices for Unequal Variances

In cases of unequal variances between signal and noise distributions, traditional equal-variance sensitivity indices like d' become inadequate, prompting the development of alternative discriminability measures that account for differing spreads. The three primary indices for such scenarios are the Bayes discriminability index, the RMS standard deviation discriminability index, and the average standard deviation discriminability index, each offering a distinct conceptual approach to quantifying separability. The Bayes discriminability index represents an optimal measure, derived from the minimum error rate achievable under the Bayes decision rule, which minimizes errors by integrating over the posterior probabilities of the distributions. In contrast, the RMS standard deviation discriminability index employs a root-mean-square pooling of the two standard deviations to approximate the effective spread, providing a symmetric gauge of separation that normalizes the mean difference by this pooled spread. The average standard deviation discriminability index, meanwhile, uses the arithmetic mean of the individual standard deviations as the normalizing factor, offering a simpler aggregation that balances the variances without squaring. These indices extend naturally to multivariate settings by incorporating pooled or averaged covariance matrices in place of scalar variances, allowing assessment of discriminability across multiple dimensions, such as in multidimensional perceptual detection tasks. A key shared property is their symmetry: swapping the signal and noise distributions yields the same index value, ensuring consistent measurement regardless of labeling. Selection among them depends on computational feasibility and context; the Bayes index is favored for its optimality when error rates can be accurately computed, while the RMS and average approximations suit high-dimensional cases where exact integration is prohibitive.

Detailed Definitions of Unequal Indices

Bayes Discriminability Index

The Bayes discriminability index, denoted d'_b, serves as the optimal measure of separability between two probability distributions in signal detection theory, particularly when variances are unequal, by quantifying the minimum error achievable under the Bayes decision rule. This index corresponds to the value of the standard equal-variance discriminability d' that would produce the same Bayes error if the two distributions instead had equal variances. The index is formally defined as d'_b = 2 Z(A_b), where A_b is the maximum accuracy rate (1 minus the Bayes error rate under equal priors), and Z denotes the inverse cumulative distribution function of the standard normal distribution; equivalently, d'_b = -2 Z(E_b), with E_b as the Bayes error rate. For two univariate normal distributions with means \mu_a, \mu_b and standard deviations \sigma_a, \sigma_b, the Bayes error is computed from the misclassification probabilities across the quadratic decision boundary. Setting the two likelihoods equal yields the quadratic equation \left(\frac{1}{\sigma_a^2} - \frac{1}{\sigma_b^2}\right)x^2 - 2\left(\frac{\mu_a}{\sigma_a^2} - \frac{\mu_b}{\sigma_b^2}\right)x + \left(\frac{\mu_a^2}{\sigma_a^2} - \frac{\mu_b^2}{\sigma_b^2}\right) + \ln\frac{\sigma_a^2}{\sigma_b^2} = 0, whose roots define the decision boundary; E_b is then obtained by integrating each density, weighted by its prior, over the region in which the other distribution has the higher likelihood, and numerical evaluation is generally required to obtain d'_b. In multivariate settings, d'_b extends naturally to distributions with arbitrary means \boldsymbol{\mu}_a, \boldsymbol{\mu}_b and full covariance matrices \mathbf{\Sigma}_a, \mathbf{\Sigma}_b without any pooling of variances, relying on the log-likelihood ratio to define a quadratic decision boundary \beta(\mathbf{x}) = \mathbf{x}^T \mathbf{Q}_2 \mathbf{x} + \mathbf{q}_1^T \mathbf{x} + q_0 = 0. The Bayes error is then evaluated through numerical integration of the multivariate normals over the regions defined by this boundary, using efficient techniques such as ray-tracing through the space or transformation into generalized chi-squared distributions to handle the non-central quadratic forms. This computation preserves the index's symmetry (d'_b remains unchanged when swapping the two distributions) and its invariance to invertible linear transformations of the feature space, ensuring it captures intrinsic separability regardless of the coordinate system or measurement units. For illustration, in univariate unequal-variance scenarios where \sigma_a \neq \sigma_b, d'_b exceeds the RMS-pooled approximation d'_a = (\mu_b - \mu_a) / \sqrt{(\sigma_a^2 + \sigma_b^2)/2} because it accounts for the true optimal boundary, exceeding it by up to 30% in cases of moderate distributional overlap. The relation to overall accuracy underscores d'_b's interpretability: under equal priors the maximum accuracy satisfies A_b = \Phi(d'_b / 2), mirroring the equal-variance case, yet d'_b offers a direct, error-grounded metric without requiring ROC curve fitting.
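
In the univariate case, d'_b can also be obtained without deriving the boundary explicitly, since under equal priors the Bayes error equals half the overlap of the two densities, E_b = (1/2) ∫ min(f_a, f_b) dx. The sketch below uses a simple grid quadrature with NumPy and SciPy (not the ray-tracing method described above); the example parameters are arbitrary.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

def dprime_bayes_1d(mu_a, sigma_a, mu_b, sigma_b, n_grid=200_001):
    """Bayes discriminability d'_b for two univariate normals under equal priors."""
    lo = min(mu_a - 10 * sigma_a, mu_b - 10 * sigma_b)
    hi = max(mu_a + 10 * sigma_a, mu_b + 10 * sigma_b)
    x = np.linspace(lo, hi, n_grid)
    # Equal priors: the Bayes error is half the overlap of the two densities.
    overlap = trapezoid(np.minimum(norm.pdf(x, mu_a, sigma_a),
                                   norm.pdf(x, mu_b, sigma_b)), x)
    E_b = 0.5 * overlap
    return -2.0 * norm.ppf(E_b)  # equivalently 2 * Z(1 - E_b)

# Equal variances recover the ordinary d' = |mu_b - mu_a| / sigma
print(dprime_bayes_1d(0.0, 1.0, 1.0, 1.0))      # ~1.0
# Unequal variances: d'_b exceeds the RMS-pooled approximation
print(dprime_bayes_1d(0.0, 1.0, 2.0, 2.0))      # ~1.5
print(2.0 / np.sqrt((1.0**2 + 2.0**2) / 2.0))   # d'_a ~ 1.26
```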

RMS Standard Deviation Discriminability Index

The RMS standard deviation discriminability index, denoted d'_a, serves as a conservative approximation for estimating discriminability under unequal variances in signal detection theory. In the univariate case, it is computed as d'_a = \frac{|\mu_a - \mu_b|}{\sigma_{\text{rms}}}, where \sigma_{\text{rms}} = \sqrt{\frac{\sigma_a^2 + \sigma_b^2}{2}} is the root mean square of the standard deviations of the two distributions, effectively pooling their variances in a quadratic manner. For multivariate extensions, the index employs the Mahalanobis distance with a pooled covariance matrix \mathbf{\Sigma}_{\text{rms}} = \frac{\mathbf{\Sigma}_a + \mathbf{\Sigma}_b}{2}, giving d'_a = \sqrt{ (\boldsymbol{\mu}_a - \boldsymbol{\mu}_b)^\top \mathbf{\Sigma}_{\text{rms}}^{-1} (\boldsymbol{\mu}_a - \boldsymbol{\mu}_b) }. This averaging of covariance matrices preserves the RMS pooling principle, providing a single effective metric for multidimensional discriminability while assuming Gaussian distributions. The index tends to underestimate the true discriminability d'_b (the optimal Bayes index minimizing classification error), with the degree of underestimation increasing as the variances diverge. For instance, when \sigma_a \gg \sigma_b, \sigma_{\text{rms}} \approx \sigma_a / \sqrt{2}, so d'_a \approx \frac{|\mu_a - \mu_b|}{\sigma_a / \sqrt{2}}, effectively letting the larger variance dominate and yielding a lower bound on performance. Theoretical comparisons indicate underestimation of 10-30% in scenarios with substantial variance differences. Key advantages include its computational efficiency (no iterative integration or optimization is required) and its suitability for rapid evaluations in experimental designs where exact Bayes computation is impractical. As a heuristic approximation to the Bayes index, it balances simplicity with reasonable accuracy for moderately unequal variances. However, it is suboptimal for extreme imbalances, potentially leading to overly conservative interpretations of sensitivity.
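
A brief NumPy sketch of the multivariate form pools the two covariance matrices and evaluates the Mahalanobis distance; the 1×1 case reproduces the univariate σ_rms normalization, and the example values are arbitrary.

```python
import numpy as np

def dprime_rms(mu_a, mu_b, S_a, S_b):
    """RMS sd discriminability: Mahalanobis distance under S_rms = (S_a + S_b) / 2."""
    diff = np.asarray(mu_a, float) - np.asarray(mu_b, float)
    S_rms = (np.asarray(S_a, float) + np.asarray(S_b, float)) / 2.0
    return float(np.sqrt(diff @ np.linalg.solve(S_rms, diff)))

# 1x1 "matrices" reproduce the univariate case: 2 / sqrt((1 + 4) / 2) ~ 1.26
print(dprime_rms([2.0], [0.0], [[1.0]], [[4.0]]))
```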

Average Standard Deviation Discriminability Index

The average standard deviation discriminability index, denoted d'_e, provides a measure of discriminability between two distributions with unequal variances by normalizing the mean difference by the arithmetic mean of the standard deviations. In the univariate case, it is defined as d'_e = \frac{|\mu_a - \mu_b|}{\sigma_{\text{avg}}}, where \sigma_{\text{avg}} = (\sigma_a + \sigma_b)/2. This index reduces to the standard equal-variance d' when \sigma_a = \sigma_b. For multivariate distributions, the index extends via the Mahalanobis distance using an averaged standard-deviation structure, with the pooled matrix formed so that it reduces to \sigma_{\text{avg}} in one dimension (i.e., by averaging the matrix square roots of the covariance matrices, \mathbf{\Sigma}_{\text{avg}}^{1/2} = (\mathbf{\Sigma}_a^{1/2} + \mathbf{\Sigma}_b^{1/2})/2), giving d'_e = \sqrt{ (\boldsymbol{\mu}_a - \boldsymbol{\mu}_b)^\top \mathbf{\Sigma}_{\text{avg}}^{-1} (\boldsymbol{\mu}_a - \boldsymbol{\mu}_b) }. This approach pools the variability in a linear manner, facilitating computation in higher-dimensional settings such as multidimensional classification tasks. The index d'_e serves as a balanced approximation to the optimal Bayes discriminability index d'_b, outperforming the root-mean-square (RMS) variant in scenarios of moderate discriminability by providing a less conservative estimate of separation. As discriminability increases, d'_e converges toward d'_b, reflecting improved alignment with ideal observer performance under growing signal strength. For illustration, consider univariate distributions with \sigma_a = 1, \sigma_b = 2, and mean difference |\mu_a - \mu_b| = 2: then \sigma_{\text{avg}} = 1.5, yielding d'_e \approx 1.33, compared to the RMS index value of approximately 1.26. This index is particularly suitable for applications where variances differ moderately, as it avoids the greater underestimation of the RMS approach in such cases, and its linear averaging simplifies extensions to non-normal or approximated distributions in perceptual modeling. Early comparisons of detectability indices highlighted the utility of such averaged measures for robust sensitivity estimation across variance conditions.
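
The worked example can be reproduced directly; this minimal sketch computes the univariate average-sd and RMS-sd indices for σ_a = 1, σ_b = 2, and |μ_a - μ_b| = 2.

```python
import numpy as np

def dprime_avg_1d(mu_a, sigma_a, mu_b, sigma_b):
    """Average-sd index: mean difference over (sigma_a + sigma_b) / 2."""
    return abs(mu_a - mu_b) / ((sigma_a + sigma_b) / 2.0)

def dprime_rms_1d(mu_a, sigma_a, mu_b, sigma_b):
    """RMS-sd index, for comparison."""
    return abs(mu_a - mu_b) / np.sqrt((sigma_a**2 + sigma_b**2) / 2.0)

print(dprime_avg_1d(0.0, 1.0, 2.0, 2.0))  # ~1.33
print(dprime_rms_1d(0.0, 1.0, 2.0, 2.0))  # ~1.26
```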

Comparing the Indices

Performance in Univariate Scenarios

In univariate scenarios involving two normal distributions with unequal variances, the discriminability indices satisfy the ordering d'_a \leq d'_e \leq d'_b, where d'_a is the RMS standard deviation discriminability index (Simpson & Fitter, 1973), d'_e is the average standard deviation discriminability index (Egan & Clarke), and d'_b is the Bayes discriminability index; equality holds when the standard deviations are identical (\sigma_a = \sigma_b). This ordering reflects the relative conservatism of each index in capturing the true separability, with d'_a providing a lower bound based on quadratic (RMS) pooling of the standard deviations and d'_b offering the upper bound derived from optimal Bayesian minimization of classification error. Quantitatively, d'_a underestimates d'_b by up to 30% in cases of highly unequal variances, particularly at high discriminability, while d'_e provides a close approximation to d'_b at high discriminability values. Simulations of univariate normal distributions confirm these bounds; for instance, when d'_b = 1, typical values are d'_a \approx 0.8 and d'_e \approx 0.9, illustrating the progressive tightening of estimates from d'_a to d'_b. These empirical results, generated through Monte Carlo methods varying means and variances, highlight how variance inequality amplifies discrepancies at lower separability levels but diminishes them at higher ones. Given these performance characteristics, d'_b is recommended for precise analysis of univariate data where computational resources allow exact integration of the distributions, as it maximizes accuracy. In contrast, d'_e serves as a reliable approximation for routine univariate applications, particularly when quick estimates are needed without full Bayesian computation.
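
The stated ordering can be spot-checked numerically; the sketch below draws random univariate parameters (the ranges are arbitrary), computes d'_a and d'_e from their closed forms and d'_b by grid quadrature of the Bayes error, and counts violations beyond a small numerical tolerance.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

def dprime_bayes_1d(mu_a, s_a, mu_b, s_b, n_grid=100_001):
    """Bayes d'_b from half the density overlap (equal priors)."""
    lo = min(mu_a - 10 * s_a, mu_b - 10 * s_b)
    hi = max(mu_a + 10 * s_a, mu_b + 10 * s_b)
    x = np.linspace(lo, hi, n_grid)
    E_b = 0.5 * trapezoid(np.minimum(norm.pdf(x, mu_a, s_a), norm.pdf(x, mu_b, s_b)), x)
    return -2.0 * norm.ppf(E_b)

rng = np.random.default_rng(0)
violations = 0
for _ in range(200):
    sep = rng.uniform(0.1, 3.0)                # mean separation (mu_a fixed at 0)
    s_a, s_b = rng.uniform(0.5, 2.0, size=2)   # unequal standard deviations
    d_a = sep / np.sqrt((s_a**2 + s_b**2) / 2.0)
    d_e = sep / ((s_a + s_b) / 2.0)
    d_b = dprime_bayes_1d(0.0, s_a, sep, s_b)
    violations += not (d_a <= d_e + 1e-9 and d_e <= d_b + 1e-4)
print("violations of d'_a <= d'_e <= d'_b:", violations)  # expected 0
```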

Applicability in Multivariate Settings

In multivariate settings, the inequalities relating the discriminability indices, d'_a \leq d'_e \leq d'_b, generally hold, mirroring the univariate bounds, though correlations among dimensions can widen or narrow the gaps between them depending on the alignment of mean differences and covariance structures. For instance, high inter-dimensional correlation \rho tends to reduce distributional overlap when means differ along correlated axes, bringing d'_e (the average standard deviation index) closer to d'_b (the Bayes discriminability index) than in uncorrelated cases, where d'_e may underestimate d'_b by up to 20-30% at moderate separations. This effect arises because correlations effectively compress or expand the Mahalanobis-like separation in the pooled or averaged spaces used by d'_a and d'_e. Computationally, evaluating d'_b requires integrating the Bayes error rate over complex quadratic decision boundaries, achievable up to around 4-5 dimensions using efficient ray-tracing algorithms or generalized chi-squared distributions, with runtimes on the order of seconds to roughly 40 seconds on standard hardware for such cases; higher dimensions necessitate sampling-based approximation. In contrast, d'_a and d'_e rely on simpler operations involving averaged or root-mean-square pooled covariance estimates, scaling far more favorably with dimensionality and remaining feasible in hundreds of dimensions. These trade-offs favor d'_b for accuracy in low-to-moderate dimensions but necessitate sampling or approximation in high-dimensional applications to maintain precision. Selection of an index in multivariate settings with correlations depends on the goals: d'_a, based on RMS pooling of the covariances, offers conservative discriminability estimates suitable for correlated psychophysical data where overestimation must be avoided, while d'_b enables precise receiver operating characteristic (ROC) curve modeling in low-dimensional settings like visual search tasks. Building on the univariate foundations, multivariate correlations introduce additional variability into index comparisons, emphasizing the need for covariance-aware choices to avoid biased assessments. Literature on these indices as of 2021 predominantly assumes multivariate normality, leaving gaps in handling non-normal distributions; ongoing work explores numerical methods for such extensions.
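
In higher dimensions, a simple alternative to exact integration is Monte Carlo estimation of the Bayes error: sample from each class, classify each sample by its log-likelihood ratio, and convert the resulting error rate to d'_b. The sketch below, using NumPy and SciPy with arbitrary example parameters, illustrates this sampling approach; its precision is limited by the sample size, especially at high discriminability.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn, norm

def dprime_bayes_mc(mu_a, S_a, mu_b, S_b, n=200_000, seed=0):
    """Monte Carlo estimate of d'_b for two multivariate normals (equal priors)."""
    rng = np.random.default_rng(seed)
    xa = rng.multivariate_normal(mu_a, S_a, n)
    xb = rng.multivariate_normal(mu_b, S_b, n)
    dist_a, dist_b = mvn(mu_a, S_a), mvn(mu_b, S_b)
    # A sample is misclassified when the other class has the higher likelihood.
    err_a = np.mean(dist_a.logpdf(xa) < dist_b.logpdf(xa))
    err_b = np.mean(dist_b.logpdf(xb) < dist_a.logpdf(xb))
    E_b = 0.5 * (err_a + err_b)
    return -2.0 * norm.ppf(E_b)

mu_a, mu_b = np.zeros(3), np.array([1.0, 0.5, 0.0])
S_a, S_b = np.eye(3), np.diag([1.0, 2.0, 0.5])
print(dprime_bayes_mc(mu_a, S_a, mu_b, S_b))
```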

Advanced Applications

Dimensional Contributions to Discriminability

In multivariate settings, the discriminability of two distributions, such as multivariate normal distributions with unequal covariances, can be decomposed to assess the unique contribution of each dimension to the overall separability. This is particularly useful for understanding how individual features add to the total Bayes discriminability index d', which represents the optimal classification performance under Bayesian decision rules (Hou & , 2020). The contribution of dimension i is quantified as \sqrt{d'^2 - d'_{-i}^2}, where d' is the full discriminability across all dimensions, and d'_{-i} is the discriminability computed after marginalizing out (i.e., removing) dimension i from the distributions. This approach accounts for the full covariance structure, including correlations between dimensions, by recalculating the discriminability without the specified dimension. The formula measures the unique added value of a dimension beyond what is provided by the others, revealing redundancies when features are correlated. For instance, if dimensions are independent, the contribution approximates the univariate d'_i; however, with positive correlations, the contribution is typically less than d'_i because shared information reduces the marginal gain. In applications to feature selection for classification tasks, this metric helps identify the dimensions that most enhance separability, aiding dimensionality reduction while preserving discriminative power. By leveraging the full covariance structure in the computation of d', it directly incorporates inter-dimensional dependencies, making it suitable for perceptual modeling or scenarios involving high-dimensional data. In asymmetric cases, such as when one dimension has a higher univariate d' than another, the contributions can differ further, emphasizing partial redundancy under high correlations.
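
A minimal sketch of this decomposition for the equal-covariance case, where the Bayes index coincides with the Mahalanobis form: d'_{-i} is obtained by dropping dimension i's entry from the mean vectors and its row and column from the covariance matrix. The example values are arbitrary.

```python
import numpy as np

def mahalanobis_dprime(mu_a, mu_b, S):
    diff = np.asarray(mu_a, float) - np.asarray(mu_b, float)
    return float(np.sqrt(diff @ np.linalg.solve(S, diff)))

def dimension_contributions(mu_a, mu_b, S):
    """Unique contribution of each dimension i: sqrt(d'^2 - d'_{-i}^2)."""
    mu_a, mu_b, S = np.asarray(mu_a, float), np.asarray(mu_b, float), np.asarray(S, float)
    d_full = mahalanobis_dprime(mu_a, mu_b, S)
    contributions = []
    for i in range(len(mu_a)):
        keep = [j for j in range(len(mu_a)) if j != i]   # marginalize out dimension i
        d_minus = mahalanobis_dprime(mu_a[keep], mu_b[keep], S[np.ix_(keep, keep)])
        contributions.append(np.sqrt(max(d_full**2 - d_minus**2, 0.0)))
    return d_full, contributions

# Correlated bivariate example: each unique contribution (~0.5) falls below the
# univariate d'_i (= 1), reflecting the redundancy introduced by the correlation.
S = np.array([[1.0, 0.6],
              [0.6, 1.0]])
print(dimension_contributions([1.0, 1.0], [0.0, 0.0], S))
```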

Scaling Methods for Distributions

One common technique for adjusting discriminability measures to match observed task performance involves interpolating between two known Gaussian distributions representing the signal-present (b) and signal-absent (a) conditions (Thurstonian scaling; Lahne et al., 2025). The mean vector is linearly interpolated as \mu(\theta) = (1 - \theta) \mu_a + \theta \mu_b, and the covariance matrix via the convex combination S(\theta) = (1 - \theta) S_a + \theta S_b, where \theta \in [0, 1]. The parameter \theta is then solved for numerically so that the resulting discriminability d', defined as the Mahalanobis distance between the interpolated distributions, equals the target value derived from empirical data. This method facilitates the generation of intermediate stimulus conditions with controlled difficulty levels, preserving the multivariate structure of the distributions while enabling precise calibration. Another approach focuses on scaling the decision variable in signal detection models to align with observed behavioral outcomes. The log-likelihood ratio boundaries are adjusted by shifting the decision criterion c = -\frac{z(H) + z(F)}{2}, where z(H) and z(F) are the z-scores of the hit and false alarm rates, to fit the empirical hit and false alarm rates from experimental data (Maniscalco et al., 2024). This scaling ensures the model predicts the observed response bias without altering the underlying d'. Such adjustments are essential for accounting for variations in task demands or observer strategies. These scaling methods find applications in calibrating computational models for detection tasks, particularly in neuroscience, where they bridge theoretical predictions with empirical accuracy. For instance, in neuroimaging experiments, interpolated distributions are used to scale d' to match neuronal response patterns or behavioral thresholds observed in fMRI studies, allowing researchers to validate models of perceptual decision-making (e.g., in ensemble-based visual change detection; Shen & Ma, 2024). In multivariate settings, the convex combination preserves covariance structure, making these techniques suitable for high-dimensional representations, such as neural population responses.
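
A sketch of the interpolation approach under stated assumptions: the discriminability of the interpolated condition from the reference condition is measured here with a Mahalanobis form over the average of the two covariance matrices (one plausible choice, not prescribed by the sources), and SciPy's brentq root finder solves for θ. The example parameters and target d' are arbitrary.

```python
import numpy as np
from scipy.optimize import brentq

def mahalanobis_dprime(mu_a, mu_b, S):
    diff = np.asarray(mu_a, float) - np.asarray(mu_b, float)
    return float(np.sqrt(diff @ np.linalg.solve(S, diff)))

def solve_theta(mu_a, S_a, mu_b, S_b, d_target):
    """Find theta in [0, 1] so the interpolated condition sits at d_target from condition a."""
    mu_a, mu_b = np.asarray(mu_a, float), np.asarray(mu_b, float)
    S_a, S_b = np.asarray(S_a, float), np.asarray(S_b, float)

    def d_of_theta(theta):
        mu_t = (1 - theta) * mu_a + theta * mu_b   # interpolated mean
        S_t = (1 - theta) * S_a + theta * S_b      # interpolated covariance
        S_shared = (S_a + S_t) / 2.0               # assumed pooling for the d' measure
        return mahalanobis_dprime(mu_a, mu_t, S_shared)

    return brentq(lambda t: d_of_theta(t) - d_target, 0.0, 1.0)

mu_a, mu_b = np.zeros(2), np.array([2.0, 1.0])
S_a = np.eye(2)
S_b = np.array([[2.0, 0.3],
                [0.3, 1.5]])
theta = solve_theta(mu_a, S_a, mu_b, S_b, d_target=1.0)
print(theta)  # intermediate condition with controlled difficulty
```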

References

1. Signal Detection Theory - an overview. ScienceDirect Topics.
2. The Theory of Signal Detection.
3. Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. John Wiley. APA PsycNet.
4. Signal Detection Theory - an overview. ScienceDirect Topics.
5. Signal detection: applying analysis methods from psychology to ... (May 18, 2020).
6. Sensitivity and Bias - an introduction to Signal Detection Theory (PDF).
7. Signal Detection: d' Defined. WISE.
8. A Primer of Signal Detection Theory (PDF).
9. Multidimensional signal detection theory. APA PsycNet.
10. Signal Detection Theory: A Brief History (Chapter 4).
11. Green, D. M., & Swets, J. A. (1966). Signal Detection Theory and Psychophysics. Wiley. Google Books.
12. Signal Detection Theory (SDT) (PDF). The University of Texas at Dallas.
13. Calculation of signal detection theory measures.
17. A direct test of the unequal-variance signal detection model of recognition memory.
19. A method to integrate and classify normal distributions. PMC.
21. RMS sd, average sd, and Bayes discriminability indices. https://arxiv.org/pdf/2012.14331
22. Exploring the relationship between signal detection theory and ...
23. Das, A., & Geisler, W. S. (Dec 23, 2020). Methods to integrate multinormals and compute classification measures. arXiv.
24. Thurstonian Scaling for Sensory Discrimination Methods. MDPI.
25. Optimal metacognitive decision strategies in signal detection theory (Nov 18, 2024).
26. A signal-detection account of item-based and ensemble-based ... NIH (Feb 26, 2024).
27. Confidence measurement in the light of signal detection theory.
28. Towards Effective Discrimination Testing for Generative AI (Jun 23, 2025).