Hypergeometric distribution
The hypergeometric distribution is a discrete probability distribution that models the probability of k successes in n draws without replacement from a finite population of size N that contains exactly K items of the success type.[1] This distribution arises in sampling scenarios where the population is finite and draws affect subsequent probabilities, distinguishing it from the binomial distribution, which assumes independence via replacement or an infinite population.[2] The probability mass function is \Pr(X = k) = \frac{\binom{K}{k} \binom{N-K}{n-k}}{\binom{N}{n}},
where k ranges from max(0, n + K - N) to min(n, K), and the binomial coefficients \binom{a}{b} count combinations of b items from a.[2] The expected value is n K / N, reflecting the proportion of successes in the population scaled by sample size, while the variance is n (K/N) (1 - K/N) (N - n)/(N - 1), which accounts for the finite population correction that reduces variability compared to the binomial case.[3] For large N relative to n, the hypergeometric approximates the binomial distribution with success probability K/N.[4] Key applications include quality control inspections, where a batch of N items with K defectives is sampled without replacement, and exact tests for independence in contingency tables, such as Fisher's exact test in statistics.[5]
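As a quick illustration of these formulas, the following minimal Python sketch (assuming SciPy is available; parameter values are illustrative) evaluates the PMF, mean, and variance. Note that SciPy's hypergeom uses the argument order (population size, successes in the population, number of draws), which differs from the N, K, n notation used here.

```python
from scipy.stats import hypergeom

# Illustrative values: population of 50 items, 10 successes, 5 draws.
N_pop, K, n = 50, 10, 5
rv = hypergeom(N_pop, K, n)   # SciPy order: (population, successes, draws)

print(rv.pmf(2))              # P(X = 2), exactly 2 successes in the sample
print(rv.mean(), n * K / N_pop)   # both equal nK/N = 1.0
print(rv.var(),
      n * (K / N_pop) * (1 - K / N_pop) * (N_pop - n) / (N_pop - 1))  # finite-population variance
```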
Definition
Probability Mass Function
The probability mass function (PMF) of the hypergeometric distribution specifies the probability \Pr(X = k) that a random variable X, representing the number of observed successes in n draws without replacement from a population of size N with K total successes, equals a specific integer k. This PMF is expressed as p_X(k) = \frac{\binom{K}{k} \binom{N-K}{n-k}}{\binom{N}{n}}, where \binom{\cdot}{\cdot} denotes the binomial coefficient, defined for k ranging over the integers satisfying \max(0, n + K - N) \leq k \leq \min(n, K), and p_X(k) = 0 otherwise.[2][6] This formula derives from combinatorial counting principles: the denominator \binom{N}{n} counts all possible ways to select n items from the N available, while the numerator \binom{K}{k} \binom{N-K}{n-k} enumerates the favorable outcomes where exactly k successes are selected from the K successes and n-k non-successes from the N-K non-successes.[7] The ratio yields the exact probability under the uniform assumption over all subsets of size n.[8] The PMF is zero outside the specified support because it is impossible to observe more successes than available in the population (k > K), more than drawn (k > n), or negative successes; the lower bound ensures feasibility given the non-successes.[2] All probabilities sum to 1 over the support, confirming it as a valid PMF for the discrete uniform sampling model.[6]
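The counting argument above can be checked directly. The sketch below (illustrative values, using Python's math.comb; the helper name hypergeom_pmf is hypothetical) computes the PMF from the binomial coefficients and verifies that the probabilities over the support sum to one.

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(X = k): favorable selections divided by all equally likely selections."""
    lo, hi = max(0, n + K - N), min(n, K)
    if not (lo <= k <= hi):
        return 0.0
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

N, K, n = 10, 4, 3
support = range(max(0, n + K - N), min(n, K) + 1)
probs = [hypergeom_pmf(k, N, K, n) for k in support]
print(probs)        # [0.1667, 0.5, 0.3, 0.0333] (rounded)
print(sum(probs))   # 1.0 up to floating-point rounding
```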
Parameters and Support
The hypergeometric distribution is parameterized by three non-negative integers: the total population size N, the number of success states (or "marked" items) in the population K, and the number of draws (sample size) n.[9] These parameters must satisfy the constraints 0 \leq K \leq N and 0 \leq n \leq N, ensuring the model reflects a finite population sampled without replacement where the number of successes cannot exceed the population totals.[9] The support of the random variable X (the number of successes in the sample) consists of all integers k in the range from \max(0, n + K - N) to \min(n, K), inclusive; probabilities are zero outside this interval due to the combinatorial impossibility of exceeding available successes or draws while accounting for the finite non-successes in the population.[10] This bounded support distinguishes the hypergeometric from distributions like the binomial, as it enforces dependence induced by without-replacement sampling.[9]
Mathematical Properties
Moments and Expectations
The expected value of the hypergeometric random variable X, denoting the number of successes in a sample of size n drawn without replacement from a population of size N containing K successes, is \mathbb{E}[X] = n \frac{K}{N}. This follows from expressing X as the sum of n indicator variables I_j for the j-th draw being a success, where \mathbb{E}[I_j] = \frac{K}{N} for each j by symmetry, and applying linearity of expectation \mathbb{E}[X] = \sum_{j=1}^n \mathbb{E}[I_j] = n \frac{K}{N}, which holds despite the dependence induced by sampling without replacement.[2][11] The variance is \mathrm{Var}(X) = n \frac{K}{N} \left(1 - \frac{K}{N}\right) \frac{N-n}{N-1}. To derive this, compute \mathrm{Var}(X) = \sum_{j=1}^n \mathrm{Var}(I_j) + \sum_{j \neq \ell} \mathrm{Cov}(I_j, I_\ell), where \mathrm{Var}(I_j) = \frac{K}{N} \left(1 - \frac{K}{N}\right) and \mathrm{Cov}(I_j, I_\ell) = -\frac{K}{N} \left(1 - \frac{K}{N}\right) \frac{1}{N-1} for j \neq \ell, yielding n \frac{K}{N} \left(1 - \frac{K}{N}\right) - n(n-1) \frac{K}{N} \left(1 - \frac{K}{N}\right) \frac{1}{N-1} = n \frac{K}{N} \left(1 - \frac{K}{N}\right) \frac{N-n}{N-1} after simplification; the factor \frac{N-n}{N-1} is the finite population correction, reflecting reduced variability from sampling without replacement relative to the binomial case.[2][11] Higher moments exist in closed form but grow complex. The skewness is \gamma_1 = \frac{(N-2K)\,(N-1)^{1/2}\,(N-2n)}{(N-2)\,\left[\,nK(N-K)(N-n)\,\right]^{1/2}}, which is positive when K < N/2 and n < N/2 (or when both exceed N/2) and vanishes when K = N/2 or n = N/2.[12] The excess kurtosis is \kappa = \frac{(N-1)\,N^2\,\bigl[N(N+1) - 6K(N-K) - 6n(N-n)\bigr] + 6\,nK(N-K)(N-n)(5N-6)}{nK(N-K)(N-n)(N-2)(N-3)}, whose sign depends on the parameters; it is negative in many moderate regimes, in which case the tails are slightly lighter than those of the normal distribution, and exact values for specific parameters follow from evaluating these expressions directly.[12] Recursive relations, such as \mathbb{E}[X^r] = \frac{n K}{N} \mathbb{E}[(Y+1)^{r-1}] where Y \sim \mathrm{Hypergeometric}(N-1, K-1, n-1), facilitate numerical evaluation of raw moments.[11]
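As a numerical check of these closed forms, the following sketch (assuming SciPy; parameter values are illustrative) compares the moments SciPy reports for a hypergeometric variable against the mean, variance, and skewness expressions above.

```python
from scipy.stats import hypergeom

N, K, n = 40, 12, 10
rv = hypergeom(N, K, n)   # SciPy order: (population, successes, draws)

# SciPy's moments: mean, variance, skewness, excess kurtosis.
mean, var, skew, kurt = rv.stats(moments='mvsk')

# Closed forms from the text.
p = K / N
mean_cf = n * p
var_cf = n * p * (1 - p) * (N - n) / (N - 1)
skew_cf = ((N - 2*K) * (N - 1)**0.5 * (N - 2*n)) / ((N - 2) * (n * K * (N - K) * (N - n))**0.5)

print(mean, mean_cf)   # 3.0  3.0
print(var, var_cf)     # both approximately 1.615
print(skew, skew_cf)   # agree to floating-point precision
```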
Combinatorial Identities and Symmetries
The summation of the probability mass function over its support equals unity, as \sum_{k} \frac{\binom{K}{k} \binom{N-K}{n-k}}{\binom{N}{n}} = 1, a direct consequence of Vandermonde's identity \sum_{k} \binom{K}{k} \binom{N-K}{n-k} = \binom{N}{n}.[13] This identity counts the total number of ways to choose n items from N by partitioning the choices into those including k successes from K and n-k failures from N-K, for all feasible k. The hypergeometric distribution possesses a combinatorial symmetry interchanging the roles of the number of successes K and the sample size n: \frac{\binom{K}{k} \binom{N-K}{n-k}}{\binom{N}{n}} = \frac{\binom{n}{k} \binom{N-n}{K-k}}{\binom{N}{K}}. This equality holds because both expressions compute the probability of exactly k overlaps between a fixed set of n sample positions and a randomly selected set of K success positions in the population of N, via complementary counting arguments.[14] The right-hand form interprets the scenario as the distribution of successes falling into a prespecified sample when successes are assigned randomly to the population, dual to the standard sampling view.
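Both identities are easy to verify numerically. The sketch below (illustrative parameters, using Python's math.comb) checks Vandermonde's identity and the K–n symmetry term by term.

```python
from math import comb

N, K, n = 20, 7, 5

# Vandermonde's identity: sum_k C(K,k) C(N-K, n-k) = C(N, n).
total = sum(comb(K, k) * comb(N - K, n - k) for k in range(0, n + 1))
print(total == comb(N, n))   # True

# Symmetry in K and n: both forms give the same probability for every k in the support.
for k in range(max(0, n + K - N), min(n, K) + 1):
    lhs = comb(K, k) * comb(N - K, n - k) / comb(N, n)
    rhs = comb(n, k) * comb(N - n, K - k) / comb(N, K)
    assert abs(lhs - rhs) < 1e-12
print("symmetry verified")
```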
Tail Bounds and Inequalities
A fundamental tail inequality for the hypergeometric random variable X \sim \mathrm{HG}(N, K, n) with mean \mu = nK/N is Hoeffding's bound, which states that \Pr(X \geq \mu + t) \leq \exp\left( -\frac{2t^2}{n} \right) for all t > 0. This result, derived for sums of bounded random variables including those arising from sampling without replacement, matches the corresponding bound for the binomial approximation and shows that the negative association in hypergeometric sampling preserves concentration comparable to independent trials.[15] Serfling (1974) provided a refinement incorporating the finite population correction factor f = (n-1)/N, yielding the tighter upper tail bound \Pr\left(X \geq \left(\frac{K}{N} + t\right)n \right) \leq \exp\left( -\frac{2t^2 n}{1 - f} \right) = \exp\left( -2t^2 n \cdot \frac{N}{N - n + 1} \right) for t > 0. This adjustment accounts for the reduced variance of without-replacement sampling relative to the infinite-population case, making it superior to Hoeffding's bound when n is a substantial fraction of N.[15]
Exponential bounds with higher-order terms sharpen these estimates further. For instance, Bardenet and Maillard (2015) derived improved exponential inequalities for the upper tail, incorporating factors like (1 - n/N) and quartic terms in the deviation, which outperform Serfling's bound in regimes where more than half the population is sampled. More recently, George (2024) unified existing inequalities and proposed refined confidence bounds derived from Serfling's form, such as c = N \sqrt{ -\frac{N-n+1}{2nN} \ln(\delta/2) } for the deviation ensuring \Pr(|X - \mu| \geq c) \leq \delta when n \leq N/2.[15]
Simple yet effective recent derivations include a Chernoff-style bound using Kullback–Leibler divergence: \Pr(X \geq d) \leq \exp\left[ -K D\left( \frac{d}{K} \Big\| \frac{n}{N} \right) \right] for integer d \geq \mu + 1, where D(x \| y) = x \ln(x/y) + (1-x) \ln((1-x)/(1-y)); this makes the bound sensitive to the sampling fraction n/N and excels when n > K. An alternative \beta-bound expresses the tail as \Pr(X \geq d) \leq I_{n/N}(d, K - d + 1), with I_x(a,b) the regularized incomplete beta function, offering computational advantages and tighter performance than Serfling's bound in symmetric regimes. These refinements, validated via simulations against benchmarks such as Hoeffding's bound and Chatterjee (2007), reflect ongoing work tailored to specific parameter ranges of hypergeometric tails.
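The sketch below (assuming SciPy; parameters are illustrative) compares the exact upper tail with Hoeffding's and Serfling's bounds for a few deviation sizes. It is a rough comparison only and does not implement the sharper refinements discussed above.

```python
from math import exp
from scipy.stats import hypergeom

N, K, n = 1_000, 300, 200
p = K / N
mu = n * p                                            # mean, here 60

rv = hypergeom(N, K, n)                               # SciPy order: (population, successes, draws)
for t in (0.02, 0.05, 0.10):                          # deviation of the sample proportion above K/N
    exact = rv.sf(mu + t * n - 1)                     # P(X >= mu + t*n)
    hoeffding = exp(-2 * t**2 * n)                    # Hoeffding's bound on the proportion scale
    serfling = exp(-2 * t**2 * n * N / (N - n + 1))   # Serfling's finite-population refinement
    print(f"t={t:.2f}  exact={exact:.3e}  Hoeffding={hoeffding:.3e}  Serfling={serfling:.3e}")
```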
Approximations and Limitations
Binomial Approximation Conditions
The hypergeometric distribution can be approximated by the binomial distribution with parameters n and p = K/N when the population size N is sufficiently large relative to the sample size n, rendering the dependence between draws negligible and approximating sampling with replacement.[16][17] This holds because the hypergeometric probability mass function \Pr(X = k) = \frac{\binom{K}{k} \binom{N-K}{n-k}}{\binom{N}{n}} simplifies asymptotically to the binomial form \binom{n}{k} p^k (1-p)^{n-k} as N \to \infty with n and p fixed, since the ratios of falling factorials converge to powers of p and 1 - p.[18] A practical rule of thumb for the approximation's adequacy is n/N < 0.05, ensuring the relative error in probabilities remains small across the support.[19][17] Some sources relax this to n/N < 0.10, though accuracy diminishes for values near this threshold, particularly for tail probabilities or when p is extreme (close to 0 or 1).[20][21] The means coincide exactly as \mathbb{E}[X] = n \cdot (K/N), but the hypergeometric variance n p (1-p) \frac{N-n}{N-1} approaches the binomial variance n p (1-p) only when \frac{N-n}{N-1} \approx 1, reinforcing the n \ll N requirement.[22] When these conditions are violated, the binomial approximation overstates the variance and fits poorly in finite samples, as verified in numerical comparisons.[22][20]
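The effect of the sampling fraction can be seen directly. The following sketch (assuming SciPy and NumPy; parameters are illustrative) compares hypergeometric and binomial PMFs as n/N grows, reporting the largest pointwise difference and the variance ratio (N-n)/(N-1).

```python
import numpy as np
from scipy.stats import hypergeom, binom

N, K = 10_000, 2_000            # population and success counts; p = K/N = 0.2
for n in (50, 500, 5_000):      # sampling fractions of 0.5%, 5%, and 50%
    hg = hypergeom(N, K, n)     # SciPy order: (population, successes, draws)
    bn = binom(n, K / N)
    ks = np.arange(n + 1)
    max_err = np.max(np.abs(hg.pmf(ks) - bn.pmf(ks)))
    print(f"n/N = {n/N:.3f}: max |PMF difference| = {max_err:.2e}, "
          f"variance ratio = {hg.var() / bn.var():.3f}")
```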
Normal and Other Approximations
The hypergeometric random variable X \sim \text{Hypergeometric}(N, K, n) with mean \mu = n \frac{K}{N} and variance \sigma^2 = n \frac{K}{N} \left(1 - \frac{K}{N}\right) \frac{N-n}{N-1} converges in distribution to a normal random variable with the same mean and variance as N \to \infty and n \to \infty, provided n^2 / N \to 0.[23] Under these conditions, the local limit theorem yields P(X = k) \approx \frac{1}{\sqrt{2\pi \sigma^2}} \exp\left( -\frac{(k - \mu)^2}{2\sigma^2} \right).[23] Stronger uniform convergence bounds, such as those from the Berry–Esseen theorem adapted to the hypergeometric case, hold for a wide range of \frac{K}{N} and \frac{n}{N}, with error rates on the order of O\left( \frac{1}{\sqrt{\min(np, n(1-p))}} \right) where p = \frac{K}{N}.[24] A continuity correction improves the approximation for tail probabilities: P(X \leq k) \approx \Phi\left( \frac{k + 0.5 - \mu}{\sigma} \right), where \Phi is the standard normal cumulative distribution function; this adjustment accounts for the discreteness of X by shifting the cut point to k + 0.5.[23] When \frac{n}{N} approaches a constant t \in (0,1), the binomial variance np(1-p) must be adjusted by the factor (1 - t), and the normal density scales accordingly to reflect the finite population correction.[23] Empirical rules of thumb for practical use require \sigma^2 \geq 9 or np(1-p) \geq 10 (adjusted for the hypergeometric variance) to ensure reasonable accuracy, though these are heuristics that depend on the specific parameter regime.[16]
For rare events where \frac{K}{N} \to 0 as N \to \infty while \lambda = n \frac{K}{N} remains fixed and finite, the hypergeometric distribution approximates a Poisson distribution with parameter \lambda, as the without-replacement sampling behaves similarly to independent rare trials.[25] This limit arises because the probability mass function P(X = k) = \frac{\binom{K}{k} \binom{N-K}{n-k}}{\binom{N}{n}} tends to \frac{\lambda^k e^{-\lambda}}{k!} under the specified asymptotics, with dependencies between draws becoming negligible.[25] The approximation improves when n is moderate relative to N and K is small, but degrades if depletion effects are significant (i.e., n comparable to K). Bounds like the Stein–Chen method quantify the total variation distance between the distributions as O\left( \frac{n^2}{N} + \frac{\lambda}{K} \right).[25]
Other approximations, such as Edgeworth expansions for higher-order corrections to the normal or saddlepoint approximations for tail probabilities, extend these limits but require more computational effort and are typically used when exact hypergeometric probabilities are intractable for large N.[26] These methods incorporate the skewness and kurtosis of the hypergeometric (e.g., skewness \gamma_1 = \frac{(N-2K)\,(N-1)^{1/2}\,(N-2n)}{(N-2)\,\left[\,n K (N-K)(N-n)\,\right]^{1/2}}) to refine the normal approximation beyond the central limit regime.[26]
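A minimal sketch of the two simplest approximations follows (assuming SciPy; parameter choices are illustrative): a normal approximation with continuity correction for a cumulative probability, and a Poisson approximation in the rare-success regime.

```python
from scipy.stats import hypergeom, norm, poisson

# Normal approximation with continuity correction.
N, K, n = 5_000, 1_500, 400
p = K / N
mu = n * p
sigma = (n * p * (1 - p) * (N - n) / (N - 1)) ** 0.5
k = 130
exact = hypergeom(N, K, n).cdf(k)
approx = norm.cdf((k + 0.5 - mu) / sigma)
print(exact, approx)          # both approximate P(X <= 130)

# Poisson approximation in the rare-success regime (K/N small, lambda = nK/N moderate).
N, K, n = 100_000, 50, 2_000
lam = n * K / N               # lambda = 1.0
print(hypergeom(N, K, n).pmf(2), poisson(lam).pmf(2))
```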
Computational and Practical Limitations
Exact evaluation of the hypergeometric probability mass function \Pr(X = k) = \frac{\binom{K}{k} \binom{N-K}{n-k}}{\binom{N}{n}} requires computing binomial coefficients, whose values grow rapidly with increasing N, K, and n, often exceeding the dynamic range of double-precision floating-point numbers (approximately 10^{308}) for N > 1000.[27][28] This overflow occurs because intermediate factorials or products in naive multiplicative formulas for \binom{N}{k} become unrepresentable, yielding infinite or erroneous results.[29] To address numerical instability, modern implementations work in log space, computing \log \Pr(X = k) via differences of log-gamma functions, \log \binom{N}{k} = \log\Gamma(N+1) - \log\Gamma(k+1) - \log\Gamma(N-k+1), where \log\Gamma is evaluated using asymptotic expansions or table lookups for large arguments to maintain relative errors of about 10^{-15} or better.[30] Recursive ratio methods, multiplying successive terms via \frac{\Pr(X = k+1)}{\Pr(X = k)} = \frac{(K-k)(n-k)}{(k+1)(N-K-n+k+1)}, further avoid large intermediates by starting from the mode or a boundary and iterating, though they still rely on log-space accumulation before exponentiating back to probabilities.[27] For cumulative distribution functions or tail probabilities, such as in Fisher's exact test for 2×2 contingency tables, exact computation demands summing over up to \min(n, K) terms, each potentially requiring the above techniques; while a single term can be evaluated in O(1) time with precomputation, full p-values exhibit worst-case time complexity O(N) due to the extent of the summation and the binomial evaluations, becoming prohibitive for N > 10^5 without optimization.[31][32] In practice, for large-scale applications such as genome-wide enrichment analyses where N can reach millions of features (the human genome spans roughly 3×10^9 bases), exact tails involve thousands of terms with minuscule probabilities (~10^{-100}), leading to underflow, rounding-error propagation in summation, and excessive runtime, necessitating Monte Carlo simulation or Poisson/binomial approximations despite their asymptotic validity only when n \ll N.[31][33]
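A minimal log-space sketch is shown below (plain Python, using math.lgamma; the helper names log_binom and log_hypergeom_pmf are illustrative). It evaluates a single PMF value for parameters far too large for direct computation of the binomial coefficients.

```python
from math import lgamma, exp

def log_binom(a, b):
    """log C(a, b) via log-gamma, avoiding overflow of the coefficient itself."""
    return lgamma(a + 1) - lgamma(b + 1) - lgamma(a - b + 1)

def log_hypergeom_pmf(k, N, K, n):
    """log P(X = k) for the hypergeometric distribution, accumulated entirely in log space."""
    return log_binom(K, k) + log_binom(N - K, n - k) - log_binom(N, n)

# Parameters where C(N, n) would overflow double precision by a wide margin.
N, K, n, k = 5_000_000, 30_000, 100_000, 650
logp = log_hypergeom_pmf(k, N, K, n)
print(logp, exp(logp))   # exponentiate only at the end; extreme k may still underflow
```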
Illustrative Examples
Basic Sampling Example
A prototypical scenario for the hypergeometric distribution involves drawing a fixed-size sample without replacement from a finite population divided into two mutually exclusive categories, such as "success" and "failure." Formally, let the population size be N, with K successes and N - K failures; a sample of n items is selected, where n ≤ N, and X denotes the number of successes observed in the sample, with X ranging from max(0, n + K - N) to min(n, K). The probability that X = k is P(X = k) = \frac{\binom{K}{k} \binom{N - K}{n - k}}{\binom{N}{n}}, where \binom{a}{b} is the binomial coefficient representing the number of ways to choose b items from a without regard to order.[2][34] To illustrate, consider an urn containing N = 10 balls, of which K = 4 are red (successes) and 6 are blue (failures); draw n = 3 balls without replacement. The possible values of X (number of red balls drawn) are k = 0, 1, 2, 3. The probabilities are computed as follows:
| k | P(X = k) | Calculation |
|---|---|---|
| 0 | 1/6 ≈ 0.1667 | \frac{\binom{4}{0} \binom{6}{3}}{\binom{10}{3}} = \frac{1 \cdot 20}{120} |
| 1 | 1/2 = 0.5 | \frac{\binom{4}{1} \binom{6}{2}}{\binom{10}{3}} = \frac{4 \cdot 15}{120} |
| 2 | 3/10 = 0.3 | \frac{\binom{4}{2} \binom{6}{1}}{\binom{10}{3}} = \frac{6 \cdot 6}{120} |
| 3 | 1/30 ≈ 0.0333 | \frac{\binom{4}{3} \binom{6}{0}}{\binom{10}{3}} = \frac{4 \cdot 1}{120} |
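The table can be reproduced with a few lines of Python (a minimal sketch using math.comb):

```python
from math import comb

N, K, n = 10, 4, 3      # 10 balls, 4 red, draw 3 without replacement
denom = comb(N, n)      # 120 equally likely samples
for k in range(0, n + 1):
    num = comb(K, k) * comb(N - K, n - k)
    print(f"P(X = {k}) = {num}/{denom} = {num / denom:.4f}")
# Matches the table: 0.1667, 0.5000, 0.3000, 0.0333, summing to 1.
```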
Real-World Scenario Interpretation
In quality control processes, the hypergeometric distribution quantifies the probability of encountering a specific number of defective items when sampling without replacement from a finite production batch, accounting for the depletion of the population that alters successive draw probabilities, unlike the independent trials of binomial models.[36] For example, consider a factory producing 1,000 widgets of which K = 50 are defective; inspectors draw n = 100 widgets randomly without replacement to evaluate the lot. The random variable X representing observed defectives follows Hypergeometric(N = 1000, K = 50, n = 100), with P(X = k) = \frac{\binom{50}{k} \binom{950}{100-k}}{\binom{1000}{100}}, enabling calculation of risks such as P(X ≥ 10) to inform acceptance thresholds that balance false positives and negatives in lot disposition.[37] This interpretation underscores the distribution's utility in finite-population scenarios where the sampling fraction n/N is too large for a binomial approximation (here about 10%), since the negative dependence between draws reduces the variance below the binomial value np(1-p).[38]
In electoral auditing, the hypergeometric distribution models the consistency between sampled ballots and aggregate tallies, allowing irregularities to be detected in a finite universe of votes without replacement assumptions.[39] For instance, in a jurisdiction with N = 10,000 ballots of which K = 6,000 validly favor Candidate A per the official count, auditors might hand-recount n = 500 randomly selected ballots and model X, the observed number of A votes, as Hypergeometric(N = 10,000, K = 6,000, n = 500); a very small tail probability such as P(X ≤ 240) under the null hypothesis of accurate reporting would then signal a potential irregularity, guiding risk-limiting audits that scale sample sizes inversely with the desired error bounds.[40] Such applications rely on the exact finite-population model, since the sampling fraction is large enough that approximations valid only for negligible sampling fractions cannot be trusted.[41]
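Both scenarios reduce to single tail probabilities; a minimal sketch (assuming SciPy) follows.

```python
from scipy.stats import hypergeom

# Quality control: probability of 10 or more defectives in a sample of 100
# drawn from a lot of 1,000 containing 50 defectives.
qc = hypergeom(1_000, 50, 100)        # SciPy order: (population, defectives, draws)
print(qc.sf(9))                       # P(X >= 10)

# Election audit: probability of observing 240 or fewer votes for Candidate A
# in a hand recount of 500 ballots, if 6,000 of 10,000 ballots truly favor A.
audit = hypergeom(10_000, 6_000, 500)
print(audit.cdf(240))                 # P(X <= 240), tiny under the null
```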
Statistical Inference
Point and Interval Estimation
The method of moments provides a straightforward point estimator for the success proportion p = K/N, given by the sample proportion \hat{p} = k/n. This follows from equating the observed count k to the theoretical expectation \mathbb{E}[X] = n p, yielding an unbiased estimator since \mathbb{E}[\hat{p}] = p. The corresponding estimator for K is \hat{K} = \hat{p} N = k N / n, which is rounded to the nearest integer when K must be integral.[42] The maximum likelihood estimator (MLE) for K maximizes the hypergeometric probability mass function P(X = k \mid N, K, n) over the integer values of K consistent with the observed data, k \leq K \leq N - n + k. Computation involves finding the K where the likelihood ratio satisfies L(K+1)/L(K) \leq 1 \leq L(K)/L(K-1), with L(K+1)/L(K) = \frac{(K+1)(N - K - n + k)}{(K + 1 - k)(N - K)}; solving this inequality yields the closed form \hat{K}_{\mathrm{MLE}} = \lfloor (N+1)k/n \rfloor (capped at N, and with both this value and the value one less maximizing the likelihood when (N+1)k/n is an integer). For large N and n, the MLE approximates the method of moments estimator but incorporates discreteness effects. In related capture-recapture contexts, where the unknown parameter is the population size rather than K, bias-reduced variants such as the Chapman estimator \hat{N} = \frac{(K+1)(n+1)}{k+1} - 1 are used in place of the raw MLE.[43][44]
Interval estimation for p or K accounts for the variance \mathrm{Var}(X) = n p (1-p) \frac{N-n}{N-1}, which includes a finite population correction factor \frac{N-n}{N-1}. An approximate 1 - \alpha confidence interval for p is \hat{p} \pm z_{\alpha/2} \sqrt{ \frac{\hat{p} (1 - \hat{p}) (N - n)}{n (N - 1)} }, where z_{\alpha/2} is the 1 - \alpha/2 quantile of the standard normal distribution; this performs well when n p (1-p) \geq 5 and N is not too small relative to n. For K, the interval is N times the one for p, clipped to integers in [0, N].[42] Exact confidence intervals, preferred for small samples to achieve nominal coverage despite discreteness, are constructed by inverting hypergeometric tests: the 1 - \alpha interval for K comprises all integers K' such that the two-sided p-value for testing H_0: K = K' given the observed k exceeds \alpha, computed for example as 2 \min\bigl( P(X \leq k \mid K'),\, P(X \geq k \mid K') \bigr) truncated at 1. Efficient algorithms using tail probability recursions enable fast computation without full enumeration, yielding short intervals with guaranteed coverage of at least 1 - \alpha. These methods outperform approximations in finite samples and are implemented in statistical software.[45][46]
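The sketch below (assuming SciPy; the function names mle_K and exact_ci_K are illustrative, and the brute-force interval search is for exposition rather than efficiency) implements the closed-form MLE for K and an exact confidence interval obtained by inverting two-sided hypergeometric tests as described above.

```python
from scipy.stats import hypergeom

def mle_K(N, n, k):
    """Closed-form MLE for the number of successes K: floor((N+1) k / n), capped at N."""
    return min(((N + 1) * k) // n, N)

def exact_ci_K(N, n, k, alpha=0.05):
    """Exact (conservative) CI for K by inverting two-sided hypergeometric tests."""
    accepted = []
    for K in range(k, N - n + k + 1):            # values of K consistent with the data
        rv = hypergeom(N, K, n)                  # SciPy order: (population, successes, draws)
        pval = min(1.0, 2 * min(rv.cdf(k), rv.sf(k - 1)))   # 2*min(lower tail, upper tail)
        if pval > alpha:
            accepted.append(K)
    # The acceptance region is contiguous in practice, so report its endpoints.
    return accepted[0], accepted[-1]

N, n, k = 500, 60, 9
print(mle_K(N, n, k))         # point estimate of K
print(exact_ci_K(N, n, k))    # approximate 95% exact interval for K
```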
Hypothesis Testing with Fisher's Exact Test
Fisher's exact test utilizes the hypergeometric distribution to conduct exact hypothesis testing for independence between two dichotomous variables represented in a 2×2 contingency table, and is particularly suitable for small sample sizes where asymptotic approximations like the chi-squared test fail.[47] The test conditions on the observed row and column marginal totals, treating one cell entry—such as the count of successes in the first group—as a realization from a hypergeometric distribution with population size N equal to the grand total, K the total successes in the population, and n the sample size of the first group.[48] Under the null hypothesis of independence (equivalent to an odds ratio of 1), this conditional distribution holds exactly, without reliance on large-sample assumptions.[49] The probability mass function for the hypergeometric random variable X (the cell count) is \Pr(X = k) = \frac{\binom{K}{k} \binom{N-K}{n-k}}{\binom{N}{n}}, where k ranges from \max(0, n + K - N) to \min(n, K).[48] To compute the p-value, all possible tables with the fixed margins are enumerated, each is assigned its hypergeometric probability, and the p-value is the sum of probabilities of tables at least as extreme as the observed one. For a two-sided test, this typically includes all tables with probability less than or equal to that of the observed table; one-sided variants sum over the tail in the direction of the alternative hypothesis.[48][49] This approach ensures the test maintains its nominal significance level exactly, even with sparse data where more than 20% of expected cell frequencies are below 5 or any fall below 1, conditions under which the chi-squared approximation is unreliable.[47] For instance, in analyzing whether a treatment affects a binary outcome across two groups, the test quantifies how rare the observed association would be under hypergeometric sampling.[49] Computational implementations include R's fisher.test() function, which handles enumeration directly for moderate sizes or uses simulation for larger ones.[48]
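The same test is available outside R; the following sketch uses SciPy's fisher_exact on illustrative counts and shows that the one-sided p-value coincides with a hypergeometric tail probability conditioned on the margins.

```python
from scipy.stats import fisher_exact, hypergeom

# 2x2 table: rows = treatment/control, columns = success/failure (illustrative counts).
table = [[8, 2],
         [1, 9]]
odds_ratio, p_two_sided = fisher_exact(table, alternative='two-sided')
print(odds_ratio, p_two_sided)

# The one-sided p-value is a hypergeometric tail: condition on the margins
# (N = grand total, K = first-column total, n = first-row total) and sum P(X >= observed cell).
a = table[0][0]
n_row = sum(table[0])
K_col = table[0][0] + table[1][0]
N_tot = sum(map(sum, table))
p_greater = hypergeom(N_tot, K_col, n_row).sf(a - 1)
print(p_greater, fisher_exact(table, alternative='greater')[1])   # the two values agree
```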