
Exact test

An exact test is a statistical hypothesis test that computes the precise p-value by directly evaluating the sampling distribution of the test statistic under the null hypothesis, without relying on large-sample approximations such as those based on normal or chi-squared distributions. This approach involves enumerating all possible outcomes consistent with the observed data margins or using exact probability models like the hypergeometric distribution, making it particularly suitable for small sample sizes or discrete data where asymptotic methods may yield inaccurate results. The concept of exact tests emerged in the early 20th century as part of the foundational work in modern statistics by Ronald A. Fisher, who sought reliable methods for analyzing experimental data in agriculture and genetics without assuming large samples. Fisher's seminal contributions, including the development of randomization tests and permutation-based inference, laid the groundwork for exact procedures, with his 1935 book The Design of Experiments formalizing the lady tasting tea experiment as a demonstration of exact conditional inference. This experiment illustrated testing the null hypothesis of no ability to distinguish tea preparations by computing the exact probability of specific outcomes, influencing the broader adoption of exact tests in hypothesis testing. Exact tests encompass a variety of procedures tailored to different data types and hypotheses, including Fisher's exact test for 2×2 contingency tables to assess independence between categorical variables, the exact binomial test for proportions, and permutation tests for comparing groups under exchangeability assumptions. They are widely applied in fields like genetics, medicine, and the social sciences for analyzing sparse or small datasets, such as in clinical trials or genome-wide association studies, where computational advances have enabled their extension to larger tables via Monte Carlo simulation when full enumeration is infeasible.

Definition and Motivation

Core Definition

An exact test is a hypothesis test in which the p-value is computed directly from the exact sampling distribution of the test statistic under the null hypothesis, without dependence on large-sample approximations such as the central limit theorem or asymptotic normality. These tests are typically nonparametric, relying on the permutation or combinatorial structure of the data to derive probabilities, ensuring applicability across diverse data types like categorical or discrete observations. A defining feature of exact tests is their validity for any sample size, as they impose no asymptotic assumptions that could invalidate results in small or sparse datasets. This guarantees exact control of the Type I error rate at the nominal significance level \alpha, meaning the probability of rejecting the null hypothesis when it is true is precisely \alpha or less, regardless of the underlying distribution's shape or sample magnitude. In contrast to approximate methods, which may inflate error rates in finite samples, exact tests provide conservative yet reliable inference by enumerating all possible outcomes under the null. The mathematical foundation of an exact test centers on the p-value formula, which aggregates the probabilities of all outcomes at least as extreme as the observed data: p = \sum_{\{t : T(t) \geq T(t_{\text{obs}})\}} P(T = t \mid H_0), where T denotes the test statistic (with larger values taken as more extreme), t_{\text{obs}} is its observed value, and the summation runs over the support of the exact null distribution induced by the null hypothesis H_0. This direct computation, often via conditional distributions to eliminate nuisance parameters, underpins the test's exactness and distinguishes it from methods that approximate this distribution.
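The p-value formula above can be sketched directly in code. The following minimal example (standard library only) takes a binomial test statistic as its assumed model, with the number of successes itself serving as T, and sums the null probabilities of all outcomes at least as extreme as the observation:

```python
from math import comb

def exact_pvalue_binomial(x_obs, n, p0):
    """Exact upper-tail p-value: sum P(X = k | H0) over all outcomes
    k at least as extreme as x_obs, where X ~ Bin(n, p0) under H0."""
    pmf = lambda k: comb(n, k) * p0**k * (1 - p0)**(n - k)
    return sum(pmf(k) for k in range(x_obs, n + 1))

# 8 successes in 10 trials under a fair-coin null
p = exact_pvalue_binomial(8, 10, 0.5)
print(round(p, 4))  # 0.0547 (= 56/1024)
```

Because the sum runs over the exact discrete support, no distributional approximation enters at any point.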

Rationale for Exact Tests

Exact tests provide a rigorous alternative to approximate statistical methods by deriving p-values directly from the exact sampling distribution under the null hypothesis, ensuring precise control of the Type I error rate, especially when sample sizes are small or key assumptions like normality or large expected frequencies are not met. This precision is crucial because approximate tests, such as those based on asymptotic chi-squared distributions, can lead to inflated Type I error rates—rejecting the null hypothesis more often than intended—when applied to limited data, thereby compromising the reliability of inferences. By enumerating all possible outcomes under the null, exact tests maintain the Type I error rate at or below the nominal significance level \alpha, without depending on approximations that perform poorly in finite samples. These tests are particularly motivated for applications involving categorical data, where variables are discrete and outcomes may include rare events, such as in epidemiological or clinical studies with low event rates. In such scenarios, asymptotic methods like the chi-squared test can misstate p-values due to the discrete nature of the data and sparse cell counts, resulting in falsely significant findings; exact tests mitigate this by conditioning on sufficient statistics to compute valid probabilities. For discrete distributions, where continuity corrections or simulations might introduce additional error, exact approaches offer theoretical guarantees of validity across all sample sizes, though they become computationally intensive for larger datasets. The development of exact tests arose in the early 20th century to address the shortcomings of approximate methods introduced in the late 19th and early 20th centuries, such as Pearson's chi-squared test, which relied on large-sample theory unsuitable for the modest datasets common in agricultural and biological research at the time. Ronald A. Fisher formalized the framework for exact inference in contingency tables during the 1930s, motivated by the need for exact randomization-based tests in experimental designs, as detailed in his seminal work that emphasized conditional inference to achieve precise error control. This historical advancement shifted statistical practice toward methods that prioritize exactness over convenience, influencing modern applications where data limitations persist despite advances in computing.

Theoretical Framework

Hypothesis Testing Basics

Hypothesis testing provides a formal framework for making inferences about a population based on sample data, by assessing evidence against a specified null hypothesis. The process begins with the formulation of a null hypothesis H_0, which posits no effect or a specific value for the parameter (e.g., \theta = \theta_0), and an alternative hypothesis H_a, which represents the research claim (e.g., \theta > \theta_0 or \theta \neq \theta_0). These hypotheses partition the parameter space into two complementary regions, guiding the decision-making process. Central to hypothesis testing is the test statistic T, a function of the observed data that quantifies the discrepancy between the sample and the null hypothesis. The significance level \alpha is predefined as the maximum acceptable probability of rejecting H_0 when it is true, defining the rejection region as the set of T values sufficiently extreme to warrant rejection (e.g., T > c for a one-sided test). The p-value, introduced by Fisher, measures the probability of obtaining a test statistic at least as extreme as observed, assuming H_0 is true; a small p-value (typically below \alpha) indicates evidence against H_0. The framework controls the Type I error rate at \alpha = P(\text{reject } H_0 \mid H_0 \text{ true}), as formalized in the Neyman-Pearson approach, while the Type II error probability \beta = P(\text{accept } H_0 \mid H_a \text{ true}) measures the risk of failing to detect an effect when it exists. The power of the test, defined as 1 - \beta, represents the probability of correctly rejecting H_0 under H_a, and tests are designed to maximize power for a fixed \alpha. In practice, hypothesis tests involve the distribution of the test statistic under H_0, which can be continuous or discrete. Continuous distributions, such as the normal or t distributions, allow exact attainment of \alpha through smooth densities and integrals, facilitating precise rejection regions without randomization. Discrete distributions, common with categorical data (e.g., binomial or hypergeometric), yield probabilities via sums over countable outcomes, where the discreteness can prevent exact attainment of \alpha, leading to conservative tests or the need for randomization to handle ties and achieve precise control. This underscores the importance of computing p-values directly from the exact null distribution in discrete cases, as approximations may distort error rates.
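The point about discreteness preventing exact attainment of \alpha can be made concrete with a small enumeration, here assuming the illustrative case of ten fair-coin trials with an upper-tail rejection rule. The attainable test sizes jump between values, and none equals 0.05 exactly:

```python
from math import comb

n, p0 = 10, 0.5

def upper_tail(c):
    """P(X >= c) for X ~ Bin(n, p0): the exact size of the test
    that rejects when the success count reaches threshold c."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(c, n + 1))

# The attainable sizes are a discrete set; no threshold hits 0.05.
for c in range(8, 11):
    print(c, round(upper_tail(c), 4))
# 8 0.0547
# 9 0.0107
# 10 0.001
```

Choosing threshold 9 gives a conservative test of size 0.0107, while threshold 8 exceeds the nominal 0.05; only a randomized rule could achieve exactly 0.05.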

Exact Distribution Computation

In exact statistical tests, the distribution under the null hypothesis H_0 is computed by enumerating all possible outcomes that are consistent with the observed data and the constraints imposed by H_0, assigning probabilities according to the underlying probability model. This approach ensures that the p-value reflects the exact tail probability of the test statistic T, without relying on large-sample approximations, by directly summing the probabilities of all outcomes at least as extreme as the observed one. For a discrete test statistic T, the null probability mass function is given by P(T = t \mid H_0), derived from the direct specification of the probability model under H_0. In cases involving binary outcomes, this often reduces to the binomial distribution, where the probability of k successes in n trials is P(K = k) = \binom{n}{k} p^k (1-p)^{n-k} under a null success probability p. For more general categorical data, such as contingency tables, multinomial coefficients are used to compute the probabilities of specific configurations, reflecting the joint distribution of counts across categories. A canonical example arises in 2×2 contingency tables under the null hypothesis of independence, where the exact distribution follows the hypergeometric distribution after appropriate conditioning. The probability of observing cell counts a, b, c, d (with row totals n_1 = a+b, n_2 = c+d, column totals m_1 = a+c, m_2 = b+d, and grand total N = n_1 + n_2) is P = \frac{\binom{n_1}{a} \binom{n_2}{c}}{\binom{N}{m_1}} = \frac{n_1! \, n_2! \, m_1! \, m_2!}{N! \, a! \, b! \, c! \, d!}. This formula arises from the multinomial model under independence, normalized over all tables with the fixed marginal totals. The conditioning on marginal totals plays a crucial role in simplifying the computation, as these totals are sufficient statistics under H_0 for the nuisance parameters (e.g., the unknown category probabilities). By conditioning on the observed marginals, the exact distribution eliminates dependence on unknown parameters, yielding a conditional hypergeometric form that is free of such parameters and facilitates enumeration. This approach, central to Fisher's exact test, ensures the test's validity even for small samples by focusing solely on the variability attributable to the hypothesis of interest.
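The hypergeometric table probability above can be evaluated with nothing beyond the standard library. The sketch below uses the all-correct table from Fisher's lady tasting tea experiment as input, purely for illustration:

```python
from math import comb

def table_probability(a, b, c, d):
    """Hypergeometric probability of the 2x2 table [[a, b], [c, d]],
    conditional on its row and column totals being fixed."""
    n1, n2 = a + b, c + d   # row totals
    m1 = a + c              # first column total
    N = n1 + n2             # grand total
    return comb(n1, a) * comb(n2, c) / comb(N, m1)

# All 8 cups classified correctly: P = 1/70
print(table_probability(4, 0, 0, 4))
```

Because the margins are fixed, the single cell a determines the whole table, and summing table_probability over the feasible values of a recovers 1, confirming the conditional distribution is properly normalized.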

Comparisons with Approximate Methods

Limitations of Asymptotic Approximations

Asymptotic tests, such as those relying on the normal approximation to the distribution of test statistics, are theoretically valid only in the limit as the sample size n approaches infinity, a condition derived from the central limit theorem and other large-sample results. In practice, this requirement means that the approximations hold reliably only for sufficiently large n, and deviations occur when samples are small, as the finite-sample distribution of the statistic may not closely match the assumed asymptotic form. With small sample sizes, asymptotic tests often produce p-values that are either conservative—resulting in Type I error rates below the nominal level \alpha—or anti-conservative, where Type I error rates exceed \alpha. This discrepancy arises because the tail probabilities of the test statistic's distribution are poorly approximated, leading to unreliable inference. For instance, conservative behavior reduces the test's power to detect true effects, while anti-conservative behavior inflates false positives. A prominent example of these issues appears in chi-squared tests for contingency tables, where expected frequencies below 5 in one or more cells cause the chi-squared approximation to the test statistic's distribution to become inaccurate, often yielding distorted p-values. Simulation studies confirm this, demonstrating that in small samples with sparse data, Type I error rates can deviate substantially from the nominal \alpha, sometimes by 50% or more depending on the table configuration and sample size. Asymptotic approximations generally suffice when all expected frequencies are at least 5, which typically requires moderate to large sample sizes depending on the table dimensions, ensuring the central limit theorem applies effectively; this is particularly true for continuous data where normality assumptions hold better. Even in borderline cases, however, exact tests are often preferred for their guaranteed control of error rates, avoiding any reliance on unverified large-sample conditions.
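Anti-conservative behavior is easy to exhibit exactly, without simulation, because the true size of an approximate test on discrete data can itself be enumerated. The sketch below assumes a hypothetical one-sample proportion z-test of H0: p = 0.05 with n = 20 trials (values chosen for illustration) and computes its true Type I error:

```python
from math import comb, sqrt

# Hypothetical setup: one-sample proportion z-test of H0: p = 0.05
# with n = 20 trials at nominal two-sided alpha = 0.05.
n, p0 = 20, 0.05
z_crit = 1.959964  # two-sided normal critical value

se = sqrt(p0 * (1 - p0) / n)
pmf = lambda k: comb(n, k) * p0**k * (1 - p0)**(n - k)

# Exact size: enumerate every possible outcome, check whether the
# normal-approximation test rejects, and sum the null probabilities.
size = sum(pmf(k) for k in range(n + 1) if abs(k / n - p0) / se > z_crit)
print(round(size, 4))  # 0.0755: the true Type I error exceeds 0.05
```

The nominal 5% test actually rejects about 7.5% of the time under the null in this configuration, a roughly 50% inflation of the error rate.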

Chi-Squared Test Versus Exact Alternatives

Pearson's chi-squared test is an approximate statistical method used to assess association between two categorical variables in a contingency table. The test statistic is given by X^2 = \sum_{i,j} \frac{(O_{ij} - E_{ij})^2}{E_{ij}}, where O_{ij} are the observed frequencies and E_{ij} are the expected frequencies under the null hypothesis of independence, calculated as E_{ij} = \frac{(\text{row total}_i) \times (\text{column total}_j)}{\text{grand total}}. This statistic is asymptotically distributed as a chi-squared distribution with (r-1)(c-1) degrees of freedom, where r and c are the number of rows and columns, respectively. In small samples, the chi-squared approximation can be unreliable, often resulting in p-values that are too low and thus overestimating the evidence against the null hypothesis. For instance, consider a 2×2 contingency table testing for association between treatment and outcome with observed frequencies as follows:

              Success   Failure
Treatment A      1         3
Treatment B      3         1

The row and column totals are 4 each, yielding expected frequencies of 2 in every cell (all below 5). The chi-squared statistic is X^2 = 2, with a p-value of approximately 0.16 from the \chi^2(1) distribution. In contrast, Fisher's exact test, conditioning on the fixed margins and using the hypergeometric distribution, enumerates all possible tables and computes a two-sided p-value of about 0.49 by summing the probabilities of tables at least as extreme as the observed one. This discrepancy highlights how the approximate test can mislead in sparse data. Exact tests are preferred over the chi-squared test when any expected frequency is less than 5 or the total sample size is less than 20, as these conditions violate the approximation's assumptions. The exact approach employs the hypergeometric distribution to derive precise p-values without relying on large-sample normality, ensuring greater accuracy for sparse contingency tables.
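The comparison can be reproduced with SciPy, which the article's software section names; chi2_contingency applies the asymptotic test and fisher_exact performs the exact conditional test on the same data:

```python
from scipy.stats import chi2_contingency, fisher_exact

# Observed 2x2 table: rows are treatments, columns success/failure
table = [[1, 3], [3, 1]]

chi2, p_approx, dof, expected = chi2_contingency(table, correction=False)
_, p_exact = fisher_exact(table, alternative='two-sided')

print(round(chi2, 2), round(p_approx, 3))  # 2.0 0.157
print(round(p_exact, 3))                   # 0.486
```

Note that correction=False disables the Yates continuity correction so that the raw X^2 = 2 statistic from the text is reported; the exact two-sided p-value of 34/70 ≈ 0.486 is roughly three times the approximate one.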

Key Examples

Fisher's Exact Test

Fisher's exact test is a statistical procedure for testing the null hypothesis of independence between two categorical variables in a 2×2 contingency table. It conditions on the observed marginal totals, which are treated as fixed, and calculates the exact probability of observing the given table or any table deemed more extreme under the null hypothesis using the hypergeometric distribution. This approach ensures that the test does not rely on large-sample approximations, making it particularly suitable for small sample sizes where asymptotic methods may fail. The test was developed by Ronald A. Fisher and first described in his 1935 book The Design of Experiments. To perform the test, the row totals (e.g., group sizes) and column totals (e.g., outcome counts) are fixed, enumerating all possible 2×2 tables consistent with these margins. The probability of each such table is computed via the hypergeometric formula: P(X = k) = \frac{\binom{r_1}{k} \binom{r_2}{c_1 - k}}{\binom{n}{c_1}} where r_1 and r_2 are the row totals, c_1 is a column total, n is the grand total, and k is the cell entry in the first row and first column. For the one-sided test, the p-value is the sum of probabilities for tables as extreme or more extreme than the observed in the direction of interest (e.g., greater association). The two-sided p-value typically sums the probabilities of all tables with probabilities less than or equal to that of the observed table, though alternative definitions exist for balancing the tails. A classic illustration of Fisher's exact test is the "lady tasting tea" experiment, also from Fisher's 1935 work. In this setup, a lady claimed she could distinguish whether milk was added to tea before or after the tea infusion. Fisher designed a randomized trial with 8 cups: 4 with milk first and 4 with tea first, presented in random order. The lady correctly identified all 4 of each type, corresponding to a 2×2 table with cell counts (4,0; 0,4).
Under the null hypothesis of no discrimination ability, the exact one-sided p-value is the probability of 4 or more correct identifications for one type, calculated as \frac{\binom{4}{4}\binom{4}{0}}{\binom{8}{4}} = \frac{1}{70} \approx 0.014, rejecting the null at the 5% significance level. This example demonstrates the test's power in small samples and its foundation in exact conditional inference.
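The 1/70 figure can be checked with SciPy's implementation of the test, passing the all-correct table and requesting the one-sided alternative:

```python
from scipy.stats import fisher_exact

# Lady tasting tea: 8 cups, 4 of each preparation, all classified correctly
table = [[4, 0], [0, 4]]
_, p_one_sided = fisher_exact(table, alternative='greater')

print(round(p_one_sided, 4))  # 0.0143 (= 1/70)
```

Had the lady misclassified a single cup (table [[3, 1], [1, 3]]), the one-sided p-value would rise to 17/70 ≈ 0.243, no longer significant at the 5% level.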

Exact Binomial Test

The exact binomial test assesses whether the observed proportion of successes in a fixed number of independent binary trials significantly deviates from a hypothesized probability p_0, under the null hypothesis H_0: p = p_0. The test leverages the exact probability mass function of the binomial distribution X \sim \text{Bin}(n, p_0), where n is the number of trials and X is the number of observed successes. For a one-sided test in the upper tail (testing for p > p_0) with observed successes x_{\text{obs}}, the p-value is the cumulative probability P(X \geq x_{\text{obs}}) = \sum_{k = x_{\text{obs}}}^{n} \binom{n}{k} p_0^k (1 - p_0)^{n - k}. Similarly, for the lower tail (p < p_0), it is P(X \leq x_{\text{obs}}) = \sum_{k = 0}^{x_{\text{obs}}} \binom{n}{k} p_0^k (1 - p_0)^{n - k}. These sums are computed directly from the discrete distribution, ensuring accuracy without reliance on large-sample approximations. Variants include one-sided tests for assessing superiority (p > p_0) or inferiority (p < p_0), often used in contexts like drug efficacy trials where directionality matters. For two-sided tests (p \neq p_0), a common approach doubles the smaller one-sided p-value and caps at 1: p_{\text{two-sided}} = \min(2 \times \min(p_{\text{lower}}, p_{\text{upper}}), 1). An alternative exact method enumerates all outcomes under H_0 whose probabilities are less than or equal to that of the observed outcome, summing their probabilities for the p-value; this controls the Type I error more conservatively in asymmetric settings. The test is essential for small n, where the normal approximation to the binomial can produce erroneous p-values due to skewness, discreteness, and tail inaccuracies, potentially leading to incorrect inferences. For instance, testing coin fairness (p_0 = 0.5) with n = 10 flips yielding 8 heads gives an exact one-sided upper-tail p-value of approximately 0.055 and a two-sided p-value of 0.109, both indicating non-significance at \alpha = 0.05; the normal approximation with continuity correction yields a two-sided p-value of about 0.114, which, while similar here, illustrates discrepancies that grow in smaller or more imbalanced cases and could flip borderline decisions. Applications arise in scenarios with binary outcomes and limited trials, such as evaluating defect rates in manufacturing (e.g., whether the proportion of faulty items is below a threshold) or success rates in preliminary medical studies with few patients, where exactness prevents over- or under-rejection of H_0.
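The coin-fairness example above maps directly onto SciPy's binomtest. Note that for the two-sided alternative SciPy sums the probabilities of all outcomes no more likely than the observed one rather than doubling a tail, but with the symmetric null p_0 = 0.5 the two definitions coincide:

```python
from scipy.stats import binomtest

# 8 heads in 10 flips, testing fairness (p0 = 0.5)
res_greater = binomtest(8, n=10, p=0.5, alternative='greater')
res_two = binomtest(8, n=10, p=0.5, alternative='two-sided')

print(round(res_greater.pvalue, 4))  # 0.0547
print(round(res_two.pvalue, 4))      # 0.1094
```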

Practical Considerations

Computational Challenges

Computing exact tests, particularly for contingency tables, presents significant challenges due to the need to enumerate all possible tables that match the observed marginal totals, resulting in a combinatorial explosion as the sample size n or table dimensions increase. For example, a 4×4 table with 100 observations requires evaluating approximately 7.2 \times 10^9 configurations, while a 5×5 table of the same size demands around 9.2 \times 10^{14} tables, making direct computation impractical for larger structures like 10×10 tables, where the number of possible configurations becomes astronomically large. In the worst cases, such as certain formulations of exact logistic regression, the time complexity approaches O(2^n) because of the exponential growth in the support of the conditional distribution of sufficient statistics. To mitigate these issues, Monte Carlo simulation provides an approximate solution by randomly sampling tables from the exact conditional distribution—typically the multivariate hypergeometric—and estimating the p-value as the proportion of simulated tables at least as extreme as the observed one, achieving desired accuracy with thousands of iterations. For exact computation in moderate-sized problems, network algorithms model the table generation as a directed network, pruning infeasible paths to efficiently sum probabilities without listing all configurations, as pioneered by Mehta and Patel for r \times c tables. Recursive methods complement this by incrementally computing the null distribution, avoiding redundant calculations in structured cases like stratified tables. Extensions to more complex settings amplify these challenges but leverage similar strategies. The Freeman-Halton test generalizes Fisher's exact test to r \times c tables by enumerating under the multivariate hypergeometric distribution, though the reference set grows rapidly beyond 2×2, often requiring network or recursive approaches for feasibility. Exact logistic regression, which conditions on sufficient statistics for valid inference in small or sparse datasets, depends on full enumeration of the exact conditional distribution, posing exponential demands that are typically addressed only for datasets with fewer than 20-30 observations or limited covariates.
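The Monte Carlo strategy described above can be sketched for the 2×2 case, where sampling from the conditional null distribution reduces to drawing the single (1,1) cell from a hypergeometric distribution. The example reuses the sparse treatment/outcome table from the chi-squared comparison; the sample size of 100,000 is an arbitrary illustrative choice:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

# 2x2 table [[a, b], [c, d]] with all margins held fixed
a, b, c, d = 1, 3, 3, 1
n1, n2, m1 = a + b, c + d, a + c

# The conditional null distribution of the (1,1) cell is hypergeometric,
# so sampling that one cell is enough to sample whole tables.
samples = rng.hypergeometric(ngood=n1, nbad=n2, nsample=m1, size=100_000)

# Two-sided Monte Carlo p-value: proportion of sampled tables whose
# probability is no larger than that of the observed table.
pmf = lambda k: comb(n1, k) * comb(n2, m1 - k) / comb(n1 + n2, m1)
p_mc = float(np.mean([pmf(k) <= pmf(a) for k in samples]))
print(round(p_mc, 2))  # close to the exact value 34/70 ≈ 0.486
```

For 2×2 tables full enumeration is of course trivial; the value of the sampling approach is that the same idea scales to r × c tables where enumeration is infeasible.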

Applications and Software

Exact tests find prominent applications in fields where sample sizes are small or event frequencies are low, ensuring reliable inference without relying on large-sample approximations. In genetics, they are employed to detect associations, particularly in genome-wide association studies (GWAS) involving small cohorts or rare variants, where Fisher's exact test assesses contingency tables for genotype-phenotype links. For instance, exact tests handle sequencing data from limited samples, providing precise p-values for rare variant identification without approximation biases. In clinical trials, exact methods analyze rare adverse events, such as through exact inference in meta-analyses of beta-binomial models for sparse event data, enabling accurate confidence intervals for safety evaluations when single trials yield few occurrences. In ecology, exact tests like Fisher's evaluate species co-occurrence in presence-absence data, testing independence between occurrences and environmental factors in contingency tables derived from field surveys. Several software packages implement exact tests efficiently, addressing computational demands that historically constrained their use. In R, the stats package includes fisher.test for Fisher's exact test on contingency tables and binom.test for exact binomial tests, while the exactci package extends exact confidence intervals for proportions. SAS PROC FREQ supports exact p-values and confidence intervals via the EXACT statement for contingency tables and proportions, suitable for categorical data analysis. Python's SciPy library provides scipy.stats.fisher_exact for 2×2 tables and scipy.stats.binomtest for exact binomial tests, facilitating integration in data-analysis workflows. Before the advent of modern computing, limitations such as manual calculation or early computers with restricted memory confined exact tests to very small samples (e.g., n < 10), often making them impractical beyond simple cases; advances in algorithms and hardware have since broadened accessibility. Best practices recommend exact tests for small samples, typically when n < 20 or expected cell frequencies are below 5, to avoid inaccuracies from asymptotic approximations like the chi-squared. For larger datasets, hybrid approaches combine exact methods for critical low-frequency components with approximations elsewhere, balancing accuracy and computational cost; this is particularly useful in multiple-testing scenarios like GWAS, where exact computations may be supplemented by permutation-based estimation.
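As a small usage sketch of the SciPy interface named above, the following computes both an exact p-value and an exact (Clopper-Pearson) confidence interval for a proportion; the figures (3 adverse events in 15 patients, null rate 0.1) are hypothetical:

```python
from scipy.stats import binomtest

# Hypothetical safety data: 3 adverse events in 15 patients,
# testing against a reference event rate of 0.1
res = binomtest(3, n=15, p=0.1, alternative='two-sided')
ci = res.proportion_ci(confidence_level=0.95, method='exact')

print(round(res.pvalue, 3))
print(round(ci.low, 3), round(ci.high, 3))
```

The 'exact' method requests the Clopper-Pearson interval, which inverts the exact binomial test and therefore guarantees at least the nominal coverage for any n, at the cost of some conservatism.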
