
Fermat primality test

The Fermat primality test is a probabilistic algorithm used to assess whether a given positive integer n > 1 is prime by leveraging Fermat's little theorem, which states that if n is prime and a is an integer not divisible by n, then a^{n-1} \equiv 1 \pmod{n}. The test operates by selecting one or more random bases a (typically between 2 and n-2) coprime to n, computing a^{n-1} \mod n, and checking whether the result equals 1; if it does not for some a, then n is definitively composite, as this violates the theorem for primes. Conversely, if the congruence holds for multiple independent trials (say, t bases), n is declared a probable prime with high confidence, as the probability of a composite n passing decreases exponentially with t (often to less than 1/2^t for randomly chosen bases), though it provides no absolute proof of primality. Named after the 17th-century French mathematician Pierre de Fermat, who formulated the underlying theorem in correspondence around 1640, the test was formalized as a practical primality procedure in the 1970s amid growing needs for efficient primality testing in cryptography and computing. It is computationally efficient, relying on fast algorithms like binary exponentiation, which run in O(\log n) multiplications modulo n, making it suitable for testing large numbers up to hundreds of digits. However, its reliability is undermined by Fermat pseudoprimes, composite numbers that pass the test for specific bases, and especially Carmichael numbers, square-free composites that satisfy the congruence for all bases coprime to n, such as 561 (3 \times 11 \times 17) or 1105; infinitely many Carmichael numbers exist, rendering the test inconclusive without additional checks. Despite these limitations, the Fermat test remains a foundational tool in probabilistic primality testing, often serving as a quick initial filter before more robust methods like the Miller-Rabin test (a strengthened variant that resists Carmichael numbers) or deterministic algorithms such as the AKS primality test. Its simplicity has influenced applications in cryptography, prime generation, and distributed computing projects, where false positives can be mitigated by iterating with diverse bases or combining with trial division for small factors.

Background

Fermat's Little Theorem

Fermat's Little Theorem states that if p is a prime and a is an integer such that \gcd(a, p) = 1, then a^{p-1} \equiv 1 \pmod{p}. This congruence highlights a fundamental property of primes in modular arithmetic, where the exponent p-1 is the order of the multiplicative group modulo p. The theorem was first proposed by Pierre de Fermat in a 1640 letter to Frénicle de Bessy, though Fermat provided no proof and claimed only to have discovered one. The first published proof appeared in 1736 from Leonhard Euler, who used induction together with the binomial expansion of (a + 1)^p. Euler later offered additional proofs, including ones aligning with emerging group-theoretic ideas.

A modern proof relies on group theory. Consider the multiplicative group G = (\mathbb{Z}/p\mathbb{Z})^\times, which consists of the integers from 1 to p-1 under multiplication modulo p and has order |G| = p-1. For a coprime to p, a generates a subgroup H = \langle a \rangle of G, where the order k of a is the smallest positive integer such that a^k \equiv 1 \pmod{p}. By Lagrange's theorem, k divides |G| = p-1, so p-1 = k \cdot m for some integer m, and thus a^{p-1} = (a^k)^m \equiv 1^m \equiv 1 \pmod{p}.

Special cases clarify the theorem's scope. For a = 1, the congruence holds trivially since 1^{p-1} \equiv 1 \pmod{p}. However, if \gcd(a, p) > 1, then since p is prime, p divides a, so a \equiv 0 \pmod{p} and a^{p-1} \equiv 0 \not\equiv 1 \pmod{p}; the theorem therefore requires coprimality. The theorem underpins efficient modular exponentiation, as exponents can be reduced modulo p-1 when computing powers of bases coprime to p, enabling fast algorithms like binary exponentiation essential for primality testing procedures.
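As a quick numeric illustration of the theorem, the following minimal Python sketch (using the built-in three-argument pow, which performs modular exponentiation) confirms both the congruence for a small prime and the necessity of the coprimality condition:

p = 7
for a in range(1, p):              # every a in 1..p-1 is coprime to the prime 7
    assert pow(a, p - 1, p) == 1   # Fermat: a^(p-1) ≡ 1 (mod p)

# Coprimality is essential: if p divides a, the power is 0, not 1.
print(pow(14, 6, 7))  # 0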

Overview of Probabilistic Primality Tests

Primality testing involves algorithms designed to determine whether a given positive integer is prime or composite, a fundamental problem in number theory with applications in cryptography and computational mathematics. Efficient primality tests are essential for identifying large primes required in secure systems, such as generating keys for public-key encryption. Deterministic primality tests, which provide absolute certainty without randomization, face significant challenges for large numbers. Simple methods like trial division, which check divisibility up to the square root of the number, have a time complexity of O(\sqrt{n}), rendering them impractical for numbers with hundreds of bits, as in cryptographic contexts. Even advanced deterministic algorithms, such as the AKS test introduced in 2002, achieve polynomial time (originally O(\log^{12+\epsilon} n), later improved to roughly O(\log^{6} n)) but remain computationally expensive in practice due to large constant factors and memory requirements. Probabilistic primality tests address these limitations by using randomization to achieve high confidence in results with far greater efficiency. These tests select random witnesses (bases) and verify number-theoretic conditions that primes satisfy, declaring a number "probably prime" if it passes multiple trials; the probability of error (misidentifying a composite as prime) decreases exponentially, typically bounded by less than (1/2)^k for k independent trials. This approach ensures that for practical purposes, such as in cryptography, the error rate can be made negligibly small, like 2^{-80}, with modest computational overhead. The development of probabilistic tests gained prominence in the 1970s, driven by the emergence of public-key cryptography, which necessitated fast primality verification for enormous integers. Early methods, including the Fermat test based on Fermat's little theorem, were among the first to exploit randomness for speed, predating more robust variants like the Solovay-Strassen test (1977). Their key advantage lies in leveraging efficient modular exponentiation, with a cost of around O(\log^3 n) bit operations per trial, enabling practical testing of numbers up to thousands of bits in seconds on modern hardware, vastly outperforming deterministic alternatives for real-world applications.

Description of the Test

Core Principle

The Fermat primality test is grounded in Fermat's little theorem, which states that if p is a prime and a is an integer coprime to p, then a^{p-1} \equiv 1 \pmod{p}. This theorem provides the foundational congruence that the test leverages to assess the primality of an odd integer n > 2. The core principle of the test relies on the contrapositive of the theorem: if there exists an integer a coprime to n such that a^{n-1} \not\equiv 1 \pmod{n}, then n is definitely composite. In this scenario, such an a serves as a witness to the compositeness of n. Conversely, if a^{n-1} \equiv 1 \pmod{n} holds for a randomly selected a coprime to n, then n passes the test and is considered probably prime; however, if n is actually composite, this a acts as a liar. Bases a for the test are typically chosen from the range 2 \leq a \leq n-2 with \gcd(a, n) = 1 to ensure coprimality. To increase confidence in the result, multiple distinct bases are tested iteratively, as a single liar does not rule out compositeness. For composite n that are not Carmichael numbers, at least half of the possible bases coprime to n are witnesses, yielding an error probability of at most 1/2 per test under these conditions.
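A single trial embodying this principle can be sketched in Python as follows (an illustrative fragment assuming odd n > 3, not a library routine; the function name is arbitrary):

import random
from math import gcd

def fermat_trial(n):
    """One Fermat trial for odd n > 3: False proves n composite;
    True means the base is either truthful (n prime) or a Fermat liar."""
    a = random.randint(2, n - 2)
    if gcd(a, n) > 1:              # shared factor: n is certainly composite
        return False
    return pow(a, n - 1, n) == 1   # check a^(n-1) ≡ 1 (mod n)

print(fermat_trial(97))   # True: 97 is prime, so every coprime base passes
print(fermat_trial(15))   # usually False: most bases expose 15 as composite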

Testing Procedure and Witnesses

The Fermat primality test begins by handling trivial cases: numbers such as n = 1 or even n > 2 are immediately determined to be not prime, as they fail basic primality conditions and fall outside the scope of Fermat's little theorem. For odd integers n \geq 3, the procedure relies on the contrapositive of the theorem, testing whether n behaves like a prime under modular exponentiation. To apply the test, select k random bases a such that 1 < a < n and \gcd(a, n) = 1, ensuring coprimality to align with the theorem's assumptions. In practice, to avoid computing the gcd beforehand, a random a in the range 2 to n-2 is selected; if \gcd(a, n) > 1, then n is composite. For each base, compute a^{n-1} \mod n using efficient methods like binary exponentiation. If a^{n-1} \not\equiv 1 \pmod{n} for any a, then n is definitively composite. If the congruence holds for all k bases, output "probably prime"; for non-Carmichael composites, the error probability (a false positive) is bounded above by 2^{-k} in the worst case, providing a confidence level of 1 - 2^{-k} under those conditions. A base a (with 1 < a < n and \gcd(a, n) = 1) is termed a Fermat witness to the compositeness of n if a^{n-1} \not\equiv 1 \pmod{n}, as this violation proves n is not prime. Conversely, a Fermat liar is a base a for which the congruence a^{n-1} \equiv 1 \pmod{n} holds despite n being composite, potentially misleading the test into suggesting primality. The proportion of witnesses versus liars determines the test's reliability for a given composite n, with at least half of the possible bases serving as witnesses for non-Carmichael composites.
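The witness/liar split can be made concrete by exhaustive enumeration for a small composite; the sketch below (illustrative Python; the function name is arbitrary) partitions the coprime bases of n = 15:

from math import gcd

def classify_bases(n):
    """Split the bases 1 < a < n coprime to n into Fermat witnesses and liars."""
    witnesses, liars = [], []
    for a in range(2, n - 1):
        if gcd(a, n) != 1:
            continue
        (liars if pow(a, n - 1, n) == 1 else witnesses).append(a)
    return witnesses, liars

w, l = classify_bases(15)   # 15 = 3 * 5 is composite
print(w, l)                 # [2, 7, 8, 13] [4, 11]: witnesses outnumber liars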

Examples and Illustrations

Basic Example with a Composite Number

Consider the composite number n = 91, which factors as 7 \times 13. To apply the Fermat test, select bases a coprime to 91 and compute a^{90} \mod 91. If the result is not 1 for some such a, then a is a Fermat witness proving n composite; if it equals 1 for all tested a, n may be prime or a pseudoprime to those bases. First, take a = 2 (with \gcd(2, 91) = 1). Compute 2^{90} \mod 91 using the binary exponentiation method, which efficiently calculates large powers via repeated squaring and multiplication. The binary representation of 90 is 1011010_2. Initialize result r = 1 and base b = 2; process the exponent from the least significant bit:
  • Exponent 90 (even): b \leftarrow 2^2 = 4 \mod 91, exponent \leftarrow 45.
  • Exponent 45 (odd): r \leftarrow 1 \times 4 = 4 \mod 91, b \leftarrow 4^2 = 16 \mod 91, exponent \leftarrow 22.
  • Exponent 22 (even): b \leftarrow 16^2 = 256 \equiv 74 \mod 91 (since 256 - 2 \times 91 = 74), exponent \leftarrow 11.
  • Exponent 11 (odd): r \leftarrow 4 \times 74 = 296 \equiv 23 \mod 91 (since 296 - 3 \times 91 = 23), b \leftarrow 74^2 = 5476 \equiv 16 \mod 91 (since 5476 - 60 \times 91 = 16), exponent \leftarrow 5.
  • Exponent 5 (odd): r \leftarrow 23 \times 16 = 368 \equiv 4 \mod 91 (since 368 - 4 \times 91 = 4), b \leftarrow 16^2 = 256 \equiv 74 \mod 91, exponent \leftarrow 2.
  • Exponent 2 (even): b \leftarrow 74^2 = 5476 \equiv 16 \mod 91, exponent \leftarrow 1.
  • Exponent 1 (odd): r \leftarrow 4 \times 16 = 64 \mod 91, b \leftarrow 16^2 = 256 \equiv 74 \mod 91, exponent \leftarrow 0.
Thus, 2^{90} \equiv 64 \mod 91, and 64 \not\equiv 1 \mod 91, so 2 is a Fermat witness confirming that 91 is composite. Now consider a = 10 (with \gcd(10, 91) = 1). Direct computation via binary exponentiation yields 10^{90} \equiv 1 \mod 91, making 10 a Fermat liar that falsely suggests 91 might be prime. This example illustrates the probabilistic nature of the test: a single witness definitively proves compositeness, but the existence of liars (exactly 36 out of the 72 coprime bases for 91) requires testing multiple random bases to confidently declare a number prime, as repeated liars reduce but do not eliminate the chance of error.
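These hand calculations are easy to confirm mechanically; a short Python check (using the built-in three-argument pow) reproduces the figures above:

from math import gcd

print(pow(2, 90, 91))    # 64: base 2 is a Fermat witness, so 91 is composite
print(pow(10, 90, 91))   # 1: base 10 is a Fermat liar

# Count liars among all bases coprime to 91.
coprime = [a for a in range(1, 91) if gcd(a, 91) == 1]
liars = [a for a in coprime if pow(a, 90, 91) == 1]
print(len(liars), "of", len(coprime))   # 36 of 72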

Example with a Prime Number

Consider the number n = 97, which is prime. To demonstrate the Fermat primality test succeeding on a prime, select bases a = 5 and a = 12 (both coprime to 97) and verify that a^{96} \equiv 1 \pmod{97} in each case, as required by the test procedure. For a = 5, compute 5^{96} \mod 97 using repeated squaring, which reduces the exponentiation to O(\log 96) multiplications modulo 97:
  • 5^1 \equiv 5 \pmod{97}
  • 5^2 \equiv 25 \pmod{97}
  • 5^4 \equiv 25^2 = 625 \equiv 43 \pmod{97} (since 625 - 6 \times 97 = 43)
  • 5^8 \equiv 43^2 = 1849 \equiv 6 \pmod{97} (since 1849 - 19 \times 97 = 6)
  • 5^{16} \equiv 6^2 = 36 \pmod{97}
  • 5^{32} \equiv 36^2 = 1296 \equiv 35 \pmod{97} (since 1296 - 13 \times 97 = 35)
  • 5^{64} \equiv 35^2 = 1225 \equiv 61 \pmod{97} (since 1225 - 12 \times 97 = 61)
Since 96 = 64 + 32, we have 5^{96} \equiv 5^{64} \times 5^{32} \equiv 61 \times 35 = 2135 \equiv 1 \pmod{97} (since 2135 - 22 \times 97 = 1). The result is 1, so 5 is not a witness against primality. For a = 12, the computation terminates even sooner:
  • 12^1 \equiv 12 \pmod{97}
  • 12^2 \equiv 144 \equiv 47 \pmod{97} (since 144 - 97 = 47)
  • 12^4 \equiv 47^2 = 2209 \equiv 75 \pmod{97} (since 2209 - 22 \times 97 = 75)
  • 12^8 \equiv 75^2 = 5625 \equiv 96 \pmod{97} (since 5625 - 58 \times 97 = -1 \equiv 96)
  • 12^{16} \equiv 96^2 = 9216 \equiv 1 \pmod{97} (since 96 \equiv -1 \pmod{97}, so 96^2 \equiv 1)
Thus, 12^{96} = (12^{16})^6 \equiv 1^6 = 1 \pmod{97}. Again, the result is 1. This outcome aligns with Fermat's Little Theorem, which guarantees a^{p-1} \equiv 1 \pmod{p} for prime p and a not divisible by p, ensuring that no witnesses exist for primes; for composites, by contrast, witnesses may appear probabilistically. Testing multiple bases, as done here, increases confidence in primality by reducing the chance of misleading results from pseudoprimes, though for a true prime all valid bases pass. For small n like 97, such computations are efficient and straightforward by hand or with basic software.
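The repeated-squaring results above can likewise be verified in one line per base, and exhaustively over all bases, with a brief Python check:

print(pow(5, 96, 97))    # 1
print(pow(12, 96, 97))   # 1

# For a prime modulus, Fermat's little theorem guarantees every base passes.
assert all(pow(a, 96, 97) == 1 for a in range(1, 97))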

Algorithm and Implementation

Step-by-Step Algorithm

The Fermat primality test is implemented as a probabilistic algorithm that verifies whether an input n > 1 satisfies Fermat's little theorem for randomly selected bases, declaring n composite if it fails for any base, or probably prime after a specified number of successful trials. The algorithm takes two inputs: an integer n > 1 to test for primality, and an integer k \geq 1 specifying the number of independent trials, with typical values of 10 to 20 used in cryptographic applications to achieve a low error probability (e.g., less than 2^{-20}). The following pseudocode outlines the core procedure, assuming efficient modular exponentiation for the power computation:
function FermatPrimalityTest(n, k):
    if n < 2:
        return "Composite"
    if n == 2 or n == 3:
        return "Prime"
    if n % 2 == 0:
        return "Composite"

    for i from 1 to k:
        a = random integer in [2, n-2]
        if gcd(a, n) > 1:
            return "Composite"
        r = a^(n-1) mod n  // using modular exponentiation
        if r != 1:
            return "Composite"

    return "Probably Prime"
This procedure first handles trivial cases, then performs k iterations: in each, a random base a is selected, its gcd with n is checked (an optimization to detect shared factors early), and a^{n-1} \mod n is computed; failure in any trial definitively identifies a composite, while success in all trials suggests probable primality. For simplicity in non-randomized variants or initial screening, fixed small bases such as 2, 3, and 5 can be used instead of random selection, though this weakens the probabilistic guarantees unless more bases are added. The output is either "Composite" (a deterministic declaration, as any failure proves compositeness) or "Probably Prime" (a probabilistic result, with error probability governed by the choice of k and the proportion of misleading bases). Edge cases include n = 1 (not prime), n = 2 (prime), and powers of 2 greater than 2 (composite, handled by the even-number check).
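A direct Python translation of this pseudocode might look as follows (a minimal sketch relying only on the standard library; the output strings mirror the pseudocode's):

import random
from math import gcd

def fermat_primality_test(n, k=20):
    """Fermat test: 'Composite' is definitive; 'Probably Prime' is probabilistic."""
    if n < 2:
        return "Composite"
    if n == 2 or n == 3:
        return "Prime"
    if n % 2 == 0:
        return "Composite"
    for _ in range(k):
        a = random.randint(2, n - 2)
        if gcd(a, n) > 1:              # shared factor detected early
            return "Composite"
        if pow(a, n - 1, n) != 1:      # a is a Fermat witness
            return "Composite"
    return "Probably Prime"

print(fermat_primality_test(97))    # Probably Prime
print(fermat_primality_test(91))    # Composite (a witness is found quickly)
print(fermat_primality_test(561))   # Carmichael number: every coprime base lies,
                                    # so only the gcd check can expose it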

Computational Complexity

The computational complexity of the Fermat primality test is primarily determined by the modular exponentiation step, which computes a^{n-1} \mod n for each chosen base a. For a single trial, binary exponentiation requires O(\log n) modular multiplications, each of which takes O(\log^2 n) bit operations using schoolbook arithmetic, yielding an overall complexity of O(\log^3 n). With k trials, the total time complexity becomes O(k \log^3 n). Employing advanced multiplication methods, such as those based on the fast Fourier transform, reduces the per-trial complexity to O(\log^{2+\epsilon} n) for any \epsilon > 0, or approximately O(\log^2 n \log \log n) in practice, resulting in O(k \log^2 n \log \log n) overall. The space complexity is O(\log n), as the algorithm stores only the input number n and intermediate values of comparable bit length. Compared to trial division, which requires O(\sqrt{n}) operations and becomes infeasible for large n (exponential time in \log n), the Fermat test offers polynomial-time performance, enabling testing of 1024-bit numbers (approximately 10^{308}) in well under a second on modern hardware. Although probabilistic, the test scales well for extremely large numbers up to 10^{1000} or beyond, as its polylogarithmic dependence on the input value supports practical use in big-integer arithmetic libraries.
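To illustrate the practical speed, the following sketch times a single trial on a random 1024-bit odd number (illustrative only; timings vary by hardware):

import random
import time

n = random.getrandbits(1024) | (1 << 1023) | 1   # random 1024-bit odd number
a = random.randrange(2, n - 1)

start = time.perf_counter()
r = pow(a, n - 1, n)          # binary exponentiation: O(log n) multiplications
elapsed = time.perf_counter() - start

print(r == 1)                                # almost certainly False for random n
print(f"one trial: {elapsed * 1000:.2f} ms")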

Limitations

Fermat Pseudoprimes

A Fermat pseudoprime to base a is a composite positive integer n such that \gcd(a, n) = 1 and a^{n-1} \equiv 1 \pmod{n}. This property causes the number to pass the Fermat primality test for that specific base despite not being prime, illustrating a key limitation of the test as a primality criterion. Such pseudoprimes arise when n divides a^{n-1} - 1 even though n is composite, often because the multiplicative structure of its prime factors aligns in a way that satisfies the congruence for particular a. The smallest Fermat pseudoprime to base 2 is 341, which factors as 11 \times 31 and satisfies 2^{340} \equiv 1 \pmod{341}. This example was first noted by Pierre-François Sarrus in 1819, an early recognition of the phenomenon, though the formal concept of Fermat pseudoprimes was developed later in the literature. Fermat pseudoprimes to a fixed base are relatively rare among composite numbers. For instance, the count of base-2 Fermat pseudoprimes up to x is bounded above by x^{1 - \frac{1}{2} \frac{\log \log \log x}{\log \log x}} for sufficiently large x. Their density decreases significantly when multiple independent bases are used in the test; for example, while there are 78 base-2 Fermat pseudoprimes below 10^5, only a handful pass tests for both bases 2 and 3 in the same range, enhancing the test's reliability against false positives.
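Both counts quoted above can be reproduced by brute force; the following self-contained Python sketch (naive trial division, adequate below 10^5) enumerates the base-2 pseudoprimes and applies the base-3 filter:

def is_prime(n):
    """Trial division, sufficient for n < 10**5."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Odd composites n with 2^(n-1) ≡ 1 (mod n): base-2 Fermat pseudoprimes.
psp2 = [n for n in range(3, 10**5, 2)
        if not is_prime(n) and pow(2, n - 1, n) == 1]
print(len(psp2), psp2[:5])   # 78 [341, 561, 645, 1105, 1387]

# Requiring base 3 to pass as well eliminates most of them.
psp23 = [n for n in psp2 if pow(3, n - 1, n) == 1]
print(len(psp23))            # far fewer survive both bases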

Carmichael Numbers and Stronger Flaws

Carmichael numbers represent a severe limitation of the Fermat primality test, as they are composite integers that satisfy the test's condition for every base coprime to them, unlike ordinary Fermat pseudoprimes, which only do so for specific bases. A Carmichael number is defined as a composite positive integer n > 1 that is square-free and such that a^{n-1} \equiv 1 \pmod{n} for all integers a coprime to n. This property is equivalent to Korselt's criterion, which states that n must be square-free and, for every prime p dividing n, p-1 divides n-1. These numbers are named after the mathematician Robert D. Carmichael, who in 1910 constructed the first examples while studying properties related to Fermat's little theorem. The smallest Carmichael number is 561, which factors as 3 \times 11 \times 17 and satisfies Korselt's criterion since 2, 10, and 16 all divide 560. For such n, the Fermat test declares n probably prime regardless of the base chosen (as long as it is coprime to n), because every qualifying base serves as a "liar" that masks the compositeness of n. This universal failure renders the test unreliable for detecting such composites when using random or single bases, potentially leading to misidentification in applications like prime generation. The connection between Carmichael numbers and the flaws in probabilistic primality testing, including the Fermat method, was analyzed by Paul Erdős in 1956, who introduced techniques to construct Carmichael numbers and bound their distribution. Although infinitely many Carmichael numbers exist, a result proven by Alford, Granville, and Pomerance in 1994 using constructions involving products of primes with controlled factors, they remain exceedingly rare. Erdős established an upper bound showing that the count of Carmichael numbers up to n is at most O\left( \frac{n}{\exp(c \sqrt{\log n \log \log n})} \right) for some constant c > 0, far sparser than the density of primes, which is approximately n / \log n. This scarcity implies that Carmichael numbers pose a low-probability but catastrophic failure mode for single-base Fermat tests, emphasizing the need for multiple bases or stronger variants to mitigate false positives.
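Korselt's criterion gives a mechanical way to recognize these numbers; a compact Python sketch (illustrative, with naive factorization suited only to small n) checks it for 561 and 341:

def korselt(n):
    """True iff n is composite, square-free, and p-1 | n-1 for every prime p | n."""
    m, p, factors = n, 2, []
    while p * p <= m:
        if m % p == 0:
            factors.append(p)
            m //= p
            if m % p == 0:     # repeated prime factor: not square-free
                return False
        else:
            p += 1
    if m > 1:
        factors.append(m)
    if len(factors) < 2:       # primes (and 1) are excluded
        return False
    return all((n - 1) % (p - 1) == 0 for p in factors)

print(korselt(561))   # True: 2, 10, and 16 all divide 560
print(korselt(341))   # False: 341 = 11 * 31, but 30 does not divide 340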

Variants and Extensions

Deterministic Fermat Testing

While the Fermat primality test is inherently probabilistic, using a fixed set of multiple bases can reduce the probability of error and provide practical reliability for numbers below certain small bounds, combined with trial division by small primes. However, due to the existence of Carmichael numbers (composite numbers that pass the Fermat test for every base coprime to them), no fixed set of small bases can guarantee determinism for arbitrarily large n, as infinitely many such Carmichael numbers with large prime factors exist. For true deterministic testing with a fixed number of bases up to large bounds (e.g., 64 bits), stronger variants like the Miller-Rabin test are required, which modify the condition checked in each iteration to detect Carmichael numbers. The advantages of using multiple bases in the Fermat test include lower error probability compared to a single base, while keeping computational cost low at O(k \log^3 n) time for k bases, dominated by modular exponentiation. Nonetheless, for applications needing certainty, such as cryptographic prime generation, the limitations of the Fermat test motivate reliance on more robust methods. Historical development of probabilistic tests based on Fermat's little theorem dates to the 1970s, with empirical studies showing that multiple random or fixed bases improve confidence, though full determinism remains elusive without additional techniques.

Relation to Other Primality Tests

The Fermat primality test, while simple and computationally efficient, is strictly weaker than the Miller-Rabin test: it can be fooled by Carmichael numbers (composite numbers that pass the test for all bases coprime to the number), whereas Miller-Rabin detects such composites using the concept of strong pseudoprimes and achieves an error probability of less than 4^{-k} after k iterations. Despite this, the Fermat test is fast per iteration, relying on a single modular exponentiation per base, making it suitable for initial screening before applying more robust methods like Miller-Rabin. In practice, the Fermat test often serves as a preliminary filter in primality testing pipelines, quickly eliminating obvious composites before proceeding to Miller-Rabin or the deterministic AKS test, thereby reducing overall computational load for large candidates. Historically, the Fermat test predates modern probabilistic alternatives such as the Solovay-Strassen test introduced in 1977, which builds on Euler's criterion for improved reliability but at higher complexity; today, the Fermat method occupies a niche role overshadowed by these advancements. Its primary strengths lie in ease of implementation, requiring only basic modular arithmetic, and in its utility for educational purposes, where the foundational link to Fermat's little theorem can be taught without the intricacies of stronger tests. The Fermat test is preferred in scenarios involving very large numbers where absolute certainty is secondary to speed, such as rapid probabilistic assessments in early-stage cryptographic key exploration. For smaller numbers, combinations of the Fermat test with trial division provide quick checks in constrained ranges.

Applications

Use in Software and Libraries

The Great Internet Mersenne Prime Search (GIMPS) employs the Fermat probable prime test as the primary initial primality check for Mersenne numbers of the form 2^p - 1, using base 3 to avoid the trivial passes that would occur with base 2 for such candidates. Introduced in 2018, this test replaced earlier deterministic methods for first-pass screening, allowing distributed volunteers to efficiently discard composites before the more intensive Lucas-Lehmer verification of promising exponents. Its computational efficiency, requiring only a single modular exponentiation per trial, aligns well with the project's need to process vast search spaces across volunteer hardware. In the 1990s, early versions of the Pretty Good Privacy (PGP) software relied on the Fermat test for probabilistic primality assessment during the generation of large random primes for cryptographic keys, reflecting the era's emphasis on simple, fast implementations in standalone tools.

Role in Cryptography and Prime Generation

The Fermat primality test serves as an efficient initial filter in the generation of large primes for RSA key pairs, where candidate odd integers are subjected to multiple rounds of the test to identify probable primes before more computationally intensive deterministic or stronger probabilistic verification. This approach accelerates key generation by quickly eliminating obvious composites, as recommended in standards for integer factorization-based cryptography. In implementations adhering to OpenPGP standards, such as those in GnuPG (built on Libgcrypt), the Fermat test is employed as an initial filter during RSA key creation, balancing speed and security for email encryption workflows. In distributed computing projects for prime discovery, like the Great Internet Mersenne Prime Search (GIMPS), the Fermat test functions as a preliminary sieve within the project's software to assess candidate Mersenne numbers (of the form 2^p - 1) for probable primality. Since 2018, this probable prime (PRP) test has been the first-stage filter, discarding composites rapidly before proceeding to the definitive Lucas-Lehmer test, thereby optimizing resource allocation across the volunteer network. For cryptographic purposes, the Fermat test's false positive rate (the chance a composite passes as probably prime) can be bounded above by 2^{-k} in the worst case after k independent trials with random bases coprime to the candidate, for non-Carmichael composites. Performing 40 trials yields an error probability of at most 2^{-40} \approx 9 \times 10^{-13}, which provides sufficient assurance for many applications when the test is chained with trial division or other sieves, though it falls short of the 2^{-80} threshold often desired for long-term security without supplementary methods. However, because of vulnerabilities like Carmichael numbers, which fool the test for every coprime base, modern protocols typically pair it with stronger tests such as Miller-Rabin to mitigate risks. As of 2025, the Fermat test remains a supplementary tool in prime generation for public-key cryptography, used as a quick initial filter in libraries like Libgcrypt (underlying GnuPG), but it has been largely supplanted by more reliable alternatives like optimized Miller-Rabin variants in high-security contexts. It persists in embedded systems and legacy implementations where minimal resource use is paramount, serving as a quick rejector of non-primes in constrained environments.