Fermat primality test
The Fermat primality test is a probabilistic algorithm used to assess whether a given positive integer n > 1 is prime by leveraging Fermat's Little Theorem, which states that if n is prime and a is an integer not divisible by n, then a^{n-1} \equiv 1 \pmod{n}.[1][2][3] The test operates by selecting one or more random bases a (typically between 2 and n-2) coprime to n, computing a^{n-1} \mod n, and checking whether the result equals 1; if the result differs from 1 for some a, then n is definitively composite, as this would violate the theorem for a prime.[1][2][3] Conversely, if the congruence holds for multiple independent trials (say, t bases), n is declared a probable prime with high confidence, as the probability of a composite n passing decreases exponentially with t, often to less than 1/2^t for randomly chosen bases, though the test provides no absolute proof of primality.[2][3]
Named after the 17th-century French mathematician Pierre de Fermat, who formulated the underlying theorem in correspondence around 1640, the test was formalized as a practical primality procedure in the 20th century amid growing needs for efficient primality testing in cryptography and computing.[2] It is computationally efficient, relying on fast modular exponentiation algorithms like binary exponentiation, which run in O(\log n) multiplications modulo n, making it suitable for testing large numbers up to hundreds of digits.[1][3] However, its reliability is undermined by Fermat pseudoprimes (composite numbers that pass the test for specific bases) and especially by Carmichael numbers, square-free composites that satisfy the congruence for all bases coprime to n, such as 561 (3 \times 11 \times 17) or 1105; infinitely many Carmichael numbers exist, rendering the test unreliable as a proof of primality without additional checks.[1][2][3]
Despite these limitations, the Fermat test remains a foundational tool in probabilistic primality testing, often serving as a quick initial filter before more robust methods like the Miller-Rabin test (a strengthened variant that resists Carmichael numbers) or deterministic algorithms such as the AKS primality test.[1][3] Its simplicity has influenced applications in public-key cryptography, random number generation, and computational number theory, where false positives can be mitigated by iterating with diverse bases or combining with trial division for small factors.[2][3]
Background
Fermat's Little Theorem
Fermat's Little Theorem states that if p is a prime number and a is an integer such that \gcd(a, p) = 1, then a^{p-1} \equiv 1 \pmod{p}.[4] This congruence highlights a fundamental property of primes in modular arithmetic, where the exponent p-1 acts as the order of the multiplicative group modulo p.[4]
The theorem was first stated by Pierre de Fermat in a 1640 letter to Frénicle de Bessy, though Fermat provided no proof, claiming only to have found one.[4] The first published proof appeared in 1736 from Leonhard Euler, who used induction on a together with the binomial expansion of (a + 1)^p.[4] Euler later offered additional proofs, including ones aligning with emerging group-theoretic ideas.[4]
A modern proof relies on group theory: Consider the multiplicative group G = (\mathbb{Z}/p\mathbb{Z})^\times, which consists of the integers from 1 to p-1 under multiplication modulo p and has order |G| = p-1.[5] For a coprime to p, a generates a subgroup H = \langle a \rangle of G, where the order k of a is the smallest positive integer such that a^k \equiv 1 \pmod{p}.[5] By Lagrange's theorem, k divides |G| = p-1, so p-1 = k m for some integer m, and thus a^{p-1} = (a^k)^m \equiv 1^m \equiv 1 \pmod{p}.[5]
Special cases clarify the theorem's scope. For a = 1, the congruence holds trivially since 1^{p-1} \equiv 1 \pmod{p}.[4] However, if \gcd(a, p) > 1, then since p is prime it divides a, so a \equiv 0 \pmod{p} and a^{p-1} \equiv 0 \not\equiv 1 \pmod{p}; the theorem therefore requires coprimality.[4]
The theorem underpins efficient modular exponentiation, as exponents can be reduced modulo p-1 when computing powers coprime to p, enabling fast algorithms like binary exponentiation essential for primality testing procedures.[4]
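As an illustration, the congruence can be checked directly for a small prime with Python's built-in three-argument pow, which performs binary exponentiation; this is a minimal sketch, with the prime 97 chosen arbitrarily:

    # Verify Fermat's Little Theorem: a^(p-1) ≡ 1 (mod p) for prime p
    # and every base a with gcd(a, p) = 1. Here p = 97 is an arbitrary prime,
    # so every a in 1..p-1 is coprime to p.
    p = 97
    assert all(pow(a, p - 1, p) == 1 for a in range(1, p))
    print(f"a^{p-1} mod {p} = 1 for all a in 1..{p-1}")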
Overview of Probabilistic Primality Tests
Primality testing involves algorithms designed to determine whether a given positive integer is prime or composite, a fundamental problem in number theory with applications in cryptography and computational mathematics. Efficient primality tests are essential for identifying large primes required in secure systems, such as generating keys for public-key encryption.[6]
Deterministic primality tests, which provide absolute certainty without randomization, face significant challenges for large numbers. Simple methods like trial division, which check divisibility up to the square root of the number, have a time complexity of O(√n), rendering them impractical for numbers with hundreds of bits, as in cryptographic contexts. Even advanced deterministic algorithms, such as the AKS test introduced in 2002, achieve polynomial time complexity—originally O(log^{12+ε} n) and later improved to O(log^{6} n)—but remain computationally expensive due to high constants and massive storage requirements; for instance, verifying a 1024-bit number can demand billions of gigabytes of memory and excessive operations in polynomial congruence checks.[6][7]
Probabilistic primality tests address these limitations by using randomization to achieve high confidence in results with far greater efficiency. These tests select random witnesses (bases) and verify number-theoretic conditions that primes satisfy, declaring a number "probably prime" if it passes multiple trials; the probability of error—misidentifying a composite as prime—decreases exponentially, typically bounded by less than (1/2)^k for k independent trials. This approach ensures that for practical purposes, such as in cryptography, the error rate can be made negligibly small, like 2^{-80}, with modest computational overhead.[8][9]
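Under the heuristic bound above, the number of trials needed for a target error level follows directly; the short sketch below (function name illustrative) computes it:

    import math

    # Trials k needed so the heuristic error bound (1/2)^k falls below
    # a target epsilon; for epsilon = 2^-80 this gives k = 80.
    def trials_needed(epsilon: float) -> int:
        return math.ceil(math.log2(1 / epsilon))

    print(trials_needed(2 ** -80))  # -> 80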
The development of probabilistic tests gained prominence in the 1970s, driven by the emergence of public-key cryptography, which necessitated fast primality verification for enormous integers. Early methods, including the Fermat test based on Fermat's Little Theorem, were among the first to exploit randomization for speed, predating more robust variants like Solovay-Strassen (1977). Their key advantage lies in leveraging efficient modular exponentiation, with time complexity around O(log^3 n) per trial, enabling practical testing of numbers up to thousands of bits in seconds on modern hardware—vastly outperforming deterministic alternatives for real-world applications.[10][6][2]
Description of the Test
Core Principle
The Fermat primality test is grounded in Fermat's Little Theorem, which states that if p is a prime number and a is an integer coprime to p, then a^{p-1} \equiv 1 \pmod{p}.[3] This theorem provides the foundational congruence that the test leverages to assess the primality of an odd integer n > 2.
The core principle of the test relies on the contrapositive of Fermat's Little Theorem: if there exists an integer a coprime to n such that a^{n-1} \not\equiv 1 \pmod{n}, then n is necessarily composite.[11] In this scenario, such an a serves as a witness to the compositeness of n. Conversely, if a^{n-1} \equiv 1 \pmod{n} holds for a randomly selected a coprime to n, then n passes the test and is considered probably prime; however, if n is actually composite, this a acts as a liar.[3]
Bases a for the test are typically chosen from the range 2 \leq a < n with \gcd(a, n) = 1 to ensure coprimality.[11] To increase confidence in the result, multiple distinct bases are tested iteratively, as a single liar does not rule out compositeness. For composite n that are not Carmichael numbers, at least half of the possible bases coprime to n are witnesses, yielding an error probability of at most 1/2 per test under these conditions.[11]
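A single trial of this principle can be sketched in Python as follows; the function name fermat_trial is illustrative, and the base a is assumed to be supplied by the caller:

    import math

    def fermat_trial(n: int, a: int) -> bool:
        """One Fermat trial: True means a is not a witness (n may be prime);
        False means a witnesses that n is composite."""
        if math.gcd(a, n) != 1:
            return False               # shared factor: n is certainly composite
        return pow(a, n - 1, n) == 1   # Fermat condition a^(n-1) ≡ 1 (mod n)

    # 341 = 11 * 31 is composite, yet base 2 is a liar while base 3 is a witness:
    print(fermat_trial(341, 2))  # True  (liar)
    print(fermat_trial(341, 3))  # False (witness)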
Testing Procedure and Witnesses
The Fermat primality test begins by handling trivial cases: numbers such as n = 1 or even n > 2 are immediately determined to be not prime, as they fail basic primality conditions and do not satisfy the conditions of Fermat's Little Theorem.[12][13] For odd integers n \geq 3, the procedure relies on the contrapositive of Fermat's Little Theorem, testing whether n behaves like a prime under modular exponentiation.[12]
To apply the test, select k random bases a such that 1 < a < n and \gcd(a, n) = 1, ensuring coprimality to align with the theorem's assumptions. In practice, to avoid computing the gcd beforehand, a random a in the range 2 to n-2 is selected; if \gcd(a, n) > 1, then n is composite.[13] For each base, compute a^{n-1} \mod n using efficient methods like binary exponentiation. If a^{n-1} \not\equiv 1 \pmod{n} for any a, then n is definitively composite. If the congruence holds for all k bases, output "probably prime"; for non-Carmichael composites the error probability (a false positive) is bounded above by 2^{-k}, giving a confidence level of 1 - 2^{-k} under those conditions.[12][13]
A base a (with 1 < a < n and \gcd(a, n) = 1) is termed a Fermat witness to the compositeness of n if a^{n-1} \not\equiv 1 \pmod{n}, as this violation proves n is not prime. Conversely, a Fermat liar is a base a for which the congruence holds (a^{n-1} \equiv 1 \pmod{n}) despite n being composite, potentially misleading the test into suggesting primality.[12][13] The proportion of witnesses versus liars determines the test's reliability for a given composite n, with at least half of the possible bases typically serving as witnesses for non-Carmichael composites.[12]
Examples and Illustrations
Basic Example with a Composite Number
Consider the composite number n = 91, which factors as 7 \times 13. To apply the Fermat primality test, select bases a coprime to 91 and compute a^{90} \mod 91. If the result differs from 1 for some such a, that a is a Fermat witness proving n composite; if it equals 1 for every tested a, n may be prime or a pseudoprime to those bases.[2]
First, take a = 2 (with \gcd(2, 91) = 1). Compute 2^{90} \mod 91 using the binary exponentiation method, which efficiently calculates large powers via repeated squaring and multiplication. The binary representation of 90 is 1011010_2. Initialize result r = 1 and base b = 2; process the exponent from the least significant bit:
- Exponent 90 (even): b \leftarrow 2^2 = 4 \mod 91, exponent \leftarrow 45.
- Exponent 45 (odd): r \leftarrow 1 \times 4 = 4 \mod 91, b \leftarrow 4^2 = 16 \mod 91, exponent \leftarrow 22.
- Exponent 22 (even): b \leftarrow 16^2 = 256 \equiv 74 \mod 91 (since 256 - 2 \times 91 = 74), exponent \leftarrow 11.
- Exponent 11 (odd): r \leftarrow 4 \times 74 = 296 \equiv 23 \mod 91 (since 296 - 3 \times 91 = 23), b \leftarrow 74^2 = 5476 \equiv 16 \mod 91 (since 5476 - 60 \times 91 = 16), exponent \leftarrow 5.
- Exponent 5 (odd): r \leftarrow 23 \times 16 = 368 \equiv 4 \mod 91 (since 368 - 4 \times 91 = 4), b \leftarrow 16^2 = 256 \equiv 74 \mod 91, exponent \leftarrow 2.
- Exponent 2 (even): b \leftarrow 74^2 = 5476 \equiv 16 \mod 91, exponent \leftarrow 1.
- Exponent 1 (odd): r \leftarrow 4 \times 16 = 64 \mod 91, b \leftarrow 16^2 = 256 \equiv 74 \mod 91, exponent \leftarrow 0.
Thus, 2^{90} \equiv 64 \mod 91, and 64 \not\equiv 1 \mod 91, so 2 is a Fermat witness confirming 91 is composite.[2]
Now consider a = 10 (with \gcd(10, 91) = 1). Direct computation via binary exponentiation yields 10^{90} \equiv 1 \mod 91, making 10 a Fermat liar that falsely suggests 91 might be prime.[2]
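Both hand computations can be cross-checked in Python, whose built-in three-argument pow performs the same square-and-multiply reduction:

    print(pow(2, 90, 91))   # -> 64: base 2 is a witness, so 91 is composite
    print(pow(10, 90, 91))  # -> 1: base 10 is a liar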
This example illustrates the probabilistic nature of the test: a single witness definitively proves compositeness, but the existence of liars (exactly 36 out of the 72 coprime bases for 91) requires testing multiple random bases to confidently declare a number prime, as repeated liars reduce but do not eliminate the chance of error.[2]
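The liar count cited above can be reproduced by brute-force enumeration, feasible here only because n = 91 is tiny:

    from math import gcd

    n = 91
    coprime = [a for a in range(1, n) if gcd(a, n) == 1]
    liars = [a for a in coprime if pow(a, n - 1, n) == 1]
    print(len(coprime), len(liars))  # -> 72 36: exactly half the bases lie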
Example with a Prime Number
Consider the number n = 97, which is a prime.[14] To demonstrate the Fermat primality test succeeding on a prime, select bases a = 5 and a = 12 (both coprime to 97) and verify that a^{96} \equiv 1 \pmod{97} in each case, as required by the test procedure.[2]
For a = 5, compute 5^{96} \mod 97 using repeated squaring, which efficiently reduces the exponentiation to O(\log 96) multiplications modulo 97:
- 5^1 \equiv 5 \pmod{97}
- 5^2 \equiv 25 \pmod{97}
- 5^4 \equiv 25^2 = 625 \equiv 43 \pmod{97} (since 625 - 6 \times 97 = 43)
- 5^8 \equiv 43^2 = 1849 \equiv 6 \pmod{97} (since 1849 - 19 \times 97 = 6)
- 5^{16} \equiv 6^2 = 36 \pmod{97}
- 5^{32} \equiv 36^2 = 1296 \equiv 35 \pmod{97} (since 1296 - 13 \times 97 = 35)
- 5^{64} \equiv 35^2 = 1225 \equiv 61 \pmod{97} (since 1225 - 12 \times 97 = 61)
Since 96 = 64 + 32, 5^{96} \equiv 5^{64} \times 5^{32} \equiv 61 \times 35 = 2135 \equiv 1 \pmod{97} (since 2135 - 22 \times 97 = 1). The result is 1, so 5 is not a witness against primality.[2]
For a = 12, the computation simplifies further:
- 12^1 \equiv 12 \pmod{97}
- 12^2 \equiv 144 \equiv 47 \pmod{97} (since 144 - 97 = 47)
- 12^4 \equiv 47^2 = 2209 \equiv 75 \pmod{97} (since 2209 - 22 \times 97 = 75)
- 12^8 \equiv 75^2 = 5625 \equiv 96 \pmod{97} (since 5625 - 58 \times 97 = -1 \equiv 96)
- 12^{16} \equiv 96^2 = 9216 \equiv 1 \pmod{97} (since 96 \equiv -1 \pmod{97}, so 96^2 \equiv 1)
Thus, 12^{96} = (12^{16})^6 \equiv 1^6 = 1 \pmod{97}. Again, the result is 1.[2]
This outcome aligns with Fermat's Little Theorem, which guarantees a^{p-1} \equiv 1 \pmod{p} for prime p and a not divisible by p, ensuring no witnesses exist for primes—unlike composites, where witnesses may appear probabilistically. Testing multiple bases, as done here, increases confidence in primality by reducing the chance of misleading results from pseudoprimes, though for true primes all valid bases pass. For small n like 97, such computations are efficient and straightforward by hand or basic software.[2]
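A quick machine check of both computations, again using Python's modular pow:

    assert pow(5, 96, 97) == 1    # 5 passes, consistent with 97 being prime
    assert pow(12, 96, 97) == 1   # 12 passes as well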
Algorithm and Implementation
Step-by-Step Algorithm
The Fermat primality test is implemented as a probabilistic algorithm that verifies whether an input integer n > 1 satisfies Fermat's Little Theorem for randomly selected bases, declaring n composite if it fails for any base or probably prime after a specified number of successful trials.[15]
The algorithm takes two inputs: an integer n > 1 to test for primality, and an integer k \geq 1 specifying the number of independent trials, with typical values of 10 to 20 used in cryptographic applications to achieve a low error probability (e.g., less than 2^{-20}).[15][16]
The following pseudocode outlines the core procedure, assuming efficient modular exponentiation for the power computation:
function FermatPrimalityTest(n, k):
    if n == 2 or n == 3:
        return "Prime"
    if n < 2 or n % 2 == 0:
        return "Composite"
    for i from 1 to k:
        a = random integer in [2, n-2]
        if gcd(a, n) > 1:
            return "Composite"
        r = a^(n-1) mod n    // using modular exponentiation
        if r != 1:
            return "Composite"
    return "Probably Prime"
This procedure first handles trivial cases, then performs k iterations: in each, a random base a is selected, its gcd with n is checked (an optimization to detect shared factors early), and a^{n-1} \mod n is computed; failure in any trial definitively identifies a composite, while success in all suggests probable primality.[15][2]
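For concreteness, the pseudocode translates directly into Python; this is a sketch, with the function name and default k = 20 chosen for illustration:

    import random
    from math import gcd

    def fermat_primality_test(n: int, k: int = 20) -> str:
        """Probabilistic Fermat test: 'Composite' is definitive;
        'Probably Prime' has error probability controlled by k."""
        if n in (2, 3):
            return "Prime"
        if n < 2 or n % 2 == 0:
            return "Composite"
        for _ in range(k):
            a = random.randrange(2, n - 1)   # random base in [2, n-2]
            if gcd(a, n) > 1:                # shared factor found early
                return "Composite"
            if pow(a, n - 1, n) != 1:        # Fermat condition fails
                return "Composite"
        return "Probably Prime"

    print(fermat_primality_test(97))  # Probably Prime
    print(fermat_primality_test(91))  # Composite (each trial detects 91 with probability 1/2)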
For simplicity in non-randomized variants or initial screening, fixed small bases such as 2, 3, and 5 can be used instead of random selection, though this reduces the probabilistic guarantees unless more bases are added.[17]
The output is either "Composite" (a deterministic declaration, as any failure proves compositeness) or "Probably Prime" (a probabilistic result, with error probability bounded by the choice of k and the proportion of misleading bases).[15]
Edge cases include n = 1 (not prime, rejected by the initial check), n = 2 (prime), and even numbers greater than 2 (composite, handled by the even-check).[15]
Computational Complexity
The computational complexity of the Fermat primality test is primarily determined by the modular exponentiation step, which computes a^{n-1} \mod n (or an equivalent form) for each chosen base a.
For a single trial, binary exponentiation requires O(\log n) modular multiplications, each of which takes O(\log^2 n) bit operations using standard multiplication algorithms, yielding an overall time complexity of O(\log^3 n).[18]
With k trials, the total time complexity becomes O(k \log^3 n).[18]
Employing advanced multiplication methods, such as those based on the fast Fourier transform, reduces the per-trial complexity to O(\log^{2+\epsilon} n) for any \epsilon > 0, or approximately O(\log^2 n \log \log n) in practice, resulting in O(k \log^2 n \log \log n) overall.[18]
The space complexity is O(\log n), as it involves storing the input number n and intermediate values of comparable bit length during computations.[18]
Compared to trial division, which requires O(\sqrt{n}) operations and becomes infeasible for large n (exponential time in \log n), the Fermat test offers polynomial-time performance, enabling efficient testing of 1024-bit numbers (approximately 10^{308}) in seconds on modern hardware.[19][3]
Although probabilistic, the test scales well for extremely large numbers up to 10^{1000} or beyond, as its polylogarithmic dependence on the input size supports practical use in big-integer arithmetic libraries.[3]
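As a rough empirical check of this scaling, one can time a batch of trials on a random 1024-bit odd integer using Python's arbitrary-precision arithmetic (the candidate and trial count here are illustrative, and timings vary by machine):

    import random
    import time

    n = random.getrandbits(1024) | (1 << 1023) | 1   # random odd 1024-bit integer
    start = time.perf_counter()
    for _ in range(20):                              # 20 Fermat trials
        a = random.randrange(2, n - 1)
        pow(a, n - 1, n)                             # one modular exponentiation
    print(f"20 trials: {time.perf_counter() - start:.3f} s")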
Limitations
Fermat Pseudoprimes
A Fermat pseudoprime to base a is a composite positive integer n such that \gcd(a, n) = 1 and a^{n-1} \equiv 1 \pmod{n}.[20][21] This property causes the number to pass the Fermat primality test for that specific base, despite not being prime, thereby illustrating a key limitation of the test as a probabilistic method. Such pseudoprimes arise when n divides a^{n-1} - 1 even though n is composite, often due to the multiplicative structure of its prime factors aligning in a way that satisfies the congruence for particular a.[21]
The smallest Fermat pseudoprime to base 2 is 341, which factors as 11 \times 31 and satisfies 2^{340} \equiv 1 \pmod{341}.[20][21] This example was first noted by Pierre-François Sarrus in the early 19th century, an early recognition of the phenomenon, though the formal concept of Fermat pseudoprimes was developed later in the number theory literature.[22]
Fermat pseudoprimes to a fixed base are relatively rare among composite numbers. For instance, the count of base-2 Fermat pseudoprimes up to x is bounded above by x^{1 - \frac{1}{2} \frac{\log \log \log x}{\log \log x}} for sufficiently large x.[23] Their density decreases significantly when multiple independent bases are used in the test; for example, while there are 78 base-2 Fermat pseudoprimes below 10^5, only a handful pass tests for both bases 2 and 3 in the same range, enhancing the test's reliability against false positives.[20][23]
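Both figures quoted above (the smallest example 341 and the count of 78 below 10^5) can be reproduced by a brute-force scan; the helper is_prime below uses simple trial division, adequate at this small scale:

    def is_prime(m: int) -> bool:
        """Trial division; fine for the small range scanned here."""
        if m < 2:
            return False
        d = 2
        while d * d <= m:
            if m % d == 0:
                return False
            d += 1
        return True

    # Odd composites n with 2^(n-1) ≡ 1 (mod n): base-2 Fermat pseudoprimes.
    # (Even n cannot qualify, since gcd(2, n) would exceed 1.)
    psp2 = [n for n in range(3, 10**5, 2)
            if not is_prime(n) and pow(2, n - 1, n) == 1]
    print(psp2[0], len(psp2))  # -> 341 78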
Carmichael Numbers and Stronger Flaws
Carmichael numbers represent a severe limitation of the Fermat primality test, as they are composite integers that satisfy the test's condition for every base coprime to them, unlike ordinary Fermat pseudoprimes which only do so for specific bases. A Carmichael number is defined as a composite positive integer n > 1 that is square-free and such that a^{n-1} \equiv 1 \pmod{n} for all integers a coprime to n.[24] This property is equivalent to Korselt's criterion, which states that n must be square-free and, for every prime p dividing n, p-1 divides n-1.[24] These numbers are named after the mathematician Robert D. Carmichael, who in 1910 constructed the first examples while studying properties related to Fermat's Little Theorem.[25]
The smallest Carmichael number is 561, which factors as 3 \times 11 \times 17 and satisfies Korselt's criterion since 2, 10, and 16 all divide 560.[25] For such n, the Fermat test declares n probably prime regardless of the base chosen (as long as it is coprime to n), because every qualifying base serves as a "liar" that masks the compositeness of n.[25] This universal failure renders the test completely unreliable for detecting Carmichael numbers when using random or single bases, potentially leading to misidentification in applications like prime generation. The connection between Carmichael numbers and the flaws in probabilistic primality testing, including the Fermat method, was explicitly analyzed by Paul Erdős in 1956, where he introduced techniques to construct and bound their distribution.[26]
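Korselt's criterion is straightforward to check mechanically; the sketch below factors a candidate by trial division (helper names are illustrative) and tests both conditions:

    def prime_factorization(n: int) -> dict:
        """Trial-division factorization, returned as {prime: exponent}."""
        factors, d = {}, 2
        while d * d <= n:
            while n % d == 0:
                factors[d] = factors.get(d, 0) + 1
                n //= d
            d += 1
        if n > 1:
            factors[n] = factors.get(n, 0) + 1
        return factors

    def is_carmichael(n: int) -> bool:
        """Korselt: n composite, square-free, and p-1 | n-1 for every prime p | n."""
        f = prime_factorization(n)
        if len(f) < 2:                  # primes (and prime powers) excluded
            return False
        return all(e == 1 and (n - 1) % (p - 1) == 0 for p, e in f.items())

    print(is_carmichael(561))  # -> True  (561 = 3 * 11 * 17)
    print(is_carmichael(341))  # -> False (341 = 11 * 31; 10 | 340 but 30 does not divide 340)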
Although infinitely many Carmichael numbers exist—a result proven by Alford, Granville, and Pomerance in 1994 using probabilistic constructions involving products of primes with controlled factors—they remain exceedingly rare.[27] Erdős established an upper bound showing that the count of Carmichael numbers up to n is at most O\left( \frac{n}{\exp(c \sqrt{\log n \log \log n})} \right) for some constant c > 0, far sparser than the density of primes, which is approximately n / \log n.[26] This scarcity implies that Carmichael numbers pose a low-probability but catastrophic risk for single-base Fermat tests, emphasizing the need for multiple bases or stronger variants to mitigate false positives.[25]
Variants and Extensions
Deterministic Fermat Testing
While the Fermat primality test is inherently probabilistic, using a fixed set of multiple bases, combined with trial division by small primes, reduces the probability of error and gives practical reliability for numbers below certain bounds. However, because of Carmichael numbers (composite numbers that pass the Fermat test for every base coprime to them), no fixed set of small bases can make the test deterministic for arbitrarily large n, as infinitely many Carmichael numbers with large prime factors exist. For truly deterministic testing with a fixed set of bases up to large bounds (e.g., 64 bits), stronger variants like the Miller-Rabin test are required, which modify the condition checked in each iteration and thereby detect Carmichael numbers.
The advantages of using multiple bases in the Fermat test include lower error probability compared to a single base, while keeping computational cost low with O(k · log^3 n) time for k bases, dominated by modular exponentiation. Nonetheless, for applications needing certainty, such as cryptographic prime generation, the limitations of the Fermat test motivate reliance on more robust methods.
Historical development of probabilistic tests based on Fermat's Little Theorem dates to the 1970s, with empirical studies showing that multiple random or fixed bases improve confidence, though full determinism remains elusive without additional techniques.
Relation to Other Primality Tests
The Fermat primality test, while simple and computationally efficient, is generally weaker than the Miller-Rabin test, as it can be fooled by Carmichael numbers (composite numbers that pass the test for all bases coprime to the number), whereas Miller-Rabin detects such composites using the concept of strong pseudoprimes and achieves an error probability of less than 4^{-k} after k iterations.[16][28] Despite this, the Fermat test is faster per iteration due to its reliance on a single modular exponentiation, making it suitable for initial screening before applying more robust methods like Miller-Rabin.[16][29]
In practice, the Fermat test often serves as a preliminary filter in primality testing pipelines, quickly eliminating obvious composites before proceeding to Miller-Rabin or the deterministic AKS test, thereby reducing overall computational load for large numbers.[3][16] Historically, the Fermat test predates modern probabilistic alternatives, such as the Solovay-Strassen test introduced in 1977, which builds on Euler's criterion for improved reliability but at higher complexity; today, the Fermat method occupies a niche role overshadowed by these advancements.[12][3]
Its primary strengths lie in ease of implementation—requiring only basic modular arithmetic—and utility for educational purposes, where understanding the foundational link to Fermat's Little Theorem is emphasized without the intricacies of stronger tests.[3][16] The Fermat test is preferred in scenarios involving very large numbers where absolute certainty is secondary to speed, such as rapid probabilistic assessments in early-stage cryptographic key exploration.[28][29] For smaller numbers, combinations of the Fermat test with trial division can provide quick checks in constrained ranges.[3]
Applications
Use in Software and Libraries
The Great Internet Mersenne Prime Search (GIMPS) employs the Fermat probable prime test as the primary initial primality check for Mersenne numbers of the form 2^p - 1, using base 3 to avoid the trivial passes that would occur with base 2 for such candidates.[30] Introduced in 2018, this test replaced earlier deterministic methods for first-pass screening, allowing distributed volunteers to efficiently discard composites before more intensive Lucas-Lehmer verification on promising exponents.[31] Its computational efficiency, requiring only a single modular exponentiation per trial, aligns well with the project's need to process vast search spaces across volunteer hardware.[32]
In the 1990s, early versions of the Pretty Good Privacy (PGP) software relied on the Fermat test for probabilistic primality assessment during the generation of large random primes for cryptographic keys, reflecting the era's emphasis on simple, fast implementations in standalone tools.[33]
Role in Cryptography and Prime Generation
The Fermat primality test serves as an efficient initial filter in the generation of large primes for RSA key pairs, where candidate odd integers are subjected to multiple rounds of the test to identify probable primes before more computationally intensive deterministic or stronger probabilistic verification. This approach accelerates key generation by quickly eliminating obvious composites, as recommended in standards for integer factorization-based cryptography.[34] In implementations adhering to OpenPGP standards, such as those in GnuPG (using Libgcrypt), the Fermat test is employed as an initial filter during RSA key creation, balancing speed and security for email encryption workflows.[35]
In distributed computing projects for prime discovery, like the Great Internet Mersenne Prime Search (GIMPS), the Fermat test functions as a preliminary sieving mechanism within the Prime95 software to assess candidate Mersenne numbers (of the form 2^p - 1) for probable primality. Since 2018, this probable prime (PRP) test has been the first-stage filter, discarding composites rapidly before proceeding to the definitive Lucas-Lehmer primality test, thereby optimizing resource allocation across volunteer computing networks.[30]
For cryptographic security, the Fermat test's error rate, the chance that a composite number passes as probable prime, can be bounded above by 2^{-k} after k independent trials with random bases coprime to the candidate, for non-Carmichael composites. Performing 40 trials yields an error probability of at most 2^{-40} \approx 9 \times 10^{-13}, which provides sufficient assurance for many pre-2020 cryptographic applications when the test is chained with trial division or other sieves, though it falls short of the 2^{-80} threshold often desired for long-term security without supplementary methods.[12] However, because Carmichael numbers fool the test with probability 1 over coprime bases, modern protocols typically pair it with stronger tests such as Miller-Rabin to mitigate risks.[8]
As of 2025, the Fermat test remains a supplementary tool in prime generation for cryptography, used as a quick initial filter in libraries like Libgcrypt (underlying GnuPG), but largely supplanted by more reliable alternatives like optimized Miller-Rabin variants in high-security contexts. It persists in embedded systems and legacy implementations where minimal resource use is paramount, serving as a quick rejector for non-primes in constrained environments.[35][36]